CN110651242B - Apparatus, method and graphical user interface for touch input processing - Google Patents

Info

Publication number
CN110651242B
Authority
CN
China
Prior art keywords
touch
gesture
view
touch input
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201880031229.8A
Other languages
Chinese (zh)
Other versions
CN110651242A (en)
Inventor
B·D·尼罗
C·K·托马斯
D·瓦格纳
M·T·特尼尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201770371A (external priority; DK179979B1)
Application filed by Apple Inc
Priority to CN201911262546.3A (CN111026306B)
Publication of CN110651242A
Application granted
Publication of CN110651242B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 - Drag-and-drop
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 - Indexing scheme relating to G06F3/048
    • G06F 2203/04808 - Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An electronic device receives, at a first application, information describing a first touch detected at a location on a touch-sensitive surface corresponding to a respective user interface object while displaying a user interface for the first application. In accordance with a determination that the first touch meets predefined criteria associated with initiating a respective type of interface operation, the device initiates a user interface operation of the respective type involving the respective user interface object. The first application receives an instruction to ignore the first touch with respect to user interface operations other than the respective type of interface operation, and the first application then ignores the first touch with respect to those other operations. In some implementations, the respective type of interface operation is a drag operation for dragging an object in or from a view of the first application.

Description

Apparatus, method and graphical user interface for touch input processing
Technical Field
This relates generally to electronic devices having touch-sensitive surfaces, including but not limited to electronic devices having touch-sensitive surfaces that use heuristics and other gesture recognition techniques to distinguish between various touch inputs including drag-and-drop gestures, long-press gestures, swipe gestures, and the like.
Background
The use of touch-sensitive surfaces as input devices for computers and other electronic computing devices has grown significantly in recent years. Exemplary touch sensitive surfaces include touchpads and touch screen displays. Such surfaces are widely used to manipulate user interface objects on a display.
Exemplary manipulations include dragging and dropping an object or user interface element, such as moving information from one application to another, or from one portion of an application or data structure to another. Other manipulations include selecting user interface objects by touching them in a predefined manner, launching an application by tapping or otherwise interacting with a corresponding application launch icon or other user interface element, and accessing application features by performing swipes, drags, taps, pinches, long presses, deep presses (e.g., with an intensity above a threshold), and other touch inputs on respective user interface elements of an application's user interface.
Disambiguation between various touch-based gestures, such as drag-and-drop gestures, spring-loaded gestures (e.g., opening an object to see an element of the object), swipe gestures, and long-press gestures, is very important so that a device or system performs actions corresponding to the user's intent. Such disambiguation can be challenging given the sometimes subtle differences between touch-based gestures.
Although gesture recognizers with different event or gesture definitions have been used to assist in such disambiguation, distinguishing touch-based drag-and-drop gestures and spring-loaded gestures (e.g., opening an object to see an element of the object), on the one hand, from swipe gestures and long-press gestures, on the other hand, continues to be challenging and sometimes cumbersome and inefficient. Such inefficiency may involve the mechanisms used for disambiguation and, once disambiguation (or at least a portion of it) has been achieved, the delivery of touch input information to various software mechanisms. Furthermore, as devices with touch-based input mechanisms provide more and more features, the number of active gesture recognizers for at least some applications or application views has grown to the point where only a portion of those gesture recognizers are needed to process a given user input, making such mechanisms cumbersome and inefficient.
Disclosure of Invention
Accordingly, there is a need for electronic devices having faster, more efficient methods and interfaces for handling drag-and-drop gestures and the corresponding drag-and-drop operations (and, optionally, spring-loaded operations). Such methods and interfaces optionally complement or replace conventional methods for touch input processing and for gesture recognition and processing. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user and produce a more efficient human-machine interface. For battery-operated devices, such methods and interfaces conserve power and increase the time between battery charges.
The above-described drawbacks and other problems associated with user interfaces of electronic devices having touch-sensitive surfaces may be reduced or eliminated with the disclosed devices. In some embodiments, the device is a desktop computer. In some embodiments, the device is portable (e.g., a notebook, tablet, or handheld device). In some embodiments, the device is a personal electronic device (e.g., a wearable electronic device such as a watch). In some embodiments, the device has a touch pad. In some implementations, the device has a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the device has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI primarily through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, these functions optionally include image editing, drawing, presenting, word processing, spreadsheet creation, game playing, telephoning, video conferencing, emailing, instant messaging, fitness support, digital photography, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
According to some embodiments, a method is performed at an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface. The method includes: displaying, on the display, a user interface for a first application that includes a plurality of user interface objects, and, while displaying the user interface for the first application, receiving, at the first application, information (e.g., one or more touch events) describing a first touch detected at a location on the touch-sensitive surface corresponding to a respective user interface object of the plurality of user interface objects in the first application. The method includes, in response to receiving the information describing the first touch, and in accordance with a determination that the first touch meets predefined criteria associated with initiating a respective type of interface operation: initiating a user interface operation of the respective type involving the respective user interface object; and, while the first touch continues to be detected on the touch-sensitive surface and while the user interface operation involving the respective user interface object continues, receiving, at the first application, an instruction to ignore the first touch with respect to user interface operations other than the respective type of interface operation, and ignoring, by the first application, the first touch with respect to user interface operations other than the respective type of interface operation. In some embodiments, the respective type of interface operation is a drag operation for dragging an object in or from a view of the first application; or a drag-and-drop operation in which the dragged object is dropped into an application or application view different from the one from which it was dragged; or a spring-loaded operation in which a view or container in which content may be stored is opened, or a control of the view or object is activated.
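By way of illustration only (this is not the claimed implementation), the following Swift sketch shows how an application-side view might hand a touch that meets drag-start criteria over to a drag operation and then ignore that touch for every other kind of user interface operation. The `meetsDragStartCriteria` and `beginDragSession` helpers are hypothetical placeholders.

```swift
import UIKit

/// Illustrative only: once a touch is claimed for a drag operation, the view
/// ignores it for every other kind of user interface operation.
final class DraggableListView: UIView {

    private var dragTouch: UITouch?          // the touch handed to the drag operation

    /// Hypothetical criteria: the touch has been essentially stationary over a
    /// draggable element for a minimum duration.
    private func meetsDragStartCriteria(_ touch: UITouch) -> Bool {
        return touch.phase == .stationary    // placeholder heuristic only
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            if touch === dragTouch {
                continue                     // ignore: this touch now belongs to the drag
            }
            if meetsDragStartCriteria(touch) {
                dragTouch = touch
                beginDragSession(for: touch) // hypothetical hand-off to a drag session
                continue
            }
            // Normal handling (scrolling, selection, etc.) for all other touches.
        }
    }

    private func beginDragSession(for touch: UITouch) {
        // Hypothetical: register the touch with a system-managed drag session.
    }
}
```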
According to some embodiments, a method is performed at an electronic device having a display and a touch-sensitive surface. The method includes displaying, on the display, a user interface for a first application that includes a plurality of user interface objects; and, while displaying the user interface for the first application, receiving, at the first application, information describing a first touch. The first touch is detected on the touch-sensitive surface at a location corresponding to a respective user interface object of the plurality of user interface objects in the user interface for the first application and moves outside of the area corresponding to the user interface for the first application into an area corresponding to a user interface for a second application. The method further includes, in response to receiving the information describing the first touch: in accordance with a determination that the first touch does not satisfy predefined criteria associated with initiating a respective type of interface operation, continuing to provide information describing the first touch to the first application, even after the first touch moves outside of the area corresponding to the user interface for the first application and into the area corresponding to the user interface for the second application. The method further includes, in accordance with a determination that the first touch meets the predefined criteria associated with initiating the respective type of interface operation: initiating a user interface operation of the respective type involving the respective user interface object; and, while continuing to detect the first touch on the touch-sensitive surface and while the user interface operation involving the respective user interface object continues: receiving, at the first application, an instruction to ignore the first touch with respect to user interface operations other than the respective type of interface operation; ignoring, by the first application, the first touch with respect to user interface operations other than the respective type of interface operation; providing information describing the first touch to the first application while the first touch is within the area corresponding to the user interface for the first application; and switching to providing information describing the first touch to the second application when the first touch moves outside of the area corresponding to the user interface for the first application and into the area corresponding to the user interface for the second application.
According to some embodiments, a method is performed at an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface. The method includes: displaying, on the display, a user interface that includes a plurality of views, including a first view associated with a first set of one or more standard gesture recognizers for processing touch inputs directed to the first view and a second view associated with a second set of one or more standard gesture recognizers for processing touch inputs directed to the second view, wherein one or more of the plurality of views are associated with a first interaction identifier (e.g., a drag, drop, or spring-loading interaction identifier) that indicates that a first set of one or more supplemental gesture recognizers can be added to the corresponding view. The method further includes detecting, via the touch-sensitive surface, a touch input at a location on the touch-sensitive surface corresponding to a respective view of the plurality of views; and, in response to detecting the touch input via the touch-sensitive surface: in accordance with a determination that the touch input meets supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier, adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view, and processing the touch input with the one or more supplemental gesture recognizers and the one or more standard gesture recognizers associated with the respective view. The method further includes, in accordance with a determination that the touch input does not meet the supplemental gesture recognizer addition criteria, processing the touch input at the respective view without adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view.
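A rough sketch of the gating described above, for illustration only: supplemental gesture recognizers are added to a view only when the touch input meets the addition criteria and the view carries the matching interaction identifier. The `InteractionIdentifier` enum and `supplementalDragRecognizers` factory are assumptions, not part of the patent or of any system API.

```swift
import UIKit

/// Hypothetical identifiers indicating which supplemental recognizers a view may receive.
enum InteractionIdentifier {
    case drag, drop, springLoading
}

/// Illustrative sketch: drag-related supplemental gesture recognizers are attached
/// only when the touch meets the addition criteria *and* the view carries the
/// matching interaction identifier.
func addSupplementalRecognizersIfNeeded(to view: UIView,
                                        identifiers: Set<InteractionIdentifier>,
                                        touchMeetsAdditionCriteria: Bool) {
    guard touchMeetsAdditionCriteria, identifiers.contains(.drag) else {
        // The touch is processed by the view's standard recognizers only,
        // keeping the per-view recognizer count low.
        return
    }
    for recognizer in supplementalDragRecognizers() {
        view.addGestureRecognizer(recognizer)   // processed alongside the standard recognizers
    }
}

/// Hypothetical factory for the drag-related supplemental recognizers.
func supplementalDragRecognizers() -> [UIGestureRecognizer] {
    return [UILongPressGestureRecognizer()]     // placeholder stand-in
}
```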
According to some embodiments, a method is performed at an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface. The method includes displaying a user interface of an application on the display; and, while displaying the user interface of the application, detecting a user input corresponding to a portion of the user interface of the application that is associated with a plurality of gesture recognizers, wherein: a first set of one or more standard gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by the application; a second set of one or more supplemental gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by a system process (e.g., an application-independent process); and a first failure requirement involving a first standard gesture recognizer and a first supplemental gesture recognizer is associated with (e.g., assigned to) the portion of the user interface by the system process. The method also includes, in response to detecting the user input, processing the user input in accordance with the first standard gesture recognizer, the first supplemental gesture recognizer, and the first failure requirement. For example, in some implementations, the first set of one or more standard gesture recognizers and the second set of one or more supplemental gesture recognizers are arranged in a gesture recognition hierarchy, and the first failure requirement includes a requirement that a first respective gesture recognizer fail in order for a second respective gesture recognizer to successfully recognize a gesture, where the first respective gesture recognizer is at a higher level in the gesture recognition hierarchy than the second respective gesture recognizer.
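In UIKit terms, one way such a failure requirement can be expressed between recognizers that do not hold references to each other is the `gestureRecognizer(_:shouldRequireFailureOf:)` delegate callback. The sketch below is illustrative only and is not the implementation claimed here; the `FailureRequirementCoordinator` class is an assumed name.

```swift
import UIKit

/// Illustrative delegate that makes a standard recognizer (supplied by the application)
/// wait for a supplemental recognizer (installed by a system process) to fail,
/// mirroring the hierarchy-based failure requirement described above.
final class FailureRequirementCoordinator: NSObject, UIGestureRecognizerDelegate {

    /// The supplemental recognizer, assumed to sit at a higher level of the hierarchy.
    private let supplementalRecognizer: UIGestureRecognizer

    init(supplementalRecognizer: UIGestureRecognizer) {
        self.supplementalRecognizer = supplementalRecognizer
        super.init()
    }

    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRequireFailureOf otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        // The standard recognizer (whose delegate this is) may only succeed once the
        // higher-level supplemental recognizer has failed to recognize the touch.
        return otherGestureRecognizer === supplementalRecognizer
    }
}
```

Assigning an instance of this coordinator as the delegate of an application-supplied recognizer expresses the failure requirement without the application needing direct knowledge of the system-installed recognizer's configuration.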
According to some embodiments, an electronic device includes a display, a touch-sensitive surface, a memory, one or more processors, one or more programs, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface; one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing or causing performance of the operations of any of the methods described herein.
According to some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contact with the touch-sensitive surface, cause the device to perform or cause to be performed the operations of any of the methods described herein.
According to some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, optionally one or more sensors to detect intensity of contact with the touch-sensitive surface, a memory, and one or more processors executing one or more programs stored in the memory, includes one or more elements displayed in any of the methods described herein, the one or more elements updated in response to an input, as described in any of the methods described herein.
According to some embodiments, an electronic device includes: a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contact with the touch-sensitive surface; and means for performing or causing the operations of any one of the methods described herein.
According to some embodiments, an information processing apparatus for use in an electronic device having a display and a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface, comprises means for performing or causing to be performed the operations of any of the methods described herein.
Accordingly, an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors for detecting the intensity of contact with the touch-sensitive surface is provided with faster, more efficient methods and interfaces for processing touch-based inputs, including drag-and-drop gestures, long-press gestures, and other gestures that may be effectively recognized for the same application or application view, thereby improving the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace conventional methods for processing touch inputs in such electronic devices.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
Fig. 1 is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3A is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 3B is a block diagram illustrating exemplary components for event processing according to some embodiments.
FIG. 3C is a block diagram illustrating exemplary classes and instances of gesture recognizers according to some embodiments.
Fig. 3D is a block diagram illustrating event information flow according to some embodiments.
Fig. 4A illustrates an exemplary user interface for an application menu on a portable multifunction device in accordance with some embodiments.
Fig. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 4C-4E illustrate examples of dynamic intensity thresholds according to some embodiments.
Fig. 5A-5R illustrate exemplary user interfaces for drag-and-drop gestures and operations, as well as spring-loaded gestures and operations, according to some embodiments.
FIG. 6A illustrates touch event generation, phases, and cancellation, and drag event generation and phases, during performance of a drag gesture that spans from a user interface area of a first application or application view to a user interface area of a second application or application view, in accordance with some embodiments.
FIG. 6B illustrates touch event generation and phases, according to some embodiments, while a touch moves across a user interface area of a first application or application view to a user interface area of a second application or application view without the touch being separated.
FIG. 6C illustrates a mechanism for detecting a touch, detecting the start of a drag gesture in a source process, separating the touch, associating a session ID with the touch, and providing the touch to a target process, in accordance with some embodiments.
Fig. 6D and 6E illustrate gesture recognition hierarchies before and after adding one or more supplemental gesture recognizers to an application view according to some embodiments.
Fig. 7A-7F are flowcharts illustrating methods of processing touch inputs recognized as drag gestures, according to some embodiments.
Fig. 8A-8E are flow diagrams illustrating a method of dynamically adding a supplemental gesture recognizer to an application or application view in response to detecting touch input when specified criteria are met, according to some embodiments.
Fig. 9A-9D are flowcharts illustrating methods of establishing a failure dependency between a first set of standard gesture recognizers associated with a portion of a user interface of an application and a second set of supplemental gesture recognizers associated with the same portion of the user interface of the application, according to some embodiments.
Detailed Description
Many electronic devices have a graphical user interface and a touch-sensitive surface, whether integrated with a display or otherwise used to receive touch input from a user. As the number of gestures based on touch input grows, disambiguation between the various possible gestures becomes both more difficult and more important for successful and efficient use of such electronic devices. In practice, multiple gestures that may be used in the same user interface region may begin with the same initial touch characteristics, such as a touch that remains stationary (e.g., moves less than a predefined distance or amount) for at least a predefined initial amount of time. One mechanism that helps disambiguate between such gestures is to use gesture recognizers, each for recognizing a respective gesture or gesture type, and to establish failure requirements under which some gesture recognizers must fail to recognize a user input before other gesture recognizers are allowed to successfully recognize that user input as a particular gesture. For example, when a user interface region accepts both a long-press input for initiating a first operation and a drag input for initiating a drag-and-drop operation, the gesture recognizer for recognizing the long-press gesture may be made dependent on the failure of a drag-start gesture recognizer for recognizing the start of the drag-and-drop operation.
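For a concrete, hedged illustration of this kind of failure requirement, UIKit's `require(toFail:)` can make a long-press recognizer wait for another recognizer to fail; in the sketch below a second long-press recognizer stands in for a hypothetical drag-start recognizer, so this is an analogy rather than the mechanism described in these embodiments.

```swift
import UIKit

// Illustrative sketch: the long-press recognizer that triggers the first operation
// may only fire after the recognizer that detects the start of a drag has failed
// to recognize the touch.
func installRecognizers(on view: UIView, target: Any, longPressAction: Selector, dragStartAction: Selector) {
    // Stand-in for a drag-start recognizer; a real one would match the specific
    // touch pattern that begins a drag-and-drop operation.
    let dragStartRecognizer = UILongPressGestureRecognizer(target: target, action: dragStartAction)

    let longPressRecognizer = UILongPressGestureRecognizer(target: target, action: longPressAction)

    // Failure requirement: the long press waits for the drag-start recognizer to fail.
    longPressRecognizer.require(toFail: dragStartRecognizer)

    view.addGestureRecognizer(dragStartRecognizer)
    view.addGestureRecognizer(longPressRecognizer)
}
```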
In another aspect, certain types of gesture recognizers may be added by an application to a user interface region for the application, while supplemental gesture recognizers may be added by a system process to the same user interface region when certain criteria are met. In this way, the number of gesture recognizers established for a particular user interface region may be kept relatively low until a supplemental gesture recognizer is or may be required, as indicated by meeting the criteria described above. For example, gesture recognizers for handling drag and drop operations may be added to the user interface area as supplemental gesture recognizers when they are or may be needed.
In another aspect, interaction identifiers are associated with applications or application views to indicate which supplemental gesture recognizers, if any, can be added to those applications or application views. For example, only applications or application views that can participate in drag operations are associated with the interaction identifier for drag interactions. A second, different interaction identifier may be used for drop interactions, because some application regions may be the source of a drag-and-drop operation but not the target, and vice versa. A third interaction identifier may be used for spring-loaded interactions, because some, but not all, application regions may include containers that can be opened, or controls that can be activated, using spring-loaded gestures. In this way, only applications and views that can participate in certain types of gestures or operations, such as drag-and-drop gestures and operations or spring-loaded gestures and operations, are associated with the corresponding interaction identifiers. Using such interaction identifiers, and thereby controlling which application regions have supplemental gesture recognizers added to them, improves the operating efficiency of the electronic device and also improves the responsiveness of such devices to user inputs by reducing the number of gesture recognizers that process those inputs.
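By way of analogy, UIKit's public drag-and-drop API exposes a similar opt-in model: a view participates in drag, drop, or spring-loaded behavior only if the corresponding interaction object is attached to it. The sketch below illustrates that opt-in pattern; it is not the internal interaction-identifier mechanism described in these embodiments.

```swift
import UIKit

/// Minimal illustration of the opt-in model: a view participates in drag, drop,
/// or spring-loading only if the corresponding interaction is attached to it.
func configureInteractions(source: UIView,
                           target: UIView,
                           dragDelegate: UIDragInteractionDelegate,
                           dropDelegate: UIDropInteractionDelegate) {
    // The source view can originate drag operations but does not accept drops.
    source.addInteraction(UIDragInteraction(delegate: dragDelegate))

    // The target view accepts drops but does not originate drags.
    target.addInteraction(UIDropInteraction(delegate: dropDelegate))

    // A spring-loaded interaction lets content dragged over the target activate it
    // (e.g., open a container) without being dropped.
    target.addInteraction(UISpringLoadedInteraction { _, context in
        if context.state == .activated {
            // Open the container or activate the control here.
        }
    })
}
```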
Fig. 1, 2, and 3A illustrate exemplary devices; Fig. 3B-3D illustrate event and gesture recognition and touch event delivery mechanisms; and Fig. 4A-4B and 5A-5R illustrate exemplary user interfaces for drag-and-drop operations. Fig. 6A-6C illustrate touch event and drag event generation and management mechanisms. Fig. 7A-7F illustrate a flowchart of a method of processing touch input recognized as a drag gesture. Fig. 8A-8E illustrate a flowchart of a method of dynamically adding a supplemental gesture recognizer to an application or application view. Fig. 9A-9D illustrate a flowchart of a method of establishing a failure dependency between a first set of standard gesture recognizers associated with a portion of a user interface of an application and a second set of supplemental gesture recognizers associated with the same portion of the user interface of the application. The user interfaces in Fig. 5A-5R and the mechanisms shown in Fig. 3B-3D and 6A-6C are used to illustrate the processes in Fig. 7A-7F, 8A-8E, and 9A-9D.
Exemplary apparatus
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. Numerous specific details are set forth in the following detailed description in order to provide a thorough understanding of the various described embodiments. It will be apparent, however, to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements in some cases, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first contact may be named a second contact, and similarly, a second contact may be named a first contact without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact unless the context clearly indicates otherwise.
The terminology used in the description of the various illustrated embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is, optionally, interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined ..." or "if [a stated condition or event] is detected" is, optionally, interpreted to mean "upon determining ..." or "in response to determining ..." or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and related processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also includes other functions, such as PDA and/or music player functions. Exemplary embodiments of the portable multifunction device include, but are not limited to, the iPhone®, iPod touch®, and iPad® devices from Apple Inc. (Cupertino, California). Other portable electronic devices having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads), such as laptop or tablet computers, are optionally used. It should also be appreciated that, in some embodiments, the device is not a portable communication device, but rather a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: note taking applications, drawing applications, presentation applications, word processing applications, website creation applications, disk editing applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, fitness support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
The various applications executing on the device optionally use at least one generic physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or changed for different applications and/or within the respective applications. In this way, the common physical architecture of the devices (such as the touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and transparent to the user.
Attention is now directed to embodiments of a portable device having a touch sensitive display. Fig. 1 is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes referred to as a "touch screen" for convenience and is sometimes referred to simply as a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external ports 124. The apparatus 100 optionally includes one or more optical sensors 164. The device 100 optionally includes one or more intensity sensors 165 for detecting the intensity of a contact on the device 100 (e.g., a touch-sensitive surface, such as the touch-sensitive display system 112 of the device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touch pad 355 of device 300). These components optionally communicate via one or more communication buses or signal lines 103.
As used in this specification and in the claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., the housing), or a displacement of a component relative to the device's center of mass, that will be detected by a user with the user's sense of touch. For example, in situations where the device or a component of the device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other portion of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or touch pad) is optionally interpreted by a user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation, such as a "down click" or "up click," even when the physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement does not move. As another example, movement of the touch-sensitive surface is optionally interpreted or sensed by a user as "roughness" of the touch-sensitive surface, even when the smoothness of the touch-sensitive surface is unchanged. While such interpretations of touch by a user will be subject to the user's individualized sensory perception, many sensory perceptions of touch are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., a "down click," an "up click," "roughness"), unless stated otherwise, the generated haptic output corresponds to a physical displacement of the device or a component thereof that will generate that sensory perception for a typical (or average) user.
It should be understood that the device 100 is merely one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 1 are implemented in hardware, software, firmware, or any combination thereof (including one or more signal processing circuits and/or application specific integrated circuits).
Memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 102 by other components of the device 100, such as the one or more CPUs 120 and the peripheral interface 118, is optionally controlled by a memory controller 122.
The peripheral interface 118 may be used to couple input and output peripherals of the device to the memory 102 and the one or more CPUs 120. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data.
In some embodiments, peripheral interface 118, one or more CPUs 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
The RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and the like. RF circuitry 108 optionally communicates via wireless communication with networks such as the Internet (also known as the World Wide Web (WWW)), intranets, and/or wireless networks such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), and with other devices. The wireless communication optionally uses any of a variety of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for email (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to speaker 111. The speaker 111 converts electrical signals into sound waves that are audible to humans. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuit 110 and removable audio input/output peripherals such as output-only headphones or a headset having both an output (e.g., a monaural or binaural) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as the touch-sensitive display system 112 and other input or control devices 116, to the peripheral device interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. One or more input controllers 160 receive electrical signals from or transmit electrical signals to other input or control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and the like. In some alternative implementations, one or more input controllers 160 are optionally coupled to (or not coupled to) any of the following: a keyboard, an infrared port, a USB port, a stylus, and/or a pointing device such as a mouse. One or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
The touch sensitive display system 112 provides an input interface and an output interface between the device and the user. The display controller 156 receives electrical signals from the touch sensitive display system 112 and/or transmits electrical signals to the touch sensitive display system 112. The touch sensitive display system 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object. As used herein, the term "affordance" is a user-interactive graphical user interface object (e.g., a graphical user interface object configured to respond to input directed to the graphical user interface object). Examples of user interactive graphical user interface objects include, but are not limited to, buttons, sliders, icons, selectable menu items, switches, hyperlinks, or other user interface controls.
The touch-sensitive display system 112 has a touch-sensitive surface, sensor, or set of sensors that receives input from a user based on haptic and/or tactile contact. The touch-sensitive display system 112 and the display controller 156 (along with any associated modules and/or sets of instructions in the memory 102) detect contact (and any movement or interruption of the contact) on the touch-sensitive display system 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on the touch-sensitive display system 112. In one exemplary embodiment, the point of contact between the touch sensitive display system 112 and the user corresponds to a user's finger or stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or interruption thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In one exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod touch®, and iPad® from Apple Inc. (Cupertino, California).
The touch sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some implementations, the touch screen video resolution exceeds 400dpi (e.g., 500dpi, 800dpi, or greater). The user optionally uses any suitable object or appendage, such as a stylus, finger, or the like, to contact the touch sensitive display system 112. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, the device 100 optionally includes a touch pad (not shown) for activating or deactivating specific functions in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike a touch screen, does not display visual output. The touch pad is optionally a touch-sensitive surface separate from the touch-sensitive display system 112 or an extension of the touch-sensitive surface formed by the touch screen.
The apparatus 100 also includes a power system 162 for powering the various components. The power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The apparatus 100 optionally further comprises one or more optical sensors 164. FIG. 1 shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The one or more optical sensors 164 optionally include a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The one or more optical sensors 164 receive light projected through the one or more lenses from the environment and convert the light into data representing an image. In conjunction with imaging module 143 (also referred to as a camera module), one or more optical sensors 164 optionally capture still images and/or video. In some embodiments, the optical sensor is located on the back of the device 100 opposite the touch sensitive display system 112 on the front of the device, enabling the touch screen to be used as a viewfinder for still image and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device to acquire an image of the user (e.g., for self-timer shooting, for video conferencing while the user views other video conference participants on a touch screen, etc.).
The apparatus 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1 shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The one or more contact strength sensors 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other strength sensors (e.g., sensors for measuring force (or pressure) of a contact on a touch-sensitive surface). One or more contact strength sensors 165 receive contact strength information (e.g., pressure information or a surrogate for pressure information) from the environment. In some implementations, at least one contact intensity sensor is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on a rear of the device 100 opposite the touch sensitive display system 112 located on a front of the device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1 shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled to the input controller 160 in the I/O subsystem 106. In some implementations, the proximity sensor turns off and disables the touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
The device 100 optionally further comprises one or more tactile output generators 167. FIG. 1 shows a haptic output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. The tactile output generator 167 optionally includes one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating components (e.g., components that convert electrical signals into tactile output on a device). The haptic output generator 167 receives haptic feedback generation instructions from the haptic feedback module 133 and generates a haptic output on the device 100 that is capable of being perceived by a user of the device 100. In some embodiments, at least one tactile output generator is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., inward/outward of the surface of device 100) or laterally (e.g., backward and forward in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on a rear of the device 100 opposite the touch sensitive display system 112 located on a front of the device 100.
The device 100 optionally further includes one or more accelerometers 168. Fig. 1 shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled with input controller 160 in I/O subsystem 106. In some implementations, information is displayed in a portrait view or a landscape view on a touch screen display based on analysis of data received from the one or more accelerometers. The device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to the one or more accelerometers 168 for obtaining information regarding the position and orientation (e.g., longitudinal or lateral) of the device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a haptic feedback module (or instruction set) 133, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application program (or instruction set) 136. Further, in some embodiments, memory 102 stores device/global internal state 157, as shown in fig. 1 and 3A. The device/global internal state 157 includes one or more of the following: an active application state indicating which applications (if any) are currently active; display status indicating what applications, application views, or other information occupy various areas of the touch-sensitive display system 112; sensor status, including information obtained from various sensors of the device and other input or control devices 116; and position and/or orientation information about the position and/or pose of the device; and optionally other status information.
Operating system 126 (e.g., iOS, darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general-purpose system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124 and also includes various software components for processing data received by the RF circuitry 108 and/or the external ports 124. The external port 124 (e.g., Universal Serial Bus (USB), FireWire, etc.) is adapted to be coupled directly to other devices or indirectly via a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPhone and iPod devices from Apple Inc. (Cupertino, California). In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone and iPod devices from Apple Inc. (Cupertino, California).
The contact/motion module 130 optionally detects contact with the touch-sensitive display system 112 (in conjunction with the display controller 156) and with other touch-sensitive devices (e.g., a touchpad or physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to contact detection (e.g., by a finger or stylus), such as determining whether a contact has occurred (e.g., detecting a finger press event), determining the intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger drag events), and determining whether the contact has ceased (e.g., detecting a finger lift event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining a speed (magnitude), a velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., single-finger contacts or stylus contacts) or to simultaneous multi-point contacts (e.g., "multi-touch"/multi-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, gestures are optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at an icon location). As another example, detecting a finger swipe gesture on a touch-sensitive surface includes detecting a finger press event, then detecting one or more finger drag events, and then detecting a finger lift (lift off) event. Similarly, taps, swipes, drags, and other gestures of the stylus are optionally detected by detecting a particular contact pattern of the stylus.
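As a rough illustration of detecting gestures from contact patterns, the following Swift sketch classifies a finished contact as a tap or a swipe based on its sequence of sub-events. The enum cases, the slop distance, and the function names are illustrative assumptions, not part of the disclosed embodiments.

```swift
import Foundation
import CoreGraphics

// Hypothetical sub-events reported for a single contact; the names are
// illustrative, not the patent's data structures.
enum ContactSubEvent {
    case fingerDown(location: CGPoint, time: TimeInterval)
    case fingerDrag(location: CGPoint, time: TimeInterval)
    case fingerUp(location: CGPoint, time: TimeInterval)
}

enum RecognizedGesture { case tap, swipe, none }

// Classify a finished contact by its pattern of sub-events: a finger-down
// followed by a finger-up at (almost) the same location is a tap; a finger-down,
// one or more drags, and then a finger-up is a swipe.
func classify(_ events: [ContactSubEvent], slop: CGFloat = 10) -> RecognizedGesture {
    guard case let .fingerDown(start, _)? = events.first,
          case let .fingerUp(end, _)? = events.last else { return .none }
    let sawDrag = events.contains { event -> Bool in
        if case .fingerDrag = event { return true }
        return false
    }
    let dx = end.x - start.x
    let dy = end.y - start.y
    let distance = (dx * dx + dy * dy).squareRoot()
    if !sawDrag && distance <= slop { return .tap }   // lift-off near the press location
    if sawDrag { return .swipe }                      // movement between press and lift-off
    return .none
}
```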
In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger press event and the finger lift event, but is independent of the intensity of the finger contact between the finger press event and the finger lift event. In some embodiments, in accordance with a determination that the length of time between the finger press event and the finger lift event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), a tap gesture is detected regardless of whether the intensity of the finger contact during the tap reaches a given intensity threshold (greater than a nominal contact detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require the characteristic intensity of the contact to meet a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture is typically required to meet the nominal contact detection intensity threshold, below which no contact is detected, in order for a finger press event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. In cases where the device is capable of detecting a finger or stylus contact hovering over the touch-sensitive surface, the nominal contact detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
The same concepts apply in a similar manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a spread gesture, and/or a long press gesture are optionally detected based on satisfying criteria that are either independent of the intensities of the contacts included in the gesture, or do not require that the contacts performing the gesture reach an intensity threshold in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts toward each other; a spread gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of a contact on the touch-sensitive surface with less than a threshold amount of movement. Thus, the statement that particular gesture recognition criteria do not require that the intensity of the contacts meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria can be met when a contact in the gesture does not meet the respective intensity threshold, and can also be met if one or more contacts in the gesture do meet or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that a finger press event and a finger lift event are detected within a predefined time period, regardless of whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where the detection of a gesture is affected by the intensity of the contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or delays detection of a tap input when the intensity of the contact is higher), the detection of these gestures does not require the contacts to reach a particular intensity threshold so long as the criteria for recognizing the gesture can be met when the contacts do not reach that intensity threshold (e.g., even if the amount of time it takes to recognize the gesture changes).
In some cases, contact intensity thresholds, duration thresholds, and movement thresholds are combined in a variety of different combinations in order to create heuristics that distinguish between two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element can provide a richer set of user interactions and responses. The statement that a set of particular gesture recognition criteria does not require that the intensity of the contact meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other intensity-dependent gesture recognition criteria to identify other gestures that have criteria that are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some circumstances, first gesture recognition criteria for a first gesture, which do not require that the intensity of the contact meet a respective intensity threshold in order for the first gesture recognition criteria to be met, compete with second gesture recognition criteria for a second gesture, which depend on the contact reaching the respective intensity threshold. In such a competition, if the second gesture recognition criteria for the second gesture are met first, the gesture is optionally not recognized as meeting the first gesture recognition criteria for the first gesture. For example, if the contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such cases, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact meet the respective intensity threshold in order for the first gesture recognition criteria to be met, because if the contact remains below the respective intensity threshold until the gesture ends (e.g., a swipe gesture with a contact that does not increase to an intensity above the respective intensity threshold), the gesture would be recognized by the first gesture recognition criteria as a swipe gesture. Thus, particular gesture recognition criteria that do not require the intensity of the contact to meet a respective intensity threshold in order for the particular gesture recognition criteria to be met will (A) in some circumstances ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some circumstances still be dependent on the intensity of the contact with respect to the intensity threshold, in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognize the input as corresponding to an intensity-dependent gesture before the particular gesture recognition criteria recognize a gesture corresponding to the input.
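As an illustration of the competing-criteria heuristic just described, the following Swift sketch resolves between a movement-based swipe and an intensity-based deep press by checking which threshold is crossed first. The sample fields, threshold values, and names are illustrative assumptions, not the claimed heuristics.

```swift
import CoreGraphics

// One sample of an ongoing contact; the fields are illustrative assumptions.
struct ContactSample {
    let translation: CGFloat   // distance moved since touch-down, in points
    let intensity: CGFloat     // normalized contact intensity (0...1)
}

enum CompetingGesture { case deepPress, swipe, undecided }

// Whichever threshold is crossed first wins: the swipe criteria never consult
// intensity, while the deep press criteria never consult movement.
func resolve(_ samples: [ContactSample],
             movementThreshold: CGFloat = 10,
             deepPressThreshold: CGFloat = 0.8) -> CompetingGesture {
    for sample in samples {
        if sample.intensity >= deepPressThreshold { return .deepPress }
        if sample.translation >= movementThreshold { return .swipe }
    }
    return .undecided
}
```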
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attribute) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for designating graphics to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions for use by haptic output generator 167 to generate haptic output at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides such information for use in various applications (e.g., to the phone 138 for location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services such as weather desktops, page-on-the-earth desktops, and map/navigation desktops).
The application 136 optionally includes the following modules (or sets of instructions) or a subset or superset thereof:
contact module 137 (sometimes referred to as an address book or contact list);
a telephone module 138;
video conferencing module 139;
email client module 140;
an Instant Messaging (IM) module 141;
a fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
browser module 147;
calendar module 148;
a desktop applet module 149, optionally including one or more of: weather desktop applet 149-1, stock desktop applet 149-2, calculator desktop applet 149-3, alarm desktop applet 149-4, dictionary desktop applet 149-5 and other desktop applets obtained by the user, and user created desktop applet 149-6;
A desktop applet creator module 150 for forming a user-created desktop applet 149-6;
search module 151;
a video and music player module 152, optionally consisting of a video player module and a music player module;
notepad module 153;
map module 154; and/or
An online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
The respective software application typically has an application state, at least during execution, indicative of the state of the respective software application and its components (e.g., gesture recognizer); see application internal state 321 (FIG. 3B) described below.
In connection with the touch sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, and the text input module 134, the contact module 137 includes executable instructions for managing an address book or contact list (e.g., in the application internal state 192 of the contact module 137 stored in the memory 102 or the memory 370), including: adding names to address books; deleting names from the address book; associating a telephone number, email address, physical address, or other information with the name; associating the image with the name; classifying and classifying names; providing a telephone number and/or email address to initiate and/or facilitate communication via telephone 138, video conference 139, email 140, or instant message 141; etc.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, phone module 138 includes executable instructions for: inputting a character sequence corresponding to the telephone numbers, accessing one or more telephone numbers in the address book 137, modifying the inputted telephone numbers, dialing the corresponding telephone numbers, conducting a conversation, and disconnecting or hanging up when the conversation is completed. As described above, wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured by the camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant message module 141 includes executable instructions for: inputting a character sequence corresponding to an instant message, modifying previously inputted characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for phone-based instant messages, or using XMPP, SIMPLE, the Apple Push Notification service (APNs), or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some implementations, the transmitted and/or received instant message optionally includes graphics, photos, audio files, video files, and/or other attachments supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephone-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module 146, workout support module 142 includes executable instructions for creating a workout (e.g., with time, distance, and/or calorie burn targets); communication with fitness sensors (in sports equipment and smart watches); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for exercise; and displaying, storing and transmitting the fitness data.
In conjunction with touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or videos (including video streams) and storing them in the memory 102, modifying features of the still images or videos, and/or deleting the still images or videos from the memory 102.
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, the text input module 134, and the camera module 143, the image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still images and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, touch module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet (including searching, linking to, receiving, and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the text input module 134, and the browser module 147, the desktop applet module 149 is a mini-application optionally downloaded and used by a user (e.g., weather desktop applet 149-1, stock desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) or created by a user (e.g., user-created desktop applet 149-6). In some embodiments, a desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheets) file, and a JavaScript file. In some embodiments, a desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, desktop applet creator module 150 includes executable instructions for creating an applet (e.g., turning a user-specified portion of a web page into the applet).
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching for text, music, sound, images, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuit 110, the speaker 111, the RF circuit 108, and the browser module 147, the video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats (such as MP3 or AAC files), as well as executable instructions for displaying, presenting, or otherwise playing back video (e.g., on the touch-sensitive display system 112 or on an external display wirelessly connected via the external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player such as an iPod (trademark of Apple inc.).
In conjunction with touch-sensitive display system 112, display controller 156, touch module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions for creating and managing notepads, backlog, and the like in accordance with user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the text input module 134, the GPS module 135, and the browser module 147, the map module 154 includes executable instructions for receiving, displaying, modifying, and storing maps and data associated with maps (e.g., driving directions, data of stores and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with the touch sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuit 110, the speaker 111, the RF circuit 108, the text input module 134, the email client module 140, and the browser module 147, the online video module 155 includes executable instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen 112 or on an external display connected wirelessly or via the external port 124), send emails with links to particular online videos, and otherwise manage online videos in one or more file formats such as H.264. In some embodiments, the instant messaging module 141 is used to send links to particular online videos instead of the email client module 140.
Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in this application (e.g., computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented in separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device on which the operation of a predefined set of functions is performed exclusively by a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
A predefined set of functions performed solely by the touch screen and/or the touch pad optionally includes navigation between user interfaces. In some embodiments, the touch pad, when touched by a user, navigates the device 100 from any user interface displayed on the device 100 to a main menu, home menu, or root menu. In such implementations, a "menu button" is implemented using a touch pad. In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touch pad.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen (e.g., touch-sensitive display system 112 of Fig. 1) in accordance with some embodiments. The touch screen optionally displays one or more graphics within a User Interface (UI) 200. In this embodiment and in other embodiments described below, a user can select one or more of these graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figures) or one or more styluses 203 (not drawn to scale in the figures). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with the device 100. In some implementations or in some circumstances, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home" button, or a menu button 204. As previously described, menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on a touch screen display.
In some embodiments, the device 100 includes a touch screen display, a menu button 204, a press button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. The press button 206 is optionally used to turn the device on/off by pressing the button and holding it in the pressed state for a predefined time interval; to lock the device by pressing the button and releasing it before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In some implementations, the device 100 also accepts voice input through the microphone 113 for activating or deactivating certain functions. The device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch-sensitive display system 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of the device 100.
Fig. 3A is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). The device 300 generally includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication bus 320 optionally includes circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 with a display 340, typically a touch screen display. The I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touch sensitive touch pad 355, a tactile output generator 357 (e.g., similar to the tactile output generator 167 described above with reference to fig. 1) for generating tactile output on the device 300, a sensor 359 (e.g., the optical sensor 164, accelerometer 168, proximity sensor 166, and/or contact intensity sensor 165 described above with reference to fig. 1). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 (fig. 1) of portable multifunction device 100 optionally does not store these modules.
Each of the above identified elements of fig. 3A are optionally stored in one or more of the previously mentioned memory devices. Each of the identified modules corresponds to a set of instructions for performing the functions described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Further, memory 370 optionally stores additional modules and data structures not described above.
Fig. 3B is a block diagram illustrating exemplary components for event processing (e.g., event processing component 391), according to some embodiments. In some implementations, the memory 102 (FIG. 1) includes an event recognizer global method 312 and one or more application programs (e.g., 133-1 through 133-3).
In some implementations, the event recognizer global method 312 includes an event monitor 311, a hit view determination module 314, an active event recognizer determination module 316, and an event dispatcher module 315. In some embodiments, the event recognizer global method 312 resides within an event delivery system in the operating system 126 (FIG. 1). Alternatively, the event recognizer global method 312 is implemented in the corresponding application 133-1. In other embodiments, the event recognizer global method 312 is implemented as a stand-alone module, or as part of another module stored in the memory 102 (e.g., a contact/motion module (not shown)).
The event monitor 311 receives event information from one or more sensors 359, the touch-sensitive display 340, and/or one or more input devices 350, 355. The event information includes information about an event (e.g., a user touch on the touch-sensitive display 156 as part of a multi-touch gesture, or movement of the device 102) and/or a sub-event (e.g., movement of a touch across the touch-sensitive display 156). For example, the event information for a touch event includes one or more of the following: the location of the touch and a timestamp. Similarly, the event information for a swipe event includes two or more of the following: the location, timestamp, direction, and speed of the swipe. The sensors 359, the touch-sensitive display 156, and the input devices 128 send event information and sub-event information to the event monitor 311, either directly or through a peripheral interface that retrieves and stores event information. In some implementations, the sensors 359 include one or more of the following: a proximity sensor, an accelerometer, a gyroscope, a microphone, and a camera. In some embodiments, the sensors 359 further include an input device 128 and/or a touch-sensitive display 156.
In some embodiments, event monitor 311 sends requests to sensor 116 and/or the peripheral interface at predetermined intervals. In response, the sensor 116 and/or the peripheral interface transmits event information. In other embodiments, the sensor 116 and peripheral interface only transmit event information when there is a significant event (e.g., receiving input exceeding a predetermined noise threshold and/or exceeding a predetermined duration).
The event monitor 311 receives the event information and relays the event information to the event dispatcher module 315. In some implementations, the event monitor 311 determines one or more respective applications (e.g., 133-1) to which to deliver event information. In some implementations, the event monitor 311 also determines one or more respective views 317 of one or more respective applications to which to deliver event information.
While a view is generally thought of as a window or other portion of a user interface, technically a view is the portion of an application that manages a particular area or region of the application's user interface. Views are typically implemented as instances of a particular class, or of one of its subclasses, and manage a rectangular area in an application window. Thus, a view is an object that has an associated display area or user interface portion, and also has one or more computer programs (sometimes referred to as "methods") associated with the class of which the view is an instance. Views are responsible for drawing content, handling multi-touch events, and managing the layout of any subviews. Drawing content involves using various graphics technologies to draw shapes, images, and text within a view's rectangular area. A view responds to touch events in its rectangular area either by using gesture recognizers or by handling touch events directly. In the view hierarchy, parent views are responsible for positioning and sizing their child views, and this can be performed dynamically. This ability to modify subviews dynamically lets views adapt to changing conditions, such as interface rotations and animations.
The view may be considered a building block for a programmer or application developer to construct a user interface for an application. Rather than using one view to present all of the content of an application, several views are typically used to build a view hierarchy. Each view in the hierarchy presents a particular portion of the user interface of the application and is typically optimized for a particular type of content. For example, an application may have different views dedicated to rendering images, text, and other types of content.
In some implementations, the event recognizer global method 312 also includes a hit view determination module 314 and/or an active event recognizer determination module 316.
Hit view determination module 314, if present, provides a software program for determining the location of events or sub-events occurring within one or more views when touch sensitive display 156 displays more than one view. The view is made up of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with a respective application (e.g., 133-1) is a set of views 317, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which a touch is detected may correspond to a particular view within the view hierarchy of the application. For example, the lowest level view in which a touch is detected may be called the hit view, and the set of events that are recognized as proper inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. Further discussion of the view hierarchy is provided below with reference to FIG. 3D.
Hit view determination module 314 receives information related to events and/or sub-events. When an application has multiple views organized in a hierarchy, hit view determination module 314 identifies the hit view as the lowest view in the hierarchy that should handle the event or sub-event. In most circumstances, the hit view is the lowest level view in which the initiating event or sub-event occurs (i.e., the first event or sub-event in the sequence of events and/or sub-events that form the gesture). Once the hit view is identified by the hit view determination module, the hit view typically receives all events and/or sub-events related to the same touch or input source for which it was identified as the hit view. However, the hit view is not always the only view that receives all events and/or sub-events related to the same touch or input source for which it was identified as the hit view. In other words, in some implementations, another application (e.g., 133-2), or another view of the same application, also receives at least a subset of the events and/or sub-events related to the same touch or input source, even though (or regardless of whether) a hit view has been determined for that touch or input source.
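As a minimal sketch of hit view determination, and of the ancestor chain that becomes relevant for actively engaged views below, the following Swift code finds the deepest view whose area contains a touch point. The class and property names are illustrative, and all frames are assumed, for brevity, to be expressed in a single coordinate space; this is not the disclosed hit view determination module.

```swift
import CoreGraphics

// A minimal stand-in for a view in a hierarchy; not the patent's classes.
final class SimpleView {
    let name: String
    let frame: CGRect            // in a single shared coordinate space, for brevity
    private(set) var subviews: [SimpleView] = []
    private(set) weak var superview: SimpleView?

    init(name: String, frame: CGRect) { self.name = name; self.frame = frame }

    func addSubview(_ view: SimpleView) {
        view.superview = self
        subviews.append(view)
    }

    // Hit view: the deepest view whose frame contains the point.
    func hitTest(_ point: CGPoint) -> SimpleView? {
        guard frame.contains(point) else { return nil }
        // Search subviews front-to-back; the first (deepest) hit wins.
        for subview in subviews.reversed() {
            if let hit = subview.hitTest(point) { return hit }
        }
        return self
    }

    // The hit view together with all of its ancestors.
    var ancestorChain: [SimpleView] {
        var chain: [SimpleView] = [self]
        var current = superview
        while let view = current { chain.append(view); current = view.superview }
        return chain
    }
}
```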
The active event recognizer determination module 316 determines which view or views within the view hierarchy should receive a particular sequence of events and/or sub-events. In some application contexts, the active event recognizer determination module 316 determines that only the hit view should receive a particular sequence of events and/or sub-events. In other application contexts, the active event recognizer determination module 316 determines that all views that include the physical location of an event or sub-event are actively engaged views, and therefore that all actively engaged views should receive a particular sequence of events and/or sub-events. In other application contexts, even if touch events and/or sub-events are entirely confined to the area associated with one particular view, views higher in the hierarchy still remain actively engaged views and thus should receive the particular sequence of events and/or sub-events. Additionally or alternatively, the active event recognizer determination module 316 determines which application(s) in the programmatic hierarchy should receive a particular sequence of events and/or sub-events. Thus, in some embodiments, the active event recognizer determination module 316 determines that only a respective application in the programmatic hierarchy should receive a particular sequence of events and/or sub-events. In some embodiments, the active event recognizer determination module 316 determines that multiple applications in the programmatic hierarchy should receive a particular sequence of events and/or sub-events.
As discussed further below, in some embodiments, a touch is initially considered to be attached to a particular view or set of views, but when a drag gesture is detected, the touch or touches "separate" and as the touch or touches traverse the user interface region associated with those other views, events related to the touch or touches may be delivered to the other views.
The event dispatcher module 315 dispatches the event information to an event recognizer (also referred to herein as a "gesture recognizer") (e.g., event recognizer 325-1). In embodiments that include an active event recognizer determination module 316, the event dispatcher module 315 delivers event information to the event recognizer determined by the active event recognizer determination module 316. In some embodiments, the event dispatcher module 315 stores event information in an event queue that is retrieved by a respective event recognizer 325 (or event receiver 331) of the respective event recognizers 325.
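As an illustration of the queue-based delivery described above, the following Swift sketch keeps a per-recognizer queue that a dispatcher fills and that each recognizer later drains. All names are illustrative assumptions; no real event recognizer behavior is modeled.

```swift
import Foundation

// Minimal sketch (illustrative names): the dispatcher appends event records to a
// per-recognizer queue, and each recognizer later retrieves its queued records.
struct EventRecord {
    let timestamp: TimeInterval
    let summary: String          // stand-in for the real event information
}

final class EventQueue {
    private var records: [EventRecord] = []
    func enqueue(_ record: EventRecord) { records.append(record) }
    func dequeue() -> EventRecord? { records.isEmpty ? nil : records.removeFirst() }
}

final class EventDispatcherSketch {
    private var queues: [ObjectIdentifier: EventQueue] = [:]

    // Deliver event information by appending it to each recognizer's queue.
    func dispatch(_ record: EventRecord, to recognizers: [AnyObject]) {
        for recognizer in recognizers {
            let key = ObjectIdentifier(recognizer)
            if queues[key] == nil { queues[key] = EventQueue() }
            queues[key]?.enqueue(record)
        }
    }

    // A recognizer pulls its next queued record when it is ready to process it.
    func nextRecord(for recognizer: AnyObject) -> EventRecord? {
        queues[ObjectIdentifier(recognizer)]?.dequeue()
    }
}
```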
In some embodiments, the respective application (e.g., 133-1) includes an application internal state 321 that indicates one or more current application views that are displayed on the touch-sensitive display 156 when the application is active or executing. In some embodiments, the event recognizer global method 312 uses the device/global internal state 134 (FIG. 1C) to determine which application(s) are currently active, and the event recognizer global method 312 uses the application internal state 321 to determine the application view 317 to which to deliver event information.
In some implementations, the application internal state 321 includes additional information, such as one or more of the following: the information to be used is restored when the application 133-1 resumes execution, user interface state information indicating that the information is being displayed or ready for display by the application 133-1, a state queue for enabling the user to return to a previous state or view of the application 133-1, and a repeat/undo queue of previous actions taken by the user. In some embodiments, the application internal state 321 also includes contextual information/text and metadata 323.
In some implementations, the application 133-1 includes one or more application views 317, each having corresponding instructions (e.g., a corresponding event handler 319, sometimes referred to as a gesture handler) for handling touch events that occur within a respective view of the application's user interface. At least one application view 317 of the application 133-1 includes one or more event recognizers 325. Typically, a respective application view 317 includes a plurality of event recognizers 325. In other embodiments, one or more of the event recognizers 325 are part of a separate module, such as a user interface toolkit (not shown) or a higher-level object from which the application 133-1 inherits methods and other properties. In some implementations, the respective application view 317 also includes one or more of the following: a data updater, an object updater, a GUI updater, and/or received event data.
The corresponding application (e.g., 133-1) also includes one or more event handlers 319. Typically, the corresponding application (e.g., 133-1) includes a plurality of event handlers 319.
The corresponding event identifier 325-1 receives event information from the event dispatcher module 315 (directly or indirectly through the application 133-1) and identifies events from the event information. Event recognizer 325-1 includes event receiver 331 and event comparator 333.
The event information includes information about an event (e.g., a touch) or sub-event (e.g., a touch movement). Depending on the event or sub-event, the event information also includes additional information, such as the location of the event or sub-event. When an event or sub-event relates to the movement of a touch, the event information may also include the speed and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation of the device (also referred to as the device pose).
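As a small illustration of the event information just enumerated, the following Swift sketch gathers it into a single value type; the optional fields mark information that is only present for movement or rotation events. The field names are assumptions made for illustration, not the patent's data structures.

```swift
import Foundation
import CoreGraphics

// Illustrative container for the event information described above.
enum DeviceOrientation { case portrait, landscape }

struct SubEventInfo {
    let location: CGPoint
    let timestamp: TimeInterval
    // Present only for movement sub-events.
    let speed: CGFloat?
    let direction: CGVector?
    // Present only for device rotation events.
    let orientation: DeviceOrientation?
}
```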
The event comparator 333 compares the event information to one or more predefined gesture definitions (also referred to herein as "event definitions") and, based on the comparison, determines an event or sub-event, or determines or updates the state of the event or sub-event. In some embodiments, the event comparator 333 includes one or more gesture definitions 335 (also referred to herein as "event definitions", as noted above). The gesture definitions 335 contain definitions of gestures (e.g., predefined sequences of events and/or sub-events), such as gesture 1 (337-1), gesture 2 (337-2), and so on. In some implementations, sub-events in the gesture definitions 335 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touches. In one example, the definition of gesture 1 (337-1) is a double tap on a displayed object. For example, the double tap comprises a first touch on the displayed object for a predetermined phase of the gesture (touch begin), a first lift-off for a next predetermined phase of the gesture (touch end), a second touch on the displayed object for a subsequent predetermined phase of the gesture (touch begin), and a second lift-off for a final predetermined phase of the gesture (touch end). In another example, the definition of gesture 2 (337-2) includes a drag on a displayed object. For example, the drag comprises a touch (or contact) on the displayed object, movement of the touch across the touch-sensitive display 156, and lift-off of the touch (touch end).
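To make the gesture-definition examples concrete, the following Swift sketch expresses a double tap and a drag as predicates over a sequence of sub-event kinds; location and timing checks are omitted, and all names are illustrative assumptions rather than the disclosed gesture definitions 335.

```swift
// A gesture definition as a predefined sequence of sub-event kinds, matched
// against the sub-events actually delivered for a contact. Illustrative only.
enum SubEventKind { case touchBegin, touchMove, touchEnd, touchCancel }

struct GestureDefinition {
    let name: String
    // Returns true when the observed sequence satisfies the definition.
    let matches: ([SubEventKind]) -> Bool
}

// Double tap: begin, end, begin, end (location and timing checks omitted).
let doubleTap = GestureDefinition(name: "double tap") { events in
    events == [.touchBegin, .touchEnd, .touchBegin, .touchEnd]
}

// Drag: begin, one or more moves, then end.
let drag = GestureDefinition(name: "drag") { events in
    guard events.first == .touchBegin, events.last == .touchEnd else { return false }
    let middle = events.dropFirst().dropLast()
    return !middle.isEmpty && middle.allSatisfy { $0 == .touchMove }
}
```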
In some embodiments, event recognizer 325-1 also includes information for event delivery 339. The information for event delivery 339 includes a reference to the corresponding event handler 319. Optionally, the information for event delivery 339 includes action target pairs 354. In some implementations, in response to identifying the gesture (or a portion of the gesture), event information (e.g., an action message) is sent to one or more targets 383 identified by action target pair 354 (see fig. 3D). In other embodiments, the action target pair is activated in response to recognizing a gesture (or a portion of a gesture).
In some implementations, gesture definitions 335 include definitions of gestures for respective user interface objects. In some implementations, the event comparator 333 performs hit testing to determine which user interface object is associated with the sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 156, when a touch is detected on touch-sensitive display 156, event comparator 333 performs a hit test to determine which of the three user interface objects (if any) is associated with the touch (event). If each displayed object is associated with a corresponding event handler 319, then event comparator 333 uses the results of the hit test to determine which event handler 319 should be activated. For example, event comparator 333 selects event handler 319 associated with the event and object that triggered the hit test.
In some implementations, the respective gesture definitions 337 for the respective gestures also include a delay action that delays delivery of the event information until it has been determined whether the sequence of events and/or sub-events corresponds to an event type of the event recognizer.
When the respective event recognizer 325-1 determines that a series of events and/or sub-events do not match any of the events in the gesture definition 335, the respective event recognizer 325-1 enters an event failure state, after which the respective event recognizer 325-1 ignores subsequent events and/or sub-events of the touch-based gesture. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process events and/or sub-events of the ongoing touch-based gesture.
In some embodiments, when no event recognizer for the hit view remains, the event information is sent to one or more event recognizers in a higher view in the view hierarchy. Alternatively, when no event recognizer for the hit view remains, the event information is ignored. In some embodiments, when no event recognizer for any view in the view hierarchy remains, the event information is sent to one or more event recognizers at a higher programmatic level in the programmatic hierarchy. Alternatively, when no event recognizer for any view in the view hierarchy remains, the event information is ignored.
In some embodiments, the corresponding event recognizer 325-1 includes an event recognizer state 334. Event recognizer state 334 includes the state of the corresponding event recognizer 325-1.
In some embodiments, event recognizer state 334 includes recognizer metadata and attributes 349. In some embodiments, the recognizer metadata and attributes 349 include one or more of the following: A) configurable properties, flags, and/or lists that indicate how the event delivery system should perform event and/or sub-event delivery to actively engaged event recognizers; B) configurable properties, flags, and/or lists that indicate how event recognizers interact with one another; C) configurable properties, flags, and/or lists that indicate how event recognizers receive event information; D) configurable properties, flags, and/or lists that indicate how event recognizers recognize gestures; E) configurable properties, flags, and/or lists that indicate whether events and/or sub-events are delivered to varying levels in the view hierarchy; and F) references to corresponding event handlers 319.
In some embodiments, event recognizer state 334 includes event/touch metadata 351. Event/touch metadata 351 includes event/touch information regarding the respective event/touch that has been detected and corresponds to respective gesture definitions 337 of gesture definition 335. The event/touch information includes one or more of the following: the location, timestamp, speed, direction, distance, scale (or scale change) and angle (or angle change) of the corresponding event/touch.
In some implementations, when one or more particular events and/or sub-events of a gesture are identified, the respective event recognizer 325 activates an event handler 319 associated with the respective event recognizer 325. In some implementations, the respective event identifier 325 delivers event information associated with the event to the event handler 319.
Event handlers 319, when activated, perform one or more of the following: create and/or update data, create and update objects, and prepare and send display information for display on display 126 or touch-sensitive display 156.
In some embodiments, the corresponding application view 317-2 includes view metadata 341. As described above with reference to fig. 3B, view metadata 341 includes data about the view. Optionally, view metadata 341 includes one or more of the following: a stop attribute 342, a skip attribute 343, a miss skip attribute 344, an interaction identifier 347 (discussed below with respect to a dynamically added drag, drop, and spring-loaded gesture recognizer), and other view metadata 329.
In some implementations, a first actively engaged view within the view hierarchy can be configured to prevent delivery of a respective sub-event to an event identifier associated with the first actively engaged view. This behavior may implement skip attribute 343. When the skip attribute is set for the application view, delivery of the corresponding sub-event is still performed for event identifiers associated with other actively engaged views in the view hierarchy.
Alternatively, a first active participation view within the view hierarchy may be configured to prevent delivery of a corresponding sub-event to an event identifier associated with the first active participation view unless the first active participation view is a hit view. This behavior may implement conditional miss skip attribute 344.
In some implementations, a second actively engaged view within the view hierarchy is configured to prevent delivery of the respective sub-event both to event recognizers associated with the second actively engaged view and to event recognizers associated with the ancestors of the second actively engaged view. This behavior may implement the stop attribute 342.
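The following Swift sketch shows one possible reading of how the skip, conditional (no-hit) skip, and stop properties would filter delivery along the chain of actively engaged views (hit view first, then its ancestors). It is an interpretation of the description above, with illustrative names, not the disclosed implementation.

```swift
// Illustrative view-metadata flags and the delivery rule they imply.
struct ViewFlags {
    var stop = false        // blocks delivery to this view and to its ancestors
    var skip = false        // blocks delivery to this view unconditionally
    var noHitSkip = false   // blocks delivery to this view unless it is the hit view
}

struct InvolvedView {
    let name: String
    let flags: ViewFlags
    let isHitView: Bool
}

// Walk from the hit view up through its ancestors and keep the views whose
// event recognizers should still receive the sub-event.
func deliveryTargets(for chain: [InvolvedView]) -> [String] {
    var targets: [String] = []
    for view in chain {                       // chain[0] is the hit view, then ancestors
        if view.flags.stop { break }          // stop also blocks the view's ancestors
        if view.flags.skip { continue }
        if view.flags.noHitSkip && !view.isHitView { continue }
        targets.append(view.name)
    }
    return targets
}
```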
FIG. 3C is a block diagram illustrating exemplary classes and examples of gesture recognizers (e.g., event processing component 390) according to some embodiments.
A software application (e.g., application 133-1) has one or more event recognizers 345. In some embodiments, the corresponding event recognizer (e.g., 345-2) is an event recognizer class. The corresponding event recognizer (e.g., 345-2) includes event recognizer specific code 338 (e.g., a set of instructions defining the operation of the event recognizer) and state machine 340.
In some implementations, the application state 321 of the software application (e.g., application 133-1) includes an instance of an event identifier. Each instance of an event recognizer is an object having a state (e.g., event recognizer state 334). The "execution" of the respective event recognizer instance is accomplished by executing the corresponding event recognizer specific code (e.g., 338) and updating or maintaining the state 334 of the event recognizer instance 365. The state 334 of the event recognizer instance 365 includes the state 351 of the state machine 340 of the event recognizer instance.
In some implementations, the application state 321 includes a plurality of event recognizer instances 365. The corresponding event recognizer instances 365 generally correspond to event recognizers that have been bound (also referred to as "attached") to views of an application. In some embodiments, one or more event recognizer instances 365 are bound to respective applications in the program hierarchy without reference to any particular view of the respective applications. In some implementations, the application state 321 includes multiple instances (e.g., 365-1 through 365-L) of the respective event identifier (e.g., 345-2). In some embodiments, the application state 321 includes instances 365 of multiple event identifiers (e.g., 345-1 through 345-R).
In some implementations, the respective instance of the gesture recognizer 345 (e.g., 365-2) includes the event recognizer state 334. As described above, in some embodiments, event recognizer state 334 includes recognizer metadata and attributes 349 and event/touch metadata 351. In some implementations, the event recognizer state 334 also includes a view hierarchy reference 336 that indicates to which view the corresponding instance 365-2 of the gesture recognizer 345-2 is attached.
In some embodiments, the identifier metadata and attributes 349 include the following or a subset or superset thereof:
exclusive tag 324;
an exclusive exception list 326;
wait list 327; when included in the event recognizer state 334 for a respective event recognizer (or gesture recognizer), this list 327 indicates the set of event recognizers (or gesture recognizers), if any, that must enter an event impossible or event cancelled state before the respective event recognizer can recognize its event. In other words, the listed event recognizers (or gesture recognizers) must fail to recognize the input or event before the event recognizer with the wait list 327 is allowed to recognize the input or event. In effect, the listed event recognizers have a higher priority for recognizing an event than the event recognizer with the wait list 327 (a sketch of an analogous relationship, expressed with public UIKit API, follows this list);
a delay touch began flag 328;
a delay touch end flag 330; and
a touch cancellation flag 332.
In some embodiments, one or more event recognizers may be adapted to delay delivering one or more sub-events of the sequence of sub-events until after the event recognizer recognizes the event. This behavior reflects a delayed event. For example, consider a single tap gesture in a view for which multiple tap gestures are possible. In that case, the tap event becomes a "tap + delay" recognizer. In essence, when an event recognizer implements this behavior, the event recognizer delays event recognition until it has determined that the sequence of sub-events does in fact correspond to its event definition. This behavior may be appropriate when the recipient view is unable to respond properly to cancelled events. In some embodiments, an event recognizer delays updating its event recognition status to its respective active participation view until the event recognizer determines that the sequence of sub-events does not correspond to its event definition. The delay touch began flag 328, delay touch end flag 330, and touch cancellation flag 332 are provided to tailor sub-event delivery techniques, as well as the updating of event recognizer and view status information, to specific needs.
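By way of illustration only, and not as a description of the embodiments above, the public UIKit gesture recognizer API exposes controls that are loosely analogous to the wait-for list 327 and to the delay and cancellation flags 328, 330, and 332; the view controller and handler names in the following sketch are hypothetical.

```swift
import UIKit

// Hypothetical view controller illustrating controls loosely analogous to the
// wait-for list 327 and the delay/cancellation flags 328, 330, and 332 described above.
class ExampleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleSingleTap))
        let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handleDoubleTap))
        doubleTap.numberOfTapsRequired = 2

        // Analogue of the wait-for list: the single-tap recognizer may not recognize its
        // gesture until the double-tap recognizer has failed.
        singleTap.require(toFail: doubleTap)

        // Analogues of the delay-touch-began / delay-touch-end / touch-cancellation flags:
        singleTap.delaysTouchesBegan = true   // hold touch-began delivery until recognition is resolved
        singleTap.delaysTouchesEnded = true   // hold touch-ended delivery until recognition is resolved
        singleTap.cancelsTouchesInView = true // send touch-cancelled to the view once the gesture is recognized

        view.addGestureRecognizer(doubleTap)
        view.addGestureRecognizer(singleTap)
    }

    @objc private func handleSingleTap(_ recognizer: UITapGestureRecognizer) { /* ... */ }
    @objc private func handleDoubleTap(_ recognizer: UITapGestureRecognizer) { /* ... */ }
}
```

In this sketch, require(toFail:) plays the role of a one-entry wait-for list: the single tap recognizer is not permitted to recognize its gesture until the double tap recognizer has failed.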
In some embodiments, the event recognizer state 334 includes the following or a subset or superset thereof:
state machine state/stage 351, which indicates the state of the state machine (e.g., 340) of the respective event recognizer instance (e.g., 365-2); state machine state/stage 351 may have various state values, such as "event possible", "event recognized", "event failed", etc., as described below; alternatively or additionally, state machine state/stage 351 may have various phase values, such as "touch phase began", which may indicate that the touch data structure defines a new touch that has not been referenced by a previous touch data structure; a "touch phase moved" value may indicate that the defined touch has moved from a previous location; a "touch phase stationary" value may indicate that the touch has remained in the same location; a "touch phase ended" value may indicate that the touch has ended (e.g., the user has lifted his/her finger from the surface of the multi-touch display); a "touch phase cancelled" value may indicate that the touch has been cancelled by the device; a cancelled touch may be a touch that was not necessarily ended by the user but that the device has determined should be ignored; for example, the device may determine that a touch was generated inadvertently (e.g., because the portable multi-touch enabled device was placed in a pocket) and ignore the touch for that reason; each value of state machine state/stage 351 may be an integer (referred to herein as a "gesture recognizer state value");
action-target pairs 354, where each pair identifies a target to which the respective event recognizer instance sends the recognized action message in response to recognizing an event or touch as a gesture or as part of a gesture;
a delegate 353; when a delegate is assigned to a respective event recognizer instance, the delegate 353 is a reference to the corresponding delegate; when a delegate is not assigned to a respective event recognizer instance, the delegate 353 contains a null value; and
an enabled attribute 356, which indicates whether the respective event recognizer instance is enabled; in some implementations, when the respective event recognizer instance is not enabled (e.g., disabled), the respective event recognizer instance does not process events or touches.
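The following minimal sketch, offered only as an illustration and using hypothetical names, models the per-instance state just listed (state machine state, action-target pairs, delegate, and enabled attribute).

```swift
import Foundation

// Illustrative model of the per-instance event recognizer state described above.
// All type and property names are hypothetical.
enum RecognizerState {
    case possible        // "event possible"
    case recognized      // "event recognized"
    case failed          // "event failed"
    case cancelled       // "event cancelled"
}

struct ActionTargetPair {
    let target: AnyObject
    let action: (AnyObject) -> Void   // action message sent to the target on recognition
}

final class EventRecognizerInstance {
    var state: RecognizerState = .possible          // state machine state/stage 351
    var actionTargetPairs: [ActionTargetPair] = []  // action-target pairs 354
    weak var delegate: AnyObject?                   // delegate 353 (nil when unassigned)
    var isEnabled: Bool = true                      // enabled attribute 356

    // When the recognizer instance recognizes its event, it sends the action
    // message to each registered target.
    func recognize() {
        guard isEnabled else { return }             // disabled instances do not process events
        state = .recognized
        for pair in actionTargetPairs { pair.action(pair.target) }
    }
}
```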
Additional information regarding gesture recognizers, including their states and attributes, can be found in U.S. patent application Ser. No. 14/290,931, filed May 29, 2014, which is incorporated herein by reference in its entirety.
Each touch data structure 361 may include various entries. In some implementations, the touch data structure can include data corresponding to at least touch-specific entries in event/touch metadata 351, such as the following or a subset or superset thereof:
a "first touch for view" entry 362;
a "per touch information" entry 363, including "timestamp" information indicating the particular time to which the touch data structure relates (e.g., the time of the touch); optionally, the "per touch information" entry 363 includes other information, such as the location of the corresponding touch; and
an optional "tap count" entry 364.
Thus, each touch data structure may define what happens to a corresponding touch (or other input source) at a particular time (e.g., whether the touch is stationary, moving, etc.), as well as other information associated with the touch (such as location). Thus, each touch data structure may define the state of a particular touch at a particular time. One or more touch data structures referencing the same time may be added to the touch event data structure, which may define the state of all touches that a particular view is receiving at a certain time (some touch data structures may also reference touches that have ended and are no longer being received, as described above). Over time, multiple touch event data structures may be sent to the software implementing the view to provide the software with continuous information describing touches occurring in the view.
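As a minimal sketch only, the touch data structures and touch event data structures described above might be modeled as follows; all type and field names are hypothetical.

```swift
import Foundation
import CoreGraphics

// Illustrative sketch of the touch data structures described above; names are hypothetical.
enum TouchPhase { case began, moved, stationary, ended, cancelled }

struct TouchRecord {                 // corresponds loosely to a touch data structure 361
    var isFirstTouchForView: Bool    // "first touch for view" entry 362
    var timestamp: TimeInterval      // per-touch information entry 363: time of the touch
    var location: CGPoint            // per-touch information entry 363: location of the touch
    var phase: TouchPhase            // state of this touch at `timestamp`
    var tapCount: Int                // optional "tap count" entry 364
}

// A touch event groups the touch records that reference the same time, describing the
// state of all touches that a particular view is receiving at that moment.
struct TouchEventRecord {
    var timestamp: TimeInterval
    var touches: [TouchRecord]
}
```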
The ability to handle complex touch-based gestures, optionally including multi-touch gestures, can increase the complexity of various software applications. In some cases, this additional complexity may be necessary to implement advanced and desirable interface features. For example, a game may need to be able to handle multiple simultaneous touches occurring in different views, because games often require multiple buttons to be pressed at the same time, or require accelerometer data to be combined with touches on a touch-sensitive surface. However, some simpler applications and/or views do not require advanced interface features. For example, a simple soft button (i.e., a button displayed on a touch-sensitive display) may operate satisfactorily with single touches rather than multi-touch functionality. In these cases, the underlying OS may send unnecessary or excessive touch data (e.g., multi-touch data) to a software component associated with a view that is intended to be operable with only single touches (e.g., a single touch or tap on a soft button). Because the software component may need to process this data, it may need to have all the complexity of a software application that handles multiple touches, even though it is associated with a view for which only single touches are relevant. This can increase the cost of software development for the device, because software components that were traditionally easy to program in a mouse interface environment (i.e., various buttons, etc.) can be much more complex in a multi-touch environment.
To reduce the complexity of recognizing complex touch-based gestures, delegates may be used to control the behavior of event recognizers, according to some implementations. As described below, a delegate may determine, for example, whether a corresponding event recognizer (or gesture recognizer) can receive event (e.g., touch) information; whether the corresponding event recognizer (or gesture recognizer) can transition from an initial state (e.g., an event possible state) of its state machine to another state; and/or whether the corresponding event recognizer (or gesture recognizer) can simultaneously recognize an event (e.g., a touch) as a corresponding gesture without blocking other event recognizers (or gesture recognizers) from recognizing the event, and without being blocked by other event recognizers (or gesture recognizers) that recognize the event.
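For orientation only, the public UIKit delegate protocol provides hooks that roughly parallel the three determinations listed above; the following sketch is illustrative and does not describe the delegates of the embodiments themselves.

```swift
import UIKit

// Illustrative delegate that parallels the three determinations described above.
class SingleTouchOnlyDelegate: NSObject, UIGestureRecognizerDelegate {

    // Whether the recognizer may receive the touch at all.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldReceive touch: UITouch) -> Bool {
        // e.g., deliver only touches that land inside the recognizer's own view
        guard let view = gestureRecognizer.view else { return false }
        return view.bounds.contains(touch.location(in: view))
    }

    // Whether the recognizer may leave its initial ("possible") state.
    func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }

    // Whether the recognizer may recognize simultaneously with another recognizer,
    // rather than blocking it or being blocked by it.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith other: UIGestureRecognizer) -> Bool {
        return false
    }
}
```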
However, it should be appreciated that the foregoing discussion regarding evaluating and processing the complexity of user touches on a touch-sensitive surface also applies to other forms of user input used to operate the electronic device 102 with the input device 128, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, with or without single or multiple keyboard presses or holds; device rotations or other movements; user movements on a touch pad such as taps, drags, scrolls, and the like; stylus inputs; verbal instructions; detected eye movements; biometric inputs; detected physiological changes in a user; and/or any combination thereof are optionally used as inputs corresponding to the sub-events that define an event to be recognized.
Turning to event information flows, fig. 3D is a block diagram illustrating event information flows according to some embodiments.
In FIG. 3D, the event dispatcher module 315 (e.g., in operating system 118 or application software 124) receives event information (e.g., for a touch event in view 378) and sends the event information to one or more applications (e.g., application 133-1, but not application 133-2, in this example). In some implementations, application 133-1 includes multiple views (e.g., 374, 376, and 378, corresponding to views 317, FIG. 3B) in a view hierarchy 372 and multiple gesture recognizers (379-1 through 379-3) in the multiple views. Application 133-1 also includes one or more gesture handlers 319-i, which correspond to targets 383 (e.g., targets 383-1 and 383-2) in target-action pairs 354. In some implementations, the event dispatcher module 315 receives hit view information from the hit view determination module 314 and sends the event information to the hit view (e.g., 378) or to the event recognizers attached to the hit view (e.g., 379-1 and 379-2). Additionally or alternatively, the event dispatcher module 315 receives hit level information from the hit level determination module 352 and sends the event information to applications in the hit level (e.g., 133-1 and 133-2), or to one or more event recognizers (e.g., 379-4) in the hit level applications. Note that in the example shown in FIG. 3D, application 133-2 does not have a view that is a hit view for the received event information, and the event information is not sent to any of the gesture recognizers of application 133-2 (e.g., gesture recognizer 379-4).
In some embodiments, one of the applications receiving the event information is a default application (e.g., 133-2 may be the default application). In some implementations, only a subset of the gesture recognizers in each receiving application is allowed (or configured) to receive the event information. For example, gesture recognizer 379-3 in application 133-1 does not receive the event information in this example. Gesture recognizers that receive event information are referred to herein as receiving gesture recognizers. In FIG. 3D, the receiving gesture recognizers 379-1 and 379-2 receive the event information and compare the received event information with the corresponding gesture definitions 337 (FIG. 3B) in the receiving gesture recognizers. In FIG. 3D, gesture recognizers 379-1 and 379-4 have respective gesture definitions, and when these definitions match the received event information, the gesture recognizer with the matching gesture definition sends a respective action message (e.g., 381) to the corresponding gesture handler (e.g., 319-i). In this example, the gesture handler 319-j of application 133-2 does not receive an action message corresponding to the event information dispatched by the event dispatcher module 315, because the event information was sent to gesture recognizers in the hit view 378, and the hit view 378 is not part of application 133-2. In some implementations, a gesture handler is part of a module that is independent of the application, such as an application development framework available to a plurality of different applications, and the gesture handler is included in the code of the application when the application is compiled using the application development framework (e.g., the gesture handler is not part of the application-specific application core, but is instead part of a module that is application-independent in the sense that it is available to a plurality of different applications). In other cases, such as for a touch input in a view of application 133-2, the event information would be delivered to one or more gesture recognizers of application 133-2.
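A highly simplified sketch of this dispatch flow is shown below; every type in it is hypothetical and merely stands in for the modules of FIG. 3D (the event dispatcher, the hit view, the recognizers attached to it, and the gesture handlers).

```swift
import Foundation
import CoreGraphics

// Hypothetical stand-ins for the modules of FIG. 3D.
struct EventInfo { var location: CGPoint; var timestamp: TimeInterval }

protocol SketchGestureRecognizer: AnyObject {
    func matches(_ event: EventInfo) -> Bool        // compare event info to the gesture definition
    var actionHandler: (EventInfo) -> Void { get }  // corresponds to a gesture handler 319
}

final class SketchView {
    var frame: CGRect = .zero
    var recognizers: [SketchGestureRecognizer] = [] // recognizers attached to this view
    func hitTest(_ point: CGPoint) -> Bool { frame.contains(point) }
}

// The dispatcher sends event information only to recognizers attached to the hit view;
// recognizers whose gesture definition matches send an action message to their handler.
func dispatch(_ event: EventInfo, views: [SketchView]) {
    guard let hitView = views.first(where: { $0.hitTest(event.location) }) else { return }
    for recognizer in hitView.recognizers where recognizer.matches(event) {
        recognizer.actionHandler(event)
    }
}
```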
Attention is now directed to embodiments of a user interface ("UI") optionally implemented on the portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
signal strength indicator 402 for wireless communications (such as cellular signals and Wi-Fi signals);
time 404;
bluetooth indicator 405;
battery status indicator 406;
tray 408 with common application icons, such as:
an icon 416 labeled "phone" of the phone module 138, the icon 416 optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 labeled "mail" of the email client module 140, the icon 418 optionally including an indicator 410 of the number of unread emails;
an icon 420 labeled "browser" for the browser module 147; and
an icon 422 labeled "iPod" of the video and music player module 152 (also referred to as iPod (trademark of Apple Inc.) module 152); and
Icons of other applications, such as:
an icon 424 labeled "message" for the IM module 141;
an icon 426 labeled "calendar" of the calendar module 148;
an icon 428 labeled "photo" of the image management module 144;
an icon 430 labeled "camera" of camera module 143;
an icon 432 labeled "online video" of the online video module 155;
an icon 434 labeled "stock market" for the stock market desktop applet 149-2;
an icon 436 labeled "map" of the map module 154;
an icon 438 labeled "weather" for weather desktop applet 149-1;
an icon 440 labeled "clock" for the alarm desktop applet 149-4;
an icon 442 labeled "fitness support" for fitness support module 142;
icon 444 labeled "notepad" for notepad module 153; and
an icon 446 for setting an application or module that provides access to settings of the device 100 and its various applications 136.
It should be noted that the icon labels shown in FIG. 4A are merely exemplary. For example, in some embodiments, the icon 422 of the video and music player module 152 is labeled "music" or "music player". Other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 in fig. 3) having a touch-sensitive surface 451 (e.g., a tablet or touchpad 355 in fig. 3A) separate from the display 450. The device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 357) for detecting the intensity of contacts on the touch-sensitive surface 451 and/or one or more tactile output generators 359 for generating tactile outputs for a user of the device 300.
While many of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in FIG. 4B) has a primary axis (e.g., 452 in FIG. 4B) that corresponds to a primary axis (e.g., 453 in FIG. 4B) on the display (e.g., 450). According to these implementations, the device detects contacts with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in FIG. 4B, 460 corresponds to 468 and 462 corresponds to 470). In this way, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) of the multifunction device when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separate from the display. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
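As an illustration of the correspondence just described, and not of any particular implementation, a contact location on the separate touch-sensitive surface can be mapped to a display location by preserving its relative position along each axis; the following function is a hypothetical sketch.

```swift
import CoreGraphics

// Illustrative mapping of a contact location on a separate touch-sensitive surface
// (e.g., 451) to the corresponding location on the display (e.g., 450), assuming the
// surface's primary axis is aligned with the display's primary axis.
func displayLocation(forTouchAt touch: CGPoint,
                     surfaceBounds: CGRect,
                     displayBounds: CGRect) -> CGPoint {
    // Normalize the contact location within the touch-sensitive surface...
    let nx = (touch.x - surfaceBounds.minX) / surfaceBounds.width
    let ny = (touch.y - surfaceBounds.minY) / surfaceBounds.height
    // ...and scale it into display coordinates, preserving relative position on each axis.
    return CGPoint(x: displayBounds.minX + nx * displayBounds.width,
                   y: displayBounds.minY + ny * displayBounds.height)
}
```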
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is optionally replaced with a mouse click while the cursor is located over the position of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or that a mouse and finger contacts are optionally used simultaneously.
As used herein, the term "focus selector" refers to an input element that indicates the current portion of a user interface with which a user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector", so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touch pad 355 in FIG. 3 or touch-sensitive surface 451 in FIG. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in FIG. 1 or the touch screen in FIG. 4A) enabling direct interaction with user interface elements on the touch screen display, a contact detected on the touch screen acts as a "focus selector", so that when an input (e.g., a press input by the contact) is detected on the touch screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another); in these implementations, the focus selector moves in accordance with the movement of focus between different areas of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user intends to interact). For example, the position of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touch pad or touch screen) will indicate that the user intends to activate the respective button (as opposed to other user interface elements shown on a display of the device).
As used in this specification and in the claims, the term "intensity" of a contact on a touch-sensitive surface is the force or pressure (force per unit area) of the contact on the touch-sensitive surface (e.g., finger contact or stylus contact), or refers to a surrogate of the force or pressure of the contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., weighted average or summation) to determine an estimated contact force. Similarly, the pressure-sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area and/or its variation detected on the touch-sensitive surface, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or its variation and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or its variation are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, surrogate measurements of contact force or pressure are directly used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to surrogate measurements). In some implementations, an alternative measurement of contact force or pressure is converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as an attribute of the user input, allowing the user to access additional device functions that would otherwise not be readily accessible on a smaller-sized device for displaying affordances and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls such as knobs or buttons).
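Purely as an illustrative sketch, in which the sensor weights, conversion factor, and threshold are assumptions rather than values from the embodiments, combining force-sensor readings into an estimated contact force and comparing a surrogate measurement against a threshold might look as follows.

```swift
import Foundation

// Illustrative combination of force-sensor readings into an estimated contact force.
struct ForceSample { var force: Double; var weight: Double }

func estimatedContactForce(from samples: [ForceSample]) -> Double {
    let totalWeight = samples.reduce(0) { $0 + $1.weight }
    guard totalWeight > 0 else { return 0 }
    // Weighted average of readings from force sensors below or adjacent to the surface.
    return samples.reduce(0) { $0 + $1.force * $1.weight } / totalWeight
}

// A surrogate measurement (e.g., contact area or capacitance) can either be compared directly
// against a threshold expressed in surrogate units, or converted to an estimated pressure first.
func exceedsIntensityThreshold(surrogateValue: Double,
                               unitsPerSurrogateUnit: Double,
                               pressureThreshold: Double) -> Bool {
    let estimatedPressure = surrogateValue * unitsPerSurrogateUnit
    return estimatedPressure > pressureThreshold
}
```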
In some implementations, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether the user has "clicked" on an icon). In some implementations, at least a subset of the intensity thresholds are determined according to software parameters (e.g., the intensity thresholds are not determined by activation thresholds of particular physical actuators, and may be adjusted without changing the physical hardware of the device 100). For example, without changing the touchpad or touch screen display hardware, the mouse "click" threshold of the touchpad or touch screen display may be set to any of a wide range of predefined thresholds. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, or 10 seconds) relative to a predefined event (e.g., after detecting the contact, before or after detecting liftoff of the contact, before or after detecting the start of movement of the contact, before or after detecting the end of the contact, and/or before or after detecting a decrease in the intensity of the contact). The characteristic intensity of a contact is optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, a top 10 percent value of the intensities of the contact, a value at half maximum of the intensities of the contact, a value at 90 percent maximum of the intensities of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared with a set of one or more intensity thresholds to determine whether an operation has been performed by the user. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some implementations, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or to forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
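The following sketch illustrates one possible reading of this logic; the choice of the mean as the aggregation and the structure of the threshold comparison are assumptions for illustration only.

```swift
import Foundation

// Sketch of the characteristic-intensity logic described above.
enum Operation { case first, second, third }

func characteristicIntensity(of samples: [Double]) -> Double {
    // One of the options listed above: the mean of the intensity samples collected
    // during the predefined period relative to the predefined event.
    guard !samples.isEmpty else { return 0 }
    return samples.reduce(0, +) / Double(samples.count)
}

func operation(forCharacteristicIntensity intensity: Double,
               firstThreshold: Double,
               secondThreshold: Double) -> Operation {
    if intensity > secondThreshold { return .third }
    if intensity > firstThreshold { return .second }
    return .first
}
```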
In some implementations, a portion of the gesture is identified for the purpose of determining the characteristic intensity. For example, the touch-sensitive surface may receive a continuous swipe contact that transitions from a start location to an end location (e.g., a drag gesture), at which end location the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, rather than on the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some implementations, a smoothing algorithm may be applied to the intensities of the swipe contact before determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some cases, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for the purpose of determining the characteristic intensity.
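As an illustration of one of the smoothing options mentioned above, an unweighted sliding-average smoother over the intensity samples of a swipe contact might be sketched as follows; the window size is an assumption.

```swift
import Foundation

// Unweighted sliding-average smoothing of swipe-contact intensity samples.
func smoothedIntensities(_ samples: [Double], window: Int = 5) -> [Double] {
    guard window > 1, samples.count > 1 else { return samples }
    return samples.indices.map { i in
        let start = max(0, i - window + 1)
        let slice = samples[start...i]               // trailing window ending at sample i
        return slice.reduce(0, +) / Double(slice.count)
    }
}
```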
The user interface figures described herein optionally include various intensity diagrams that show the current intensity of a contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., that is at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITH that is lower than ITL)). The intensity diagram is typically not part of the displayed user interface, but is provided to aid in interpreting the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a touch pad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from the operations typically associated with clicking a button of a physical mouse or a touch pad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold IT0, below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
In some embodiments, the response of the device to inputs detected by the device depends on criteria based on the contact intensity during the input. For example, for some "light press" inputs, the intensity of a contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to inputs detected by the device depends on criteria that include both the contact intensity during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of a contact exceeding a second intensity threshold, greater than the first intensity threshold for a light press, triggers a second response only if a delay time has elapsed between meeting the first intensity threshold and meeting the second intensity threshold during the input. The duration of this delay time is typically less than 200 ms (e.g., 40, 100, or 120 ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps to avoid accidental deep press inputs. As another example, for some "deep press" inputs, a period of reduced sensitivity occurs after the time at which the first intensity threshold is met. During the period of reduced sensitivity, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detecting a deep press input does not depend on time-based criteria.
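A sketch of the time-based "deep press" criterion described above follows; the delay value used is an illustrative assumption within the range mentioned (40 ms to 120 ms).

```swift
import Foundation

// Time-based "deep press" criterion: the second response is triggered only if a delay time
// elapses between meeting the first and second intensity thresholds.
struct DeepPressCriteria {
    var firstThreshold: Double
    var secondThreshold: Double
    var requiredDelay: TimeInterval = 0.100   // e.g., 100 ms (illustrative)

    func isDeepPress(timeFirstThresholdMet: TimeInterval,
                     timeSecondThresholdMet: TimeInterval,
                     intensityAtSecondTime: Double) -> Bool {
        guard intensityAtSecondTime > secondThreshold else { return false }
        return (timeSecondThresholdMet - timeFirstThresholdMet) >= requiredDelay
    }
}
```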
In some implementations, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, the application being run, the rate at which the intensity is applied, the number of concurrent inputs, user history, environmental factors (e.g., ambient noise), focus selector position, and so forth. Exemplary factors are described in U.S. patent application Ser. Nos. 14/399,606 and 14/624,296, which are incorporated herein by reference in their entireties.
For example, FIG. 4C illustrates a dynamic intensity threshold 480 that changes over time based in part on the intensity of touch input 476 over time. The dynamic intensity threshold 480 is the sum of two components: a first component 474 that decays over time after a predefined delay time p1 from when touch input 476 is initially detected, and a second component 478 that trails the intensity of touch input 476 over time. The initial high intensity threshold of the first component 474 reduces accidental triggering of a "deep press" response, while still allowing an immediate "deep press" response if touch input 476 provides sufficient intensity. The second component 478 reduces unintentional triggering of a "deep press" response by gradual intensity fluctuations in a touch input. In some implementations, when touch input 476 satisfies the dynamic intensity threshold 480 (e.g., at point 481 in FIG. 4C), the "deep press" response is triggered.
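A sketch of such a two-component dynamic threshold follows; the initial value, decay rate, delay p1, and the exponential-lag model used for the second component are illustrative assumptions, not parameters of the embodiments.

```swift
import Foundation

// Two-component dynamic intensity threshold: a first component that decays after a delay p1
// from initial detection, plus a second component that lags behind the touch intensity so
// that gradual fluctuations do not trigger a "deep press".
final class DynamicIntensityThreshold {
    private let initialComponent: Double
    private let decayDelayP1: TimeInterval
    private let decayRate: Double          // intensity units per second
    private let lagFactor: Double          // 0..1, how slowly the second component follows intensity
    private var trailingIntensity: Double = 0
    private var startTime: TimeInterval?

    init(initialComponent: Double = 4.0, decayDelayP1: TimeInterval = 0.5,
         decayRate: Double = 2.0, lagFactor: Double = 0.1) {
        self.initialComponent = initialComponent
        self.decayDelayP1 = decayDelayP1
        self.decayRate = decayRate
        self.lagFactor = lagFactor
    }

    // Feed one intensity sample; returns true when the sample meets the dynamic threshold
    // (e.g., point 481 in FIG. 4C), i.e., when a "deep press" response would be triggered.
    func update(intensity: Double, at time: TimeInterval) -> Bool {
        if startTime == nil { startTime = time }
        let elapsed = time - (startTime ?? time)
        let first = max(0, initialComponent - decayRate * max(0, elapsed - decayDelayP1))
        let second = trailingIntensity
        // The second component follows the intensity with a lag, so only a sufficiently
        // rapid intensity increase can exceed it before it catches up.
        trailingIntensity += lagFactor * (intensity - trailingIntensity)
        return intensity >= first + second
    }
}
```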
FIG. 4D illustrates another dynamic intensity threshold 486 (e.g., intensity threshold ID). FIG. 4D also shows two other intensity thresholds: a first intensity threshold IH and a second intensity threshold IL. In FIG. 4D, although touch input 484 meets the first intensity threshold IH and the second intensity threshold IL before time p2, no response is provided until a delay time p2 has elapsed at time 482. Also in FIG. 4D, the dynamic intensity threshold 486 decays over time, with the decay starting after a predefined delay time p1 has elapsed from time 482 (when the response associated with the second intensity threshold IL was triggered). This type of dynamic intensity threshold reduces accidental triggering of the response associated with the dynamic intensity threshold ID immediately after, or simultaneously with, triggering of the response associated with a lower intensity threshold (such as the first intensity threshold IH or the second intensity threshold IL).
FIG. 4E shows another dynamic intensity threshold 492 (e.g., intensity threshold ID). In FIG. 4E, the response associated with the intensity threshold IL is triggered after a delay time p2 has elapsed from the time touch input 490 was initially detected. Concurrently, the dynamic intensity threshold 492 decays after a predefined delay time p1 has elapsed from the time touch input 490 was initially detected. Thus, a decrease in the intensity of touch input 490 after triggering the response associated with the intensity threshold IL, followed by an increase in the intensity of touch input 490 without releasing touch input 490, can trigger the response associated with the intensity threshold ID (e.g., at time 494), even when the intensity of touch input 490 is below another intensity threshold (e.g., the intensity threshold IL).
An increase in the characteristic intensity of the contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase in the characteristic intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase in the characteristic intensity of the contact from an intensity below the contact detection intensity threshold IT0 to an intensity between the contact detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch surface. A decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold IT0 to an intensity below the contact detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some examples, a shaded circle or oval is used to represent the intensity of a contact on the touch-sensitive surface. In some examples, an unshaded circle or oval is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.
In some implementations described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some implementations, the respective operation is performed in response to detecting that the intensity of the respective contact increases above a press input intensity threshold (e.g., the respective operation is performed on a "downstroke" of the respective press input). In some implementations, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press input threshold (e.g., the respective operation is performed on an "upstroke" of the respective press input).
In some implementations, the device employs intensity hysteresis to avoid accidental inputs, sometimes referred to as "jitter", in which the device defines or selects a hysteresis intensity threshold that has a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in the intensity of the respective contact above the press input intensity threshold and a subsequent decrease in the intensity of the contact below the hysteresis intensity threshold that corresponds to the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in the intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on the "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in the intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and, optionally, a subsequent decrease in the intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in the intensity of the contact or the decrease in the intensity of the contact, depending on the circumstances).
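A sketch of press input detection with intensity hysteresis follows; the press threshold is supplied by the caller, and the 75% hysteresis ratio is an example value from the range mentioned above.

```swift
import Foundation

// Press-input detection with intensity hysteresis, to avoid "jitter".
final class PressInputDetector {
    private let pressThreshold: Double
    private let hysteresisThreshold: Double
    private var pressIsActive = false

    init(pressThreshold: Double, hysteresisRatio: Double = 0.75) {
        self.pressThreshold = pressThreshold
        self.hysteresisThreshold = pressThreshold * hysteresisRatio
    }

    enum Edge { case downstroke, upstroke, none }

    // Feed successive contact-intensity samples; the "downstroke" is reported when intensity
    // rises above the press threshold, and the "upstroke" only when it later falls below the
    // hysteresis threshold (not merely below the press threshold).
    func process(intensity: Double) -> Edge {
        if !pressIsActive, intensity > pressThreshold {
            pressIsActive = true
            return .downstroke
        }
        if pressIsActive, intensity < hysteresisThreshold {
            pressIsActive = false
            return .upstroke
        }
        return .none
    }
}
```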
For ease of explanation, the descriptions of operations performed in response to a press input associated with a press input intensity threshold, or in response to a gesture that includes a press input, are optionally triggered in response to detecting any of the following: an increase in the intensity of a contact above the press input intensity threshold, an increase in the intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, a decrease in the intensity of a contact below the press input intensity threshold, or a decrease in the intensity of a contact below the hysteresis intensity threshold corresponding to the press input intensity threshold. In addition, in examples in which an operation is described as being performed in response to detecting a decrease in the intensity of a contact below the press input intensity threshold, the operation is optionally performed in response to detecting a decrease in the intensity of the contact below a hysteresis intensity threshold that corresponds to, and is lower than, the press input intensity threshold. As described above, in some embodiments, the triggering of these operations also depends on satisfaction of time-based criteria (e.g., a delay time has elapsed between meeting a first intensity threshold and meeting a second intensity threshold).
User interface and associated process
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented on an electronic device, such as portable multifunction device 100 or device 300, having a display, a touch-sensitive surface, and (optionally) one or more sensors for detecting the intensity of contact with the touch-sensitive surface.
Fig. 5A-5R illustrate exemplary user interfaces for drag and drop operations according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes described below with reference to fig. 7A-7F, 8A-8E, and 9A-9D. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having a touch sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a point of representation corresponding to the finger or stylus contact (e.g., a center of gravity of the respective contact or a point associated with the respective contact), or a center of gravity of two or more contacts detected on the touch-sensitive display system 112. However, in response to detecting contact on the touch-sensitive surface 451 when the user interface shown in the figures is displayed on the display 450 along with the focus selector, similar operations are optionally performed on a device having the display 450 and the separate touch-sensitive surface 451.
In FIGS. 5A-5R, and in the description of methods 700, 800, and 900 below, touches or touch inputs, and movements of touches or touch inputs, are performed under the control of a respective user by placement and/or movement, on or near a touch screen (e.g., touch screen 112) or a touch pad (e.g., touch pad 355, sometimes also referred to as a trackpad), of (A) one or more fingers of a user of the electronic device or (B) a stylus or similar tool.
Fig. 5A shows a user interface of the portable multifunction device 100 displayed on the touch screen 112 with a user interface area 502 for a first application (application a) and another user interface area 504 for a second application (application B). The interface area 502 for application a includes four objects, object 1, object 2, object 3, and object 4, and the user interface area 504 for application B includes a fifth object (object 5). In some embodiments, the region 502 of application A may be a view of application A. In some embodiments, the displayed representations of objects 1 through 4 in region 502 may also be views of application A. Similarly, region 504 of application B may be a view of application B, and the displayed representation of object 5 in region 504 may also be a view of application B.
As shown on the right side of fig. 5A, application a or one or more of the application a's views have interactive identifiers for drag operations, drop operations, and spring-loaded operations, indicating that supplemental gesture recognizers for drag gestures, drop gestures, and spring-loaded gestures may (or will) be added to one or more of the application a's views when applicable supplemental gesture recognizer addition criteria are met. In some embodiments, the interaction identifier is stored or included in view metadata 341 (fig. 3B) for each application view having such interaction identifiers. Also, in the example shown in fig. 5A, no supplemental gesture recognizer is added to any of the views of application a, nor is any gesture recognizer failure dependency established.
Fig. 5B shows a first touch (touch 1) on object 4 in the user interface shown in fig. 5A. Thus, the appearance of the object 4 in the user interface area 502 of the application a is updated. In addition, focus selector 506 now overlaps object 4, indicating the location of the first touch input.
If the first touch moves laterally (e.g., to location 505 in user interface region 502) without meeting predefined drag start recognition criteria (discussed in more detail below), the focus selector associated with the first touch moves (e.g., to location 505) without dragging the object (object 4) on which it was initially placed. As discussed below with reference to FIG. 5D, this result may occur when the first touch begins its lateral movement before it has remained in contact with the touch-sensitive surface of the electronic device at the initial touch location, without substantial lateral movement, for at least the minimum amount of time required by the predefined drag start recognition criteria.
As shown on the right side of FIG. 5B, the time at which the first touch (touch 1) on object 4 is detected is time = T1. Application A, or its views, continues to have interaction identifiers for drag operations, drop operations, and spring-loaded operations. Further, in the example shown in FIG. 5B, supplemental gesture recognizers for recognizing a drag start gesture and a drag add gesture (a gesture for adding an object to an existing drag operation) have been added to the application view corresponding to object 4, or to the view corresponding to user interface region 502. Further, a gesture recognizer failure dependency is established such that the long press (LP) gesture recognizer for the application view corresponding to object 4, or for the view of user interface region 502, depends on the failure of the drag start (DS) gesture recognizer. In some embodiments, this failure dependency is indicated by listing the drag start gesture recognizer in the wait-for list 327 (FIG. 3B) of the long press gesture recognizer.
Further discussion regarding interaction identifiers, supplemental gesture recognizers, and automatic or dynamic addition of supplemental gesture recognizers to an application or application view, and automatic establishment of fault dependencies between standard gesture recognizers and supplemental gesture recognizers is provided below with reference to fig. 6D, 6E, 7A-7F, 8A-8E, and 9A-9D.
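By way of comparison only, UIKit's public drag and drop API exposes per-view interactions that loosely parallel the drag, drop, and spring-loaded interaction identifiers discussed here; the following sketch is hypothetical and does not describe the internal supplemental gesture recognizers of these embodiments.

```swift
import UIKit

// Hypothetical setup loosely paralleling the drag, drop, and spring-loaded interactions above.
class DragDropSetupViewController: UIViewController, UIDragInteractionDelegate, UIDropInteractionDelegate {
    let draggableView = UIView()
    let dropTargetView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Drag interaction on the source view (e.g., the view showing object 4).
        draggableView.addInteraction(UIDragInteraction(delegate: self))
        // Drop and spring-loaded interactions on the destination view (e.g., application B's region).
        dropTargetView.addInteraction(UIDropInteraction(delegate: self))
        dropTargetView.addInteraction(UISpringLoadedInteraction { _, _ in
            // e.g., open a folder-like container while the drag is still in progress.
        })
    }

    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        let provider = NSItemProvider(object: "object 4" as NSString)
        return [UIDragItem(itemProvider: provider)]
    }

    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .move)
    }

    func dropInteraction(_ interaction: UIDropInteraction, performDrop session: UIDropSession) {
        // Receive the dropped items here.
    }
}
```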
FIG. 5C illustrates a user interface generated when a first touch (touch 1) is recognized by a deep press (e.g., based on intensity) or long press (e.g., based on time) gesture recognizer. In this example, in response to identifying the first touch as a long press gesture, a set of menu options associated with object 4 is displayed.
As shown on the right side of FIG. 5C, the time at which the first touch (touch 1) on object 4 is recognized as a long press gesture is time = T1 + T2, where T1 is the time at which the first touch was first detected. In some implementations, T2 is a threshold amount of time that the touch input must remain in contact with the touch-sensitive surface of the electronic device, without substantial lateral movement, in order to meet the predefined long press recognition criteria.
FIG. 5D illustrates the beginning of a drag gesture performed with the first touch (touch 1). Accordingly, object 4 moves, or is dragged, with the first touch. At this point, the first touch and object 4 remain in user interface region 502 of application A.
As shown on the right side of FIG. 5D, the time at which the first touch (touch 1) on object 4 is recognized as a drag start gesture is time = T1 + T3, where T1 is the time at which the first touch was first detected. In some embodiments, T3 is the minimum amount of time that the touch input must remain in contact with the touch-sensitive surface of the electronic device, without substantial lateral movement, before beginning lateral movement, in order to meet the predefined drag start recognition criteria. Conversely, if movement of touch 1 is detected before this time threshold (T3) has elapsed, touch 1 moves on the display (e.g., to location 505, as shown in FIG. 5B) without initiating a drag gesture and thus without dragging object 4.
In some embodiments, after initially identifying the touch input (e.g., as a long touch) using a standard gesture recognizer, as shown in fig. 5C, the electronic device then identifies the touch input as a second gesture, such as a drag start gesture, using a supplemental gesture recognizer, as shown in fig. 5D. In such embodiments, T3 (fig. 5D) is a greater or longer time interval than T2 (fig. 5C).
As also shown on the right side of FIG. 5D, a session ID (equal to 1 in this example) has been assigned to the first touch, or to the object dragged by the first touch. As described in more detail below, when a touch is recognized as a drag gesture, a session ID is assigned to the touch, and the touch becomes a separated touch. The session ID corresponds to the set of supplemental gesture recognizers added to application A, or to one or more of its views, which control the processing of the first touch once it has been recognized as a drag gesture. Other drag gestures (e.g., concurrent or subsequent drag gestures) receive their own session IDs, different from the session ID of the drag that includes the first touch.
In these figures (e.g., FIGS. 5A-5R), separated touches (ovals with cross-hatched fill patterns and darker perimeters) are depicted using a pattern different from that of conventional touches, also referred to as attached touches or view-attached touches (ovals with striped fill patterns). In some implementations, separated touches are detached from the standard gesture recognizers of the application views or user interface regions that they traverse, and thus touch events (sometimes referred to herein as drag events) associated with the placement and/or movement of separated touches are handled by the supplemental gesture recognizers associated with those views or user interface regions, but not by the standard gesture recognizers associated with those views or user interface regions. Supplemental gesture recognizers are dynamically added (e.g., by a system process) to an application or view when predefined criteria are met. Touch events associated with the placement and/or movement of conventional touches (sometimes referred to herein as attached touches or view-attached touches) are handled by the standard gesture recognizers associated with those views or user interface regions. A standard gesture recognizer is a gesture recognizer that is ordinarily associated with, or assigned to, an application view or a user interface region of an application by the application.
In FIG. 5E, the first touch moves into user interface region 504 of application B, and object 4 is dragged with it. The right side of FIG. 5E shows that application B, or one or more of the views of application B, has interaction identifiers for drop operations and spring-loaded operations, indicating that supplemental gesture recognizers for drop gestures and spring-loaded gestures may (or will) be added to one or more of the views of application B when the applicable supplemental gesture recognizer addition criteria are met. Also, in the example shown in FIG. 5E, one or more supplemental gesture recognizers for drop gestures have been added to one or more views of application B, no gesture recognizer failure dependencies have been established, and a session ID has been assigned (or continues to be assigned) to the drag operation that includes the first touch. Other examples of supplemental gesture recognizers added to a view, and other examples of gesture recognizer failure dependencies, are discussed below with respect to subsequent figures in the sequence (such as FIG. 5M).
In fig. 5F, a two-finger touch input is detected on object 4 (e.g., two simultaneously detected touches on the same object or view in an application). In some implementations, a two-finger touch is used to initiate a drag operation without having to wait for a timeout period before starting a lateral movement (e.g., drag) of the touch input.
In FIG. 5G, the two-finger touch starts to move, and object 4 is dragged with it. The time in this figure is time = T1 + ΔT, where T1 is the time at which the first touch was first detected, and ΔT is a period shorter than the T2 and T3 periods described above. In some embodiments or circumstances, ΔT is close to, but greater than, zero. Movement of the two-finger touch causes the touches to become separated and a session ID to be assigned to the two touch inputs (e.g., touch 1 and touch 2). As described in more detail below, both touches are separated upon determining that the touches meet predefined drag start criteria. The interaction identifiers, added gesture recognizers, and failure dependency are the same as in FIG. 5D. Recognition of the drag gesture is indicated in FIG. 5G by a change in the appearance of the dragged object (e.g., it is drawn with a thicker perimeter), and the separation of the two touches is indicated in FIG. 5G by their changed appearance.
In FIG. 5H, the two-finger touch is moved from user interface region 502 of application A into user interface region 504 of application B. When the two-finger touch enters user interface region 504 of application B, a drop gesture recognizer is added to user interface region 504. As also shown in FIG. 5H, user interface region 504 has interaction identifiers for drop and spring-loaded interactions.
FIG. 5I illustrates a transition from the user interface shown in FIG. 5D that occurs when a second touch (touch 2) is detected. In this example, the second touch is on an object (object 2) in user interface region 502 for application A. Because the first touch has been separated, the second touch is processed independently of the first touch unless the second input is recognized as a drag add gesture. Thus, if the second touch is not recognized as a drag add gesture, the second touch is processed using the standard gesture recognizers for user interface region 502 of application A. In this example, object 1, object 2, and object 3 are elements of a vertically scrollable list in application A. When the second touch (touch 2) moves vertically downward, the scrollable list scrolls downward by an amount corresponding to the amount of vertical movement of the second touch, as shown in FIG. 5J. As shown, the movement of the second touch need not be perfectly vertical to cause scrolling. Scrolling of the scrollable list causes an additional element (object Z) to scroll into view at the top of the displayed scrollable list.
In fig. 5K, the second touch (touch 2) continues to be moved (e.g., by the user of device 100). In this example, movement of the second touch moves downward and horizontally into the user interface area 504 of application B. Since the second touch has not been recognized as a drag start gesture, even when the second touch moves into the user interface area 504, the second touch remains attached to application A or one or more views of application A, and thus touch events corresponding to the second touch continue to be delivered to application A, and in response to those touch events, the scrollable list in the user interface area 502 of application A continues to scroll according to the vertical amount of movement of the second touch. Continued scrolling of the scrollable list results in another additional element (object Y) scrolling into view at the top of the displayed scrollable list.
Referring again to FIG. 5I, in some embodiments the drag add gesture is a tap or other distinguishing touch gesture performed on an object available to be added to the drag operation (e.g., an object within a predefined distance of an object already participating in the drag operation, or an object designated as available for addition to the drag gesture, for example by the application in which the object is displayed). If the second touch shown in FIG. 5I is recognized as a drag add gesture, object 2 is added to the drag operation associated with the first touch, as shown in FIG. 5L. In FIGS. 5I and 5L, time t2 corresponds to the time at which the second touch (touch 2) is first detected, and T4 is, or corresponds to, the maximum duration of a drag add gesture.
In FIG. 5L, object 2 is stacked behind object 4, and the two objects (object 2 and object 4) are dragged as the first touch moves laterally on touch screen 112. In some embodiments, the drag add gesture is required to be a two-finger tap gesture rather than the single-finger tap gesture shown in FIGS. 5I and 5L, while in some other embodiments both single-finger tap gestures and two-finger tap gestures can be drag add gestures, so long as they meet predefined drag add criteria (e.g., requiring that the user input include a contact at a location corresponding to a user interface object that can be dragged, and requiring that the distance between the second touch and the first touch, for which the drag operation has been recognized, be no greater than a predefined distance or proximity threshold).
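A sketch of such drag add criteria follows; the maximum tap duration (standing in for T4) and the proximity threshold are illustrative assumptions.

```swift
import Foundation
import CoreGraphics

// Illustrative drag-add criteria: a short tap on a draggable object, close enough to the
// touch for which the drag operation has already been recognized.
struct DragAddCriteria {
    var maximumTapDuration: TimeInterval = 0.3   // stands in for T4 (assumed value)
    var proximityThreshold: CGFloat = 200        // maximum distance from the existing drag (assumed)

    func isDragAdd(tapDuration: TimeInterval,
                   tapLocation: CGPoint,
                   tapIsOnDraggableObject: Bool,
                   existingDragLocation: CGPoint) -> Bool {
        guard tapIsOnDraggableObject else { return false }
        guard tapDuration <= maximumTapDuration else { return false }
        let dx = tapLocation.x - existingDragLocation.x
        let dy = tapLocation.y - existingDragLocation.y
        return (dx * dx + dy * dy).squareRoot() <= proximityThreshold
    }
}
```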
In FIG. 5M, object 4 and object 2 (shown only as a shadow where it is stacked under object 4) have been dragged into user interface region 504 of application B. In this example, in response to the separated touch (touch 1) being dragged into user interface region 504 of application B, drop and spring-loaded supplemental gesture recognizers are added to user interface region 504 of application B, and a failure dependency is further created or added such that the long press gesture recognizer depends on the failure of the spring-loaded gesture recognizer in order to recognize a long press gesture.
FIG. 5N illustrates an update of the user interface shown in FIG. 5K that occurs when the first touch is lifted off after object 4 and object 2 have been dragged into user interface region 504 of application B. In FIG. 5N, after lift-off of the first touch, object 4 and object 2 are unstacked and positioned in user interface area 504 of application B. In some implementations, once the drag operation (or drop operation) has been completed and the corresponding touch has been lifted off, the supplemental gesture recognizers used to handle that touch are automatically removed, and some (or all) of the gesture recognizer dependencies corresponding to the removed supplemental gesture recognizers are removed as well. Thus, in this example, FIG. 5N still shows a set of interaction identifiers, but the supplemental gesture recognizers listed in FIG. 5M are no longer listed in FIG. 5N, and similarly the dependency of the long press (LP) gesture recognizer on the (supplemental) spring-loaded gesture recognizer shown in FIG. 5M is no longer present in FIG. 5N.
FIG. 5O represents a transition from the user interface shown in FIG. 5I after the second touch (touch 2) is recognized as a drag gesture. This is a second drag gesture, independent of the first drag gesture of the first touch, and the corresponding touch (touch 2) is assigned a different session ID (e.g., session ID 2) than the first touch, which in this example is assigned session ID 1. In addition, the second touch is split when it is recognized as a drag gesture, thereby eliminating the need for the standard gesture recognizers for user interface region 502 to handle touch events generated by (or for) the second touch.
FIG. 5P shows a transition from the user interface shown in FIG. 5O, in which the first touch (touch 1) and the second touch (touch 2) move in different directions, with object 4 dragged by the movement of the first touch and object 2 dragged by the movement of the second touch. The first touch and the second touch are split touches that are independent of one another, and each is used to drag a different object or set of objects.
FIG. 5Q illustrates a transition from the user interface illustrated in FIG. 5E after the first touch (touch 1) has moved down to object 5 in user interface region 504. As a result of this movement, object 4 is now positioned at least partially over object 5. In this example, a supplemental spring-loaded gesture recognizer has been added to the set of gesture recognizers associated with one or more views of application B. In some embodiments, the supplemental spring-loaded gesture recognizer is added to one or more views of application B upon detection of a triggering event or condition, such as movement of a dragged object (e.g., object 4) over another object (e.g., object 5) capable of responding to a spring-loaded gesture.
FIG. 5R represents a transition from the user interface shown in FIG. 5Q after the first touch (touch 1), or the positioning of the object dragged by the first touch (object 4) over object 5, is recognized as a spring-loaded gesture. As described in more detail below, recognition of the spring-loaded gesture is accomplished by one of the supplemental gesture recognizers added to user interface region 504 (e.g., a view of application B) on the basis of the spring-loaded interaction identifier and the presence, in interface region 504, of a touch that triggers the one or more supplemental gesture recognizers. In response to the spring-loaded gesture, object 5, which is a folder object (sometimes referred to as a container or container object), is opened while object 4 continues to be dragged by the first touch (touch 1), as shown in FIG. 5R. Optionally, the user may continue to move the first touch in order to drag object 4 into the now-open folder corresponding to object 5, and may then drop object 4 into that folder by lifting the first touch off touch screen 112 after object 4 has been dragged into the interior of the open folder.
It should be noted that, unlike the scenario shown in FIG. 5N, in FIG. 5R the first touch continues to be in contact with touch screen 112, so a drop operation or even a further spring-loaded operation may still occur, and thus the drop and spring-loaded supplemental gesture recognizers continue to be associated with one or more views of application B.
Touch events are generated for touches to represent the touches over time. A touch has a phase selected from a predetermined set of phases: a touch start phase for a respective touch indicates that the respective touch has just been detected on the touch-sensitive surface; a touch move phase for the respective touch indicates that the respective touch has moved on the touch-sensitive surface; a touch end phase for the respective touch indicates that the respective touch has ceased to be detected on the touch-sensitive surface, ending the gesture; and a touch cancel phase for the respective touch indicates that the respective touch has been determined to be an unintentional touch or has otherwise been identified as a touch that should be ignored. In some implementations, touch events are processed by one or more gesture recognizers to recognize gestures (e.g., taps, swipes, drags, pinches, etc.) performed by touches detected on the touch-sensitive surface. In some implementations, when a touch cancel event is issued for a respective touch (i.e., a touch event for (or including) a touch having a phase equal to touch cancel), an application performing an operation based on the respective touch cancels the operation and reverts to the state prior to detection of that touch. In some implementations, when a touch cancel event is detected for a respective touch, one or more gesture recognizers that were evaluating the touch as part of a gesture are cancelled (e.g., on the assumption that cancellation of the touch makes it impossible to satisfy the gesture definition used by the gesture recognizer). In some implementations, an application responds differently to a touch cancel event for a respective touch than to a touch end event. For example, a tap input ending with a touch cancel event (e.g., a touch event, for a touch corresponding to the tap input, with a touch cancel phase) is ignored, whereas a tap input ending with a touch end event (e.g., a touch event, for a touch corresponding to the tap input, with a touch end phase) is processed as a tap. As another example, when a moving contact that moves an object over the display is detected: if a touch event with a touch end phase is detected after the object has moved across the display, the object remains at the position to which it was moved or (optionally) continues with some inertia; whereas if a touch event with a touch cancel phase is detected after the object has moved across the display, the movement of the object is reversed to the position at which the object was displayed before the moving contact was detected. An example of a touch processed without splitting the touch is described below with reference to FIG. 6B.
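Where the end-versus-cancel distinction matters in practice is in how an application commits or reverts state. The following is a minimal sketch, assuming a simple draggable view and using standard UIKit responder methods; it illustrates the behavior described above and is not the implementation claimed here.

```swift
import UIKit

// Illustrative sketch only: a draggable view that keeps its new position when the
// touch ends normally, but reverts to its original position when the touch is
// cancelled, mirroring the end-vs-cancel behavior described above.
class DraggableView: UIView {
    private var originalCenter: CGPoint = .zero

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        originalCenter = center                    // remember state before the touch
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let superview = superview else { return }
        center = touch.location(in: superview)     // follow the moving contact
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Touch ended normally: keep the position the object was moved to.
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        center = originalCenter                    // touch cancelled: revert the move
    }
}
```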
Fig. 6A illustrates touch events and drag events generated over a period of time while a touch (sometimes referred to as touch input) moves from a user interface area for a first application (application a) to a user interface area for a second application (application B). In some embodiments, the detected touch corresponds to a contact of a user's finger or a stylus or the like on a touch-sensitive surface of the electronic device. Examples of such movements of the touch are shown in fig. 5B, 5D, and 5E. As represented by the "touch event" sequence in fig. 6A, a touch event is generated when a touch is initially detected prior to recognition of the touch input as a drag gesture. The sequence of touch events represents a touch evolving through a series of phases, such as start, move, and cancel. When a touch is initially detected in FIG. 5B, the touch is represented in a touch event having a touch start phase; when movement of the touch is detected in fig. 5D, the touch is represented in a touch event having a touch movement phase; then in FIG. 5D, when a touch is recognized as a drag gesture, the touch is represented in a touch event with a touch cancel phase. An instance of a touch event having a touch phase value corresponding to the phase or state of the touch is delivered to application a for processing, e.g., to update the user interface of application a. In some implementations, multiple instances of a touch event having a touch phase equal to "move" can be generated as the touch input moves.
When the touch input is recognized as a drag gesture, the touch is cancelled with respect to application A and a drag event is generated. Cancelling the touch input for application A enables application A to reverse any operation performed in response to detecting the touch (e.g., when touch 1 is cancelled in FIG. 5D, as a result of touch 1 being recognized as part of a drag gesture, the menu of FIG. 5C, displayed in response to detecting the touch on object 4 for a threshold amount of time, ceases to be displayed) and to ignore the touch for the purpose of recognizing gestures performed by attached touches. In some embodiments, the drag event includes a touch that is marked or processed as a split touch. In some embodiments, touch events delivered to an application, such as application A, include a touch list associated with application A or a view of application A, and the touch list indicates, for each listed touch, whether the listed touch is an attached touch (sometimes referred to as a view-attached touch) or a detached touch.
Attached touches delivered to application A are processed by the standard gesture recognizers associated with application A or a view of application A, as well as by any supplemental gesture recognizers associated with application A or a view of application A (e.g., gesture recognizers added to the respective view of application A as a result of an interaction identifier associated with the respective view of application A). A split touch delivered to application A, on the other hand, is processed only by the supplemental gesture recognizers associated with application A or a view of application A, and is ignored by (i.e., not processed by) the standard gesture recognizers associated with application A or a view of application A.
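As an illustration of the attached/detached distinction, the sketch below routes the touches in a per-view touch list either to both the standard and supplemental gesture recognizers (attached touches) or only to the supplemental ones (detached touches). The types, flags, and the feed helper are hypothetical names introduced for this example.

```swift
import UIKit

// Hypothetical sketch: routing entries of a touch list to recognizers.
struct DeliveredTouch {
    let touch: UITouch
    let isDetached: Bool          // true once the touch has been split off for a drag session
}

func deliver(_ touchList: [DeliveredTouch],
             standard: [UIGestureRecognizer],
             supplemental: [UIGestureRecognizer]) {
    for entry in touchList {
        // Supplemental recognizers (drag start, drop, spring-loaded) see every touch.
        supplemental.forEach { feed($0, entry.touch) }
        // Standard recognizers only see view-attached touches.
        if !entry.isDetached {
            standard.forEach { feed($0, entry.touch) }
        }
    }
}

// Placeholder for whatever mechanism actually hands a touch to a recognizer.
func feed(_ recognizer: UIGestureRecognizer, _ touch: UITouch) { /* ... */ }
```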
A touch associated with a drag event also has a phase drawn from a series of phases, such as "enter", "within", and "exit". As the touch moves, instances of the drag event are generated over time: the touch in the first instance of the drag event has a phase (sometimes referred to as a touch phase) equal to "enter"; the touch in subsequent instances of the drag event has a phase equal to "within" (e.g., corresponding to movement within a view or application view, sometimes referred to as a user interface region); and the touch in the final instance of the drag event has a phase equal to "exit", which is generated in response to the touch lifting off or the touch exiting the respective view.
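A minimal sketch of the drag-event phases just described, using an assumed enum; the names mirror the "enter", "within", and "exit" phases above, but the types are illustrative rather than taken from this document.

```swift
// Hypothetical phase type for touches carried in drag events.
enum DragTouchPhase {
    case enter    // first drag-event instance: the detached touch entered the view
    case within   // subsequent instances: the touch is moving within the view
    case exit     // final instance: the touch lifted off or left the view
}

// Example consumer: react to each phase as drag events arrive for a view.
func handle(dragPhase: DragTouchPhase) {
    switch dragPhase {
    case .enter:  print("highlight the view as a potential drop target")
    case .within: print("update the drop preview position")
    case .exit:   print("remove the highlight; finalize or abandon the drop")
    }
}
```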
As also shown in fig. 6A, when a touch is split, a touch event identifies the touch as cancelled and the touch is included in a corresponding drag event, which identifies the touch as having an "enter" phase, all of which occur in response to identifying the corresponding touch as a drag gesture. Note that initially recognizing a touch as a drag gesture is sometimes referred to as recognition of a drag start gesture.
FIG. 6B is similar to FIG. 6A, but corresponds to movement of a touch from a start position in the user interface area of application A to an end position in the user interface area of application B without the touch being split, e.g., because the touch is not recognized as a drag gesture. Because the touch is not split, only touch event instances are generated, and all of the touch event instances are delivered to application A because, in accordance with some embodiments, an attached touch is delivered to the application or application view associated with the initial location of the attached touch even when the touch moves outside the application or application view associated with that initial location. The phase of the touch in the touch events generated in this example is "start" when the touch is initially detected, "move" when movement of the touch is detected, and then "end" when lift-off of the touch is detected.
FIG. 6C schematically illustrates the processing of a detected touch by a set of processes: a source process (process A) corresponding to a first application, such as application A; a system drag management process, which in some embodiments is implemented by the contact/motion module 130 (FIGS. 1 and 3A) or a portion of the contact/motion module; a system touch management process corresponding to hit view determination module 314 (FIG. 3B); and a target process corresponding to a second application, such as application B. As shown in FIG. 6C, when one or more touches are detected and determined to be performing a drag gesture, the system drag management process starts a session for the one or more touches and sends a session ID assigned to the one or more touches to the source process. The system drag management process also conveys information about the one or more touches to system touch management, which splits the one or more touches and sends a touch cancel event or instruction to the source process. The touch cancel event or instruction causes the standard gesture recognizers of the source process to transition to a terminal state and to cease processing the one or more split touches. In addition, when the one or more touches move into a user interface area for the second application, system touch management conveys information about the split one or more touches to the target process.
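The message flow of FIG. 6C can be sketched, very schematically, as the sequence of steps below. All function and type names are assumptions introduced for illustration; the actual inter-process plumbing is not specified here.

```swift
// Very schematic sketch (assumed names throughout) of the FIG. 6C flow:
// drag management opens a session, touch management splits the touch and cancels it
// for the source, and subsequent updates go to the target process.
struct DragSessionID { let raw: Int }

func touchRecognizedAsDrag(touchID: Int, sourceApp: String) {
    let session = DragSessionID(raw: touchID)          // system drag management starts a session
    notifySource(sourceApp, sessionID: session)        // session ID is sent to the source process
    detachTouch(touchID)                               // system touch management splits the touch
    sendTouchCancel(to: sourceApp, touchID: touchID)   // standard recognizers go to a terminal state
}

func touchMoved(touchID: Int, overApp targetApp: String) {
    forwardDetachedTouch(touchID, to: targetApp)       // split touch is routed to the target process
}

// Placeholders for the inter-process plumbing, which is not shown in the figure.
func notifySource(_ app: String, sessionID: DragSessionID) {}
func detachTouch(_ id: Int) {}
func sendTouchCancel(to app: String, touchID: Int) {}
func forwardDetachedTouch(_ id: Int, to app: String) {}
```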
FIG. 6D illustrates an example of a gesture recognition hierarchy 600-A before any supplemental gesture recognizer is added to the set of gesture recognizers for an application or application view, and FIG. 6E illustrates an example of the corresponding gesture recognition hierarchy 600-B after one or more supplemental gesture recognizers have been added to the set of gesture recognizers for the application or application view. In these examples, prior to the addition of any supplemental gesture recognizers, the standard gesture recognizers for the application or application view (e.g., gesture recognizers added to the view by the application itself to support various interactions with the application) include a long press gesture recognizer (LongPress GR), for recognizing a touch gesture based on the duration of the touch without substantial lateral movement (e.g., for triggering display of a menu, such as the one shown in FIG. 5C, when a long press gesture is detected on object 4), and an intensity-based gesture recognizer (Intensity-Based GR), for recognizing a touch gesture based on the intensity of the touch gesture satisfying a predefined intensity threshold (e.g., for triggering display of an expanded view of content corresponding to object 4 when a deep press gesture satisfying the predefined intensity threshold is detected on object 4). In the example shown in FIG. 6E, the supplemental gesture recognizers added to the set of gesture recognizers for the application include a drag start gesture recognizer (DragStart GR) for recognizing a drag start gesture (e.g., for detecting a drag gesture as shown in FIG. 5D, where object 4 is dragged in response to movement of touch 1).
As shown in FIG. 6D, gesture recognition hierarchy 600-A indicates that, before any supplemental gesture recognizer is added to the set of gesture recognizers for the application or application view, the long press gesture recognizer depends on failure of the intensity-based gesture recognizer. In some implementations, the long press gesture recognizer being dependent on failure of the intensity-based gesture recognizer means that the long press gesture recognizer cannot recognize a touch input unless the intensity-based gesture recognizer fails to recognize that touch input as an intensity-based touch input (e.g., a deep press). In some implementations, it means that when the intensity-based gesture recognizer recognizes an intensity-based touch input (e.g., a deep press), the long press gesture recognizer cannot recognize the touch input; the long press gesture recognizer is enabled to attempt to recognize the touch input only when the intensity-based gesture recognizer fails to recognize it. In some embodiments, this dependency is achieved by listing the intensity-based gesture recognizer in a wait list 127 (FIG. 3C) of the long press gesture recognizer.
As shown in FIG. 6E, gesture recognition hierarchy 600-B indicates that, after the supplemental gesture recognizer is added to the set of gesture recognizers for the application, the long press gesture recognizer depends on failure of both the intensity-based gesture recognizer and the drag start gesture recognizer, where the drag start gesture recognizer is a supplemental gesture recognizer added to the set of gesture recognizers for the application or application view. In addition, the drag start gesture recognizer depends on failure of the intensity-based gesture recognizer. Thus, the long press gesture recognizer cannot recognize the touch input as a long press gesture until the intensity-based gesture recognizer fails to recognize the touch input (e.g., as a deep press) and the drag start gesture recognizer fails to recognize the touch input (e.g., as a drag gesture). Similarly, the drag start gesture recognizer cannot recognize the touch input as a drag start gesture until the intensity-based gesture recognizer fails to recognize the touch input (e.g., as a deep press). In some embodiments, these failure dependencies are achieved by listing the intensity-based gesture recognizer and the drag start gesture recognizer in the wait list 127 (FIG. 3C) of the long press gesture recognizer, and by listing the intensity-based gesture recognizer in the wait list 127 (FIG. 3C) of the drag start gesture recognizer.
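For readers familiar with UIKit, the failure dependencies of FIGS. 6D and 6E resemble what the public require(toFail:) API expresses. The sketch below is an analogy only, assuming placeholder recognizer classes (IntensityBasedGR, DragStartGR) that stand in for the intensity-based and supplemental drag-start gesture recognizers; it is not the wait-list mechanism described above.

```swift
import UIKit

// Placeholder stand-ins for the intensity-based and drag-start gesture recognizers.
final class IntensityBasedGR: UIGestureRecognizer {}
final class DragStartGR: UIGestureRecognizer {}

func installFailureDependencies(on view: UIView) {
    let longPress = UILongPressGestureRecognizer(target: nil, action: nil)
    let deepPress = IntensityBasedGR(target: nil, action: nil)
    let dragStart = DragStartGR(target: nil, action: nil)

    // FIG. 6D: the long press cannot be recognized until the intensity-based recognizer fails.
    longPress.require(toFail: deepPress)

    // FIG. 6E: once the supplemental drag-start recognizer is added, the long press also
    // waits for it to fail, and the drag-start recognizer waits for the deep press to fail.
    longPress.require(toFail: dragStart)
    dragStart.require(toFail: deepPress)

    [deepPress, dragStart, longPress].forEach(view.addGestureRecognizer)
}
```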
Drag-and-drop gesture recognition
FIGS. 7A-7F are flowcharts illustrating a method 700 of processing a touch detected by an electronic device, including determining whether the touch corresponds to a drag gesture (sometimes referred to as a drag-and-drop gesture), and processing the touch according to the determination, according to some embodiments. The method 700 is performed at an electronic device (e.g., the device 300 of FIG. 3 or the portable multifunction device 100 of FIG. 1) having a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface. In some implementations, the display is a touch screen display and the touch-sensitive surface is on or integrated with the display. In some implementations, the display is separate from the touch-sensitive surface. Some operations in method 700 are optionally combined and/or the order of some operations is optionally changed.
As described below, the method 700 provides an intuitive way of processing touches detected by a device, which may or may not be a drag gesture. When a user wants to drag one or more objects from a first application view or area to another application view or area, the method reduces the number, extent, and/or nature of the inputs from the user, thereby creating a more efficient human-machine interface. For battery-powered electronic devices, enabling a user to perform drag-and-drop operations faster and more efficiently conserves power and increases the time between battery charges.
The device displays a user interface (702) for a first application program comprising a plurality of user interface objects on a display thereof. While displaying a user interface for a first application, the device receives information describing a first touch (e.g., one or more touch events) at the first application, the first touch being detected at a location on the touch-sensitive surface corresponding to a respective user interface object of a plurality of user interface objects in the first application (704). For example, the first touch may be touch 1 shown in fig. 5B.
In response to receiving the information describing the first touch, and in accordance with a determination (e.g., by the first application) that the first touch meets predefined criteria associated with initiating a respective type of interface operation (e.g., a drag-and-drop operation, sometimes referred to as a drag operation) (706): the device initiates a respective type of user interface operation (e.g., a first user interface operation) involving a respective user interface object (708).
In some implementations, the predefined criteria associated with initiating the respective type of interface operation are criteria associated with initiating a drag operation, sometimes referred to as drag criteria. In some implementations, the drag criteria include a timing criterion (e.g., a first criterion) that is met when the user input remains on the touch-sensitive surface for at least a predefined period of time and the intensity of the detected user input remains below an intensity threshold during the predefined period of time (e.g., a long press criterion). In some implementations, the drag criteria includes a lateral displacement criteria (e.g., a second criteria) that is met when the detected lateral displacement of the user input meets (e.g., meets or exceeds) a lateral displacement threshold. In some implementations, the drag criteria require that the timing criteria be met before the lateral displacement criteria are met (e.g., for an input to meet the drag criteria, the input must first meet the timing criteria and then meet the lateral displacement criteria).
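As a rough sketch of these drag criteria (the values and names below are illustrative assumptions, not thresholds taken from this document), one might evaluate a candidate touch as follows; a fuller implementation would also fail the gesture if the displacement threshold were crossed before the timing criterion was met.

```swift
import UIKit

// Hedged sketch of the drag criteria: a timing criterion (held long enough, below the
// intensity threshold) followed by a lateral-displacement criterion. Values are illustrative.
struct DragCriteria {
    var minimumPressDuration: TimeInterval = 0.5
    var lateralDisplacementThreshold: CGFloat = 10.0
    var intensityThreshold: CGFloat = 1.0        // normalized force; an assumption

    func isDragStart(start: CGPoint, current: CGPoint,
                     elapsed: TimeInterval, peakIntensity: CGFloat) -> Bool {
        // Timing criterion: the touch has been held for the minimum duration without a deep press.
        let heldLongEnough = elapsed >= minimumPressDuration && peakIntensity < intensityThreshold
        // Lateral displacement criterion: the touch has then moved far enough.
        let displacement = hypot(current.x - start.x, current.y - start.y)
        return heldLongEnough && displacement >= lateralDisplacementThreshold
    }
}
```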
The method 700 further includes, while continuing to detect the first touch on the touch-sensitive surface and while continuing the user interface operation involving the respective user interface object (710), receiving, at the first application, an instruction to ignore the first touch with respect to user interface operations other than the respective type of interface operation (e.g., drag, drop, or spring-loaded operations) (712); and ignoring, by the first application, the first touch with respect to user interface operations other than the respective type of interface operation (714). For example, ignoring the first touch by the first application is accomplished by ceasing to process the first touch, optionally in response to a touch cancel event. Thus, even while the user interface operation involving the user interface object continues, the first touch is treated as a cancelled touch. In some implementations, the first application stops processing (e.g., stops receiving information about) the first touch with respect to user interface operations other than the respective type of interface operation.
In some implementations, upon detecting the first touch and before determining that the first touch meets predefined criteria associated with initiating the respective type of interface operation, the method includes processing the touch using one or more gesture recognizers associated with user interface operations other than the respective type of interface operation. In some implementations, initiating respective types of user interface operations involving respective user interface objects includes processing touches using gesture recognizers associated with the respective types of interface operations (e.g., drag gesture recognizers for processing drag inputs on touch-sensitive surfaces). In some implementations, there are multiple instances of the gesture recognizer. For example, the first touch is processed using an instance of a drag gesture recognizer.
In some implementations, the method 700 includes, prior to determining (e.g., via a system drag management process) that the first touch meets predefined criteria associated with initiating the respective type of interface operation, processing the first touch as a view-attached touch event (720), and, in accordance with a determination that the first touch meets the predefined criteria associated with initiating the respective type of interface operation, processing the first touch as a separate touch event (e.g., by initiating the respective type of user interface operation) (722). Processing the touch input initially as a view-attached touch event provides consistency in the processing of touch inputs and enables the user to control the view to which the touch input is directed, providing a more intuitive interface for controlling the user interface. Processing the touch input as a separate touch event after determining that the touch input meets predefined criteria associated with a particular operation provides flexibility in processing the touch input to perform that operation while enabling the device to continue to receive and process additional touch inputs as normal touches to perform additional operations. Providing more consistent and intuitive control options, while providing flexibility in receiving and processing multiple touch inputs at once, reduces the number of inputs needed to perform an operation, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user provide inputs that achieve an intended result more quickly and by reducing user mistakes in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the method 700 further comprises: before determining (e.g., via a system drag management process) that the first touch meets predefined criteria associated with initiating the respective type of interface operation, processing (724) the first touch as a view-attached touch event using one or more gesture recognizers associated with the first application (e.g., the "standard gesture recognizers" discussed above with reference to FIGS. 5D, 5I, 5O, and 6D-6E); and, in accordance with a determination that the first user input meets the predefined criteria associated with initiating the respective type of interface operation (726): ceasing to process the first touch as a view-attached touch event using the one or more gesture recognizers associated with the first application (728); and processing the first touch as a separate touch event (730). For example, in some implementations, a separated touch is processed using a drag gesture recognizer. Processing view-attached touch inputs using gesture recognizers associated with the application, and ceasing to use those gesture recognizers to process a touch input after determining that the touch input meets predefined criteria associated with a particular operation, leaves those gesture recognizers available to process additional touch inputs. Providing this additional capability to handle touch inputs enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide inputs that achieve an intended result more quickly and by reducing user mistakes in operating/interacting with the device), which in turn reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the method 700 includes detecting movement of the first touch outside of an area corresponding to a user interface for the first application (732). For example, as shown in fig. 5D and 5E, the first touch moves from the user interface area of application a to the user interface area of application B. In such an embodiment, the method 700 further includes, in response to receiving information describing the first touch (734), in accordance with a determination (e.g., by the first application) that the first touch does not satisfy predefined criteria associated with initiating the respective type of interface operation (e.g., the first touch is not recognized as part of a drag operation during an initial portion of the gesture), continuing to provide the information describing the first touch to the first application (736) (e.g., by a different system procedure than the first application) even after the first touch moves outside of an area corresponding to the user interface for the first application and enters an area corresponding to the user interface for the second application. For example, in the examples shown in fig. 5I, 5J, and 5K, even after touch 2 moves outside of the area corresponding to the user interface for application a and enters the area corresponding to the user interface for application B, information about touch 2 continues to be provided to application a.
In some implementations, the first touch is processed using a hit view determined based on an initial contact location of the first touch, independent of movement of the first touch (e.g., even if the first touch moves to a location on the touch-sensitive surface that corresponds to a location in the user interface outside of the hit view).
In such an embodiment, the method 700 further includes, in response to receiving the information describing the first touch (734), in accordance with a determination that the first touch meets predefined criteria associated with initiating the respective type of interface operation (e.g., the first touch is identified as part of a drag operation during an initial portion of the gesture) (738): providing information describing the first touch to the first application (e.g., by a system process distinct from the first application) while the first touch is within the area corresponding to the user interface for the first application (740); and switching to providing the information describing the first touch to the second application (e.g., instead of to the first application) when the first touch moves outside of the area corresponding to the user interface for the first application and into the area corresponding to the user interface for the second application (742). Thus, operations 740 and 742 correspond to a process of separating the touch.
In some implementations performing operations 734-742, the method 700 includes, upon detecting movement of the first touch, and in accordance with a determination that the first touch meets predefined criteria associated with initiating a respective type of interface operation (738): one or more respective regions corresponding to respective locations of the first touch are determined, wherein the one or more respective regions are associated with one or more respective applications (744), and when the first touch is within the respective region, information describing the first touch is provided to the respective application for the respective region (746). For example, as the touch input moves along the touch-sensitive surface, the device performs repeated hit area tests to determine a respective hit view for the respective location of the touch input. In some embodiments, the respective hit view is determined at predefined time intervals (sampling intervals of touch inputs). In some embodiments, the predefined time intervals occur sequentially at a rate of 10 or more time intervals per second. In some embodiments, the respective hit views correspond to different applications of the plurality of applications, such as application a and application B in fig. 5A-5R. For example, a first hit view corresponding to a first location of a first user input is in a first application (e.g., application a, fig. 5A-5R), and a second hit view corresponding to a second location of the first user input is in a second application (e.g., application B, fig. 5A-5R) different from the first application.
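A hedged sketch of the repeated hit testing described above, using UIKit's public hitTest(_:with:) to find the view under the touch's current location at each sample; the delivery step is a hypothetical placeholder rather than an API defined by this document.

```swift
import UIKit

// Illustrative sketch: at each sampling interval, hit-test the detached touch's current
// location against the window to find the view (and thus the region/application) that
// should receive the corresponding drag event.
func routeDetachedTouch(_ touch: UITouch, in window: UIWindow, event: UIEvent?) {
    let location = touch.location(in: window)
    // hitTest(_:with:) walks the view hierarchy to find the deepest view under the point.
    if let hitView = window.hitTest(location, with: event) {
        deliverDragEvent(for: touch, to: hitView)   // hypothetical delivery step
    }
}

func deliverDragEvent(for touch: UITouch, to view: UIView) { /* ... */ }
```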
In some implementations performing operations 734-742, the method 700 includes, in accordance with a determination that the first touch meets predefined criteria associated with initiating the respective type of interface operation (738), providing information describing the first touch to the respective application for the respective region while providing information describing one or more additional touches that do not meet the predefined criteria associated with initiating the respective type of interface operation (748). For example, in some embodiments, information about a separate touch (e.g., a first touch) and information about a normal, view-attached touch (e.g., one or more additional touches) are provided to the respective application in the same data structure (e.g., a data structure having one or more touch lists) that is received and processed (e.g., by the first application) in the same processing frame.
In some implementations, the method 700 further includes, when user interface operations involving the respective user interface object continue (e.g., when drag operations continue, as shown, for example, in fig. 5I and 5J), receiving information (e.g., information about one or more touch events) describing a change in a second touch (e.g., touch 2 in fig. 5I and 5J) on the touch-sensitive surface at a location corresponding to the first application (750); and in response to detecting the change in the second touch, processing information describing the change in the second touch using one or more gesture recognizers associated with the first application without regard to the first touch (752). Note that the second touch may be detected before, simultaneously with, or after the first touch is detected. In such embodiments, the second touch is not affected by the first touch as long as it does not meet special case criteria, such as drag-add criteria.
In some implementations, as user interface operations involving the respective user interface objects continue, the first application receives information (e.g., one or more touch events) describing a second touch (e.g., a second touch detected before, simultaneously with, or after the detection of the first touch) on the touch-sensitive surface at a location corresponding to the first application; and in response to detecting the second touch, processing information describing the second touch, regardless of whether the first touch meets predefined criteria associated with initiating a corresponding type of interface operation. For example, the first application processes the current and/or future touches while continuing to detect the first touch as a drag operation.
In some implementations, while the first touch is detected on the touch-sensitive surface and while the user interface operation involving the respective user interface object continues (e.g., while the drag operation continues), the method 700 includes receiving, at the first application, information describing a second touch distinct from the first touch (e.g., as shown in FIGS. 5I and 5J, the second touch is detected on the touch-sensitive surface at a location corresponding to a second user interface object of the plurality of user interface objects in the first application; additionally, the second touch is detected while the first touch continues to be detected) (754). In these embodiments of method 700, in response to receiving the information describing the second touch, and in accordance with a determination (e.g., by the first application) that the second touch meets predefined add criteria (sometimes referred to herein as drag add criteria) associated with the user interface operation (e.g., criteria for adding a user interface object to a drag operation) (756), method 700 includes receiving, at the first application, an instruction to ignore (e.g., stop processing) the second touch with respect to user interface operations other than the respective type of interface operation (e.g., drag, drop, or spring-loaded operations) (758); and ignoring, by the first application, the second touch with respect to user interface operations other than the respective type of interface operation (e.g., treating the second touch as a cancelled touch with respect to user interface operations other than drag-and-drop operations, even while the user interface operation, such as the drag operation, involving the user interface object continues) (760). In some implementations, the method 700 includes processing the second touch in accordance with the user interface operation involving the respective user interface object.
In some embodiments where method 700 includes operation 760, the predefined addition criteria includes a criterion that is met when a distance between a first touch (e.g., corresponding to a drag operation) and a second touch (e.g., for adding an object to the drag operation) is less than a predefined proximity threshold (762). Optionally, the predefined addition criteria requires that the distance between the first touch and the second touch is less than a predefined proximity threshold when lift-off of the second touch is detected.
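A minimal sketch of the proximity criterion, assuming an illustrative 100-point threshold (not a value specified in this document):

```swift
import UIKit

// Hedged sketch: the drag-add tap (second touch) must land within a predefined
// distance of the touch driving the drag operation.
func meetsDragAddProximity(dragTouchLocation: CGPoint,
                           addTouchLocation: CGPoint,
                           proximityThreshold: CGFloat = 100) -> Bool {
    let distance = hypot(addTouchLocation.x - dragTouchLocation.x,
                         addTouchLocation.y - dragTouchLocation.y)
    return distance < proximityThreshold
}
```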
As briefly explained above with reference to FIGS. 5O and 5P, two separate drag operations may be performed simultaneously. In some implementations, the method 700 includes receiving, at the first application, while displaying the user interface for the first application, information (e.g., one or more touch events) describing a third touch (e.g., distinct from the first touch) detected at a location on the touch-sensitive surface corresponding to a third user interface object of the plurality of user interface objects in the first application (764). In such an embodiment, the method 700 further includes, in response to receiving the information describing the third touch, and in accordance with a determination (e.g., by the first application) that the third touch meets predefined criteria associated with initiating a respective type of interface operation (e.g., drag, drop, or spring-loading): initiating a second user interface operation of the respective type (e.g., distinct from the (first) user interface operation) involving the third user interface object (768).
Further, such aspects of method 700 include, while the third touch continues to be detected on the touch-sensitive surface and while the second user interface operation involving the third user interface object continues (770): receiving, at the first application, an instruction to ignore the third touch (e.g., to stop processing it, optionally in response to a touch cancel event) with respect to user interface operations other than the respective type of interface operation (e.g., drag, drop, or spring-loaded operations) (772); and ignoring, by the first application, the third touch with respect to user interface operations other than the respective type of interface operation (774). For example, even while the user interface operation involving the third user interface object continues, the third touch is treated as a cancelled touch. In some implementations, the third touch is detected while the first touch is detected. In some embodiments, the second user interface operation (e.g., the second drag operation) is distinct from the first user interface operation (e.g., the first drag operation), as shown in FIGS. 5O and 5P, where two separate drag operations are performed simultaneously.
In some implementations, the method 700 includes, in response to receiving information describing the first touch, determining, by the first application, whether the first touch meets predefined criteria associated with initiating a respective type of interface operation (776). For example, method 700 optionally includes determining whether gesture recognition criteria associated with a respective gesture recognizer are met. In some implementations, the system process or the first application determines whether the touch should be split. For example, referring to FIGS. 5B, 5C, and 5D, it may be determined that a touch (e.g., touch 1, initially received in FIG. 5B) meets a drag start criterion, in which case the touch is split as shown in FIG. 5D, or does not meet the drag start criterion, in which case the touch is treated as an attached touch as shown in FIG. 5C. As shown in FIG. 6C, in some embodiments, whether to split a touch is determined by a system process, such as a system touch management process, which corresponds to hit view determination module 314 (FIG. 3B). Alternatively, whether to split the touch is determined by the application having a user interface area corresponding to the initial position of the touch.
In some implementations, the instruction, received at the first application, to ignore the first touch with respect to user interface operations other than the respective type of interface operation includes a touch cancel event (778). For example, see the touch cancel event shown in FIG. 6A and the related discussion above. In some embodiments, method 700 includes sending the instruction to ignore the first touch with respect to user interface operations other than the respective type of user interface operation to a plurality of applications that are receiving or have received information describing the first touch (e.g., any application that is currently monitoring, or "listening" for, the first touch), the plurality of applications including the first application. In some implementations, the plurality of applications includes a home screen application (sometimes referred to as a "SpringBoard") that monitors for touches even when the home screen application is not actively displayed.
In some implementations, the method 700 includes, in response to receiving the touch cancellation event, aborting execution of a user interface operation other than the respective type of interface operation (e.g., a user interface operation involving the respective user interface object), and failing to identify (e.g., stop attempting to identify or cancel attempting to identify) the first touch as a gesture other than one of a predefined set of gestures associated with the respective type of interface operation (780). For example, the method 700 optionally cancels or reverses user interface behavior, such as object transformations and/or animations associated with operations other than drag-and-drop operations, and optionally sets a gesture recognizer for gestures other than drag-and-drop related gestures to a cancel state (or other terminal state) instead of a "gesture possible" state.
In some embodiments of method 700, initiating a respective type of user interface operation (e.g., a drag operation for dragging a respective user interface object) involving the respective user interface object (708) includes providing a unique session identifier (782) identifying the respective type of user interface operation and the respective user interface object to the first application. For example, in FIG. 5D, a session identifier has been assigned to the touch (touch 1), which will initiate a drag operation involving object 4, object 4 being a user interface object in user interface 502 for application A. In some embodiments, a session identifier (sometimes referred to as a "session ID") identifies a drag operation and content (dragged) involved in the drag operation. In some embodiments, each drag operation has a unique session identifier (e.g., concurrent but different drag operations may be distinguished based on their respective session identifiers). For example, as shown in fig. 5O and 5P, two separate drag operations are being performed, and each drag operation is assigned a different session ID.
In some embodiments of method 700, the unique session identifier is associated with additional information about the drag operation, including one or more of: the total number of user interface objects, including the respective user interface object, involved in the respective type of user interface operation, and one or more content types of the user interface objects involved in the respective type of user interface operation (784). For example, in some embodiments, the unique session identifier includes the additional information, or the unique session identifier may be used to retrieve that information from the system drag management process or from the application from which the drag operation originated. In some embodiments, if multiple drag operations are performed simultaneously, different drag operations have different identifiers, so that the drag operations may be tracked independently.
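A hypothetical sketch of the bookkeeping a session identifier might be associated with, per the description above; all field names and types are assumptions introduced for illustration.

```swift
import Foundation

// Hypothetical per-session record: a unique identifier plus the item count and
// content types that can be looked up through it.
struct DragSessionInfo {
    let sessionID: UUID                 // unique per drag session, so concurrent drags stay distinct
    var itemCount: Int                  // total number of user interface objects being dragged
    var contentTypes: Set<String>       // e.g., uniform type identifiers of the dragged content
}

// Two simultaneous drag operations (as in FIGS. 5O and 5P) simply get two different IDs.
let dragOne = DragSessionInfo(sessionID: UUID(), itemCount: 1, contentTypes: ["public.image"])
let dragTwo = DragSessionInfo(sessionID: UUID(), itemCount: 1, contentTypes: ["public.text"])
```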
In some embodiments of method 700, the respective type of user interface operation has an associated operation phase value that describes the current phase of the user interface operation, and the phase value is selected from the group consisting of: enter, within, and exit (786). The phase values of touches in drag events are discussed above with reference to FIG. 6A.
It should be understood that the particular order of the operations that have been described in fig. 7A-7F is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., method 800 and method 900) are equally applicable in a similar manner to method 700 described above with reference to fig. 7A-7F. For example, the interaction identifiers described below with reference to method 800 are associated with different views to determine which supplemental gesture recognizers to add to the respective views and the use of gesture recognition failure dependencies described below with reference to method 900 may also be applicable in a manner similar to method 700. For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus such as a general-purpose processor (e.g., as described above with reference to fig. 1 and 3A) or a dedicated chip.
The operations described above with reference to fig. 7A-7F are optionally implemented by the components depicted in fig. 1 or 3A. For example, initiating a user interface operation such as a drag operation or a spring-loaded operation, as well as many other operations of method 700, are optionally implemented by contact/motion module 130 or event identifier module 312 thereof.
Adding supplemental gesture recognizers
Fig. 8A-8E are flowcharts illustrating a method 800 of processing touches detected by an electronic device, including adding a supplemental gesture recognizer to a view of an application in order to provide specialized processing for operations such as drag operations. The method 800 is performed at an electronic device (e.g., the device 300 of fig. 3 or the portable multifunction device 100 of fig. 1) having a display, a touch-sensitive surface, and optionally one or more sensors to detect an intensity of contact with the touch-sensitive surface. In some implementations, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some implementations, the display is separate from the touch-sensitive surface. Some operations in method 800 are optionally combined and/or the order of some operations is optionally changed.
As described below, the method 800 provides an intuitive way of processing touches detected by a device, which may or may not be a drag gesture. The method reduces the computational load imposed on the electronic device by touch gestures and other user inputs, improving device responsiveness and thereby reducing latency. Reducing the delay in responding to touch inputs also reduces erroneous inputs by the user, such as repeated or modified touch inputs made by the user while waiting for the device to respond to an earlier input. For example, when a user wants to drag one or more objects from a first application view or area to another application view or area, these improvements reduce the number, extent, and/or nature of the inputs from the user, thereby creating a more efficient human-machine interface. For battery-powered electronic devices, enabling a user to perform drag-and-drop operations faster and more efficiently conserves power and increases the time between battery charges.
The device displays, on its display, a user interface comprising a plurality of views, including a first view associated with a first set of one or more standard gesture recognizers for processing touch inputs directed to the first view, and a second view associated with a second set of one or more standard gesture recognizers for processing touch inputs directed to the second view. One or more views of the plurality of views (e.g., the first view and/or the second view) are associated with a first interaction identifier (e.g., a drag, drop, or spring-loaded interaction identifier) indicating that a first set of one or more supplemental gesture recognizers can be added to the corresponding view (e.g., by an application-independent module) (802).
As shown in fig. 5A-5R, the multiple views may include a user interface area 502 for application a and a user interface area 504 for application B. Many of fig. 5A-5R indicate examples of sets of interaction identifiers associated with a view of application a (e.g., interface region 502) or a view of application B (e.g., interface region 504). While these examples illustrate two views being associated with drag, drop, and spring-loaded interaction identifiers, in some embodiments or cases, some views may be associated with fewer interaction identifiers (e.g., only drag-drop interaction identifiers or none of these interaction identifiers) than the examples shown in fig. 5A-5R, and further, some views may be associated with additional interaction identifiers (e.g., interaction identifiers for additional types of operations associated with a particular multi-touch gesture).
FIG. 3B illustrates that view metadata 341 for a corresponding application view 317-2 includes interaction identifiers 347. In addition, the view metadata 341 of each view of the plurality of views 317 has its own set of interaction identifiers. In some embodiments, the interaction identifier associated with a particular view is a tag value that indicates which interaction identifiers of the predefined set of interaction identifiers are associated with (e.g., assigned to) the particular view. In some other embodiments, the interaction identifier associated with a particular view is an identifier value (e.g., an alphanumeric or multi-bit value), and the set of interaction identifiers stored for that particular view indicates which interaction identifiers are associated with (e.g., assigned to) that particular view. When an application has multiple views, each view may typically be associated with a corresponding set of interaction identifiers, and some views may have a different set of associated interaction identifiers than others of the views.
In some embodiments, possible interactions (e.g., types of interactions) include drag interactions (e.g., an object from a view may be picked up and dragged over the user interface), drop interactions (e.g., an object dragged across the user interface may be dropped onto a view), and spring-loaded interactions (e.g., hovering a dragged object over an icon of a view may cause the view to open). For example, if an application icon view on a home screen supports spring-loaded interactions, hovering a dragged object over the application icon may cause the application itself to open; in another example, hovering over a folder icon may cause the folder to open; and in yet another example, hovering over an object may cause a control of, or associated with, the object to be activated or deactivated (e.g., by toggling the state of the control) when predefined spring-loading criteria have been met.
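For context, UIKit's public drag-and-drop API exposes the same three kinds of interaction as separate interaction objects. The sketch below uses that public API purely as an analogy to the interaction types named above; it is not the interaction-identifier mechanism described in this document.

```swift
import UIKit

// Analogy only: configuring drag, drop, and spring-loaded behavior on a view using
// UIKit's public interaction objects.
func configureInteractions(for view: UIView,
                           dragDelegate: UIDragInteractionDelegate,
                           dropDelegate: UIDropInteractionDelegate) {
    view.addInteraction(UIDragInteraction(delegate: dragDelegate))   // objects can be picked up from this view
    view.addInteraction(UIDropInteraction(delegate: dropDelegate))   // dragged objects can be dropped onto this view
    // Spring-loading: hovering a drag over the view can activate it (e.g., open a folder).
    view.addInteraction(UISpringLoadedInteraction { _, _ in
        // Activation handler; e.g., open the folder or navigate into the view.
    })
}
```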
Continuing with the description of the method 800, the method 800 further includes detecting, via the touch-sensitive surface, a touch input at a location on the touch-sensitive surface corresponding to a respective view of the plurality of views (804); and, in response to detecting the touch input via the touch-sensitive surface (806), and while continuing to detect the touch input, in accordance with a determination that the touch input meets supplemental gesture recognizer addition criteria, where the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier (808): adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view (810); and processing the touch input using the one or more supplemental gesture recognizers and the one or more standard gesture recognizers associated with the respective view (812). In some implementations, the one or more supplemental gesture recognizers are attached to, or associated with, the respective view by the system based on a determination of whether an object in the view supports a particular interaction (as indicated by a drag, drop, or spring-loaded interaction identifier for the respective view).
The method 800 further includes, in response to detecting the touch input (806), in accordance with a determination that the touch input does not meet the supplemental gesture recognizer addition criteria, processing the touch input at the respective view (e.g., with one or more standard gesture recognizers associated with the respective view or with one or more supplemental gesture recognizers associated with other interaction identifiers) without adding a first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view (814).
In some implementations, the method 800 includes detecting a sequence of touch inputs including a first touch input that meets the supplemental gesture recognizer addition criteria and a second touch input, received after the first touch input, that does not meet the supplemental gesture recognizer addition criteria. The first touch input is processed according to the aforementioned adding (810) and processing (812) operations (i.e., adding the first set of one or more supplemental gesture recognizers and processing with them), while the second touch input is processed according to the aforementioned processing (814) (without adding the first set of one or more supplemental gesture recognizers). In some implementations, the view receives a touch event and determines whether the view supports an interaction (e.g., a drag interaction, a drop interaction, or a spring-loaded interaction) associated with the touch event. In some embodiments, the respective interaction identifier of the view specifies whether the view supports the respective interaction. For example, the view receives a touch event. If the view has a drag interaction identifier, indicating that the view allows the addition of a drag gesture recognizer (e.g., by a module separate from the application, such as a system drag management process) and that the view supports drag interaction, the drag gesture recognizer is instantiated and added to the view (e.g., in conjunction with the view receiving the touch event). In another example, the view receives a drag event. If the view has a drop interaction identifier, a drop gesture recognizer is instantiated and added to the view. In some implementations, if the view has a spring-loaded interaction identifier, a spring-loaded gesture recognizer is instantiated and added to the view. In some implementations, a single interaction identifier can be used to specify multiple interactions supported by the view (e.g., a single interaction identifier can specify that the view supports drop interactions and spring-loaded interactions).
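A hypothetical sketch of this decision: the interaction identifiers associated with a view determine which supplemental gesture recognizers are added when a qualifying touch reaches it. The OptionSet and the placeholder recognizer classes are illustrative assumptions, not types defined by this document.

```swift
import UIKit

// Hypothetical interaction identifiers attached to a view.
struct InteractionID: OptionSet {
    let rawValue: Int
    static let drag         = InteractionID(rawValue: 1 << 0)
    static let drop         = InteractionID(rawValue: 1 << 1)
    static let springLoaded = InteractionID(rawValue: 1 << 2)
}

// Placeholder supplemental recognizers.
final class DragStartRecognizer: UIGestureRecognizer {}
final class DropRecognizer: UIGestureRecognizer {}
final class SpringLoadedRecognizer: UIGestureRecognizer {}

func addSupplementalRecognizers(to view: UIView, interactions: InteractionID) {
    if interactions.contains(.drag) {
        view.addGestureRecognizer(DragStartRecognizer(target: nil, action: nil))
    }
    if interactions.contains(.drop) {
        view.addGestureRecognizer(DropRecognizer(target: nil, action: nil))
    }
    if interactions.contains(.springLoaded) {
        view.addGestureRecognizer(SpringLoadedRecognizer(target: nil, action: nil))
    }
    // With no matching interaction identifier, the touch is simply processed by the
    // view's standard gesture recognizers (operation 814).
}
```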
In some embodiments of method 800, the supplemental gesture recognizer addition criteria include criteria that are met when the touch input is a predetermined type of touch input (e.g., a touch that begins a drag interaction, or a separated touch input for a spring-loaded interaction or a drop interaction) (820).
In some embodiments of method 800, the predetermined type of touch input is a separated touch entering the respective view (822). For example, FIGS. 5D and 5E illustrate a touch input (touch 1) entering the view of application B (interface region 504); FIG. 6A illustrates the corresponding drag event, in which the touch (or touch input) has a phase value of "enter"; and FIG. 6C shows that a touch input entering the view of application B causes the system touch management process to split the touch, so that when the touch input (touch 1) enters the view of application B, the view receives a split touch.
In some implementations, the method 800 further includes detecting movement of the touch input outside of the respective view (824), and in response to detecting movement of the touch input outside of the respective view, and in accordance with a determination that the touch input is a separate touch exiting the respective view (826): removing the first set of one or more supplemental gesture recognizers associated with the first interaction identifier from the respective view (828). For example, removing the one or more supplemental gesture recognizers is performed when the touch input meets supplemental gesture recognizer removal criteria, where the supplemental gesture recognizer removal criteria include a criterion that is met when the touch input is a predetermined type of touch input exiting the respective view, such as a split touch. Removing the one or more supplemental gesture recognizers when the removal criteria are met avoids processing touch inputs with gesture recognizers that are no longer needed, which makes the electronic device more efficient and reduces latency in responding to touch inputs. Further, reducing the delay in responding to touch inputs reduces erroneous user inputs, such as repeated or modified touch inputs made while the user waits for the device to respond to an earlier input.
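A minimal sketch of the removal step follows, assuming the supplemental recognizers were previously attached by a system-level helper and are tracked per view; the helper function and its bookkeeping are illustrative assumptions, not the patent's implementation.

```swift
import UIKit

// Sketch: when a detached/split touch exits the view, remove the supplemental
// gesture recognizers that were added for the drag interaction identifier, so
// later touches are not processed by recognizers that are no longer needed.
func removeSupplementalRecognizers(_ supplementalRecognizers: [UIGestureRecognizer],
                                   from view: UIView) {
    for recognizer in supplementalRecognizers {
        view.removeGestureRecognizer(recognizer)
    }
}
```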
In some implementations of method 800, or in some cases, the touch input corresponds to a drag interaction (830). For example, a drag interaction is an interaction in which an object from a view is picked up and dragged over a user interface of one or more applications.
In some embodiments of the method 800, the first interaction identifier is a drag interaction identifier (832). Further, the first set of one or more supplemental gesture recognizers includes a drag start gesture recognizer that includes, for recognition of a drag start gesture: a requirement that the location of the touch input correspond to the respective view for at least a predefined period of time; and a requirement that the change in the position of the touch input satisfy a lateral displacement threshold (e.g., while the touch input continues to be detected) after the position of the touch input has corresponded to the respective view for at least the predefined period of time (834).
In some such embodiments (as described above with respect to 832 and 834), the method 800 further includes, in response to the recognition of the drag start gesture, the device (e.g., a system drag management process or application) performing a drag start action in the user interface indicating that the drag operation has started (836). For example, execution of the drag start action (836) may include moving the user interface object in accordance with movement of the contact, and/or animating a change in appearance of the user interface object, e.g., giving the appearance that the user interface object has lifted in the z-direction from the displayed user interface.
In some implementations, the drag start gesture recognizer includes one or more of the following requirements so that the touch input meets predefined drag start criteria:
1) A timing criterion that is met when the touch input remains on the touch-sensitive surface for at least a predefined period of time;
2) An intensity requirement (sometimes referred to as a long press criterion) that is met when the detected intensity of the user input remains below an intensity threshold during the predefined period of time;
3) A lateral displacement criterion that is met when the detected lateral displacement of the user input meets (e.g., meets or exceeds) a lateral displacement threshold;
4) Requiring that the timing criterion be met before the lateral displacement criterion (e.g., for an input that meets the drag criterion, the input must first meet the timing criterion and then meet the displacement criterion); and/or
5) Requiring the touch input to include a single contact, or requiring the touch input to include two concurrent contacts (e.g., two different contacts detected simultaneously) on the touch-sensitive surface.
Those of ordinary skill in the art will readily appreciate that the drag gesture recognizer may include other combinations of the foregoing requirements (e.g., a single contact with timing requirements and lateral displacement requirements, or two contacts with lateral displacement requirements but without timing requirements, or various other such combinations).
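A minimal sketch of a recognizer combining the timing and lateral-displacement requirements described above follows, written as a custom UIGestureRecognizer subclass; the class name, threshold values, and single-contact assumption are illustrative, not the drag start recognizer actually used by the system.

```swift
import UIKit
import UIKit.UIGestureRecognizerSubclass

// Sketch: a drag-start style recognizer that requires the touch to stay put for a
// minimum duration (timing criterion) before lateral movement past a displacement
// threshold is treated as the start of a drag.
final class DragStartGestureRecognizer: UIGestureRecognizer {
    private let minimumPressDuration: TimeInterval = 0.5    // assumed timing criterion
    private let lateralDisplacementThreshold: CGFloat = 10  // assumed displacement criterion, in points
    private var startPoint: CGPoint = .zero
    private var timingCriterionMet = false
    private var timer: Timer?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        guard touches.count == 1, let touch = touches.first else {
            state = .failed
            return
        }
        startPoint = touch.location(in: view)
        // The timing criterion must be met before the displacement criterion counts.
        timer = Timer.scheduledTimer(withTimeInterval: minimumPressDuration, repeats: false) { [weak self] _ in
            self?.timingCriterionMet = true
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: view)
        if state == .possible {
            let displacement = hypot(location.x - startPoint.x, location.y - startPoint.y)
            if displacement >= lateralDisplacementThreshold {
                // Movement before the timing criterion is met fails the recognizer;
                // movement after it is met recognizes the drag start.
                state = timingCriterionMet ? .began : .failed
            }
        } else if state == .began || state == .changed {
            state = .changed
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        state = (state == .possible) ? .failed : .ended
    }

    override func reset() {
        timer?.invalidate()
        timer = nil
        timingCriterionMet = false
        super.reset()
    }
}
```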
In some embodiments of the method 800, the first interaction identifier is a drag interaction identifier (832). Further, the first set of one or more supplemental gesture recognizers includes a drag add gesture recognizer that includes requiring the touch input to include one or more contacts (e.g., a single contact or two different contacts detected simultaneously) whose locations correspond to respective views in order to recognize a drag add gesture (838); and in response to the recognition of the drag-add gesture, the device performs a drag-add start action in the user interface, the drag-add start action indicating that the drag-add operation has started (840).
In some implementations, the drag-add gesture recognizer includes a requirement that the touch input include two concurrent contacts and a requirement that the two concurrent contacts cease to be detected within a predefined period of time (e.g., a two-finger tap gesture). In some implementations, the drag-add gesture recognizer includes a requirement that the location of the touch input correspond to the respective view for at least a predefined period of time. In some other embodiments, the drag-add gesture recognizer includes requirements that the touch input include one or two concurrent contacts that cease to be detected within a predefined period of time (e.g., a one- or two-finger tap gesture), and that the one or two concurrent contacts occur within a predefined distance of a previous touch input that has been recognized as a drag gesture, or within a predefined distance of an object being dragged, in a drag operation, by the previous touch input. In some implementations, the drag-add start action includes animating the user interface object (e.g., the object corresponding to the touch input) as if the user interface object had been lifted in the z-direction from the displayed user interface.
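A minimal sketch of a drag-add style recognizer follows, using a two-finger tap as a stand-in inside a UIKit view controller; the proximity threshold and the currentDragLocation property are illustrative assumptions supplied by the surrounding drag-session code, not part of the method described here.

```swift
import UIKit

// Sketch: a two-finger tap near an ongoing drag adds the tapped element's content
// to that drag session, mirroring the proximity and lift-off requirements above.
final class DragAddViewController: UIViewController {
    var currentDragLocation: CGPoint?             // hypothetical: updated while a drag is in progress
    private let proximityThreshold: CGFloat = 100 // assumed value, in points

    override func viewDidLoad() {
        super.viewDidLoad()
        let dragAdd = UITapGestureRecognizer(target: self, action: #selector(handleDragAdd(_:)))
        dragAdd.numberOfTouchesRequired = 2       // two concurrent contacts
        dragAdd.numberOfTapsRequired = 1          // contacts must lift off within the tap time limit
        view.addGestureRecognizer(dragAdd)
    }

    @objc private func handleDragAdd(_ recognizer: UITapGestureRecognizer) {
        guard let dragLocation = currentDragLocation else { return }
        let tapLocation = recognizer.location(in: view)
        // Only add the item if the tap lands near the ongoing drag.
        if hypot(tapLocation.x - dragLocation.x, tapLocation.y - dragLocation.y) < proximityThreshold {
            // Add the tapped user interface element's content to the drag session here.
            print("Adding element to the ongoing drag session")
        }
    }
}
```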
In some implementations of method 800, the touch input corresponds to a drop interaction (850). In some implementations, the drop interaction occurs when an object that has been dragged on the user interface of one or more applications using the touch input recognized as the drag gesture is dropped onto the view. Examples of drop interactions are discussed above with reference to fig. 5Q and 5R.
In some implementations of the method 800 in which the touch input corresponds to a drop interaction (850), the first interaction identifier is a drop interaction identifier (852), and the first set of one or more supplemental gesture recognizers includes a drop gesture recognizer that includes a requirement that the location of the touch input correspond to the respective view when the touch input ceases to be detected (854). In such embodiments, as described with respect to 852 and 854, the method 800 includes, in response to the recognition of the drop gesture, the device performing a drop action in the user interface indicating that a drop operation has been initiated (856). Optionally, the drop action includes animating the user interface object (e.g., the object dragged by the touch input) as if the user interface object had been dropped in the z-direction, from above the displayed user interface, into the displayed user interface (858).
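A minimal sketch follows using UIKit's public drop API as a rough analogue of the drop gesture recognizer described above; the patent concerns the system's internal recognizer, so this only illustrates the observable behavior (the drop is performed when the touch ends over the view), and the NSString payload is an arbitrary assumption.

```swift
import UIKit

// Sketch: a view that accepts drops. performDrop is invoked when the touch ceases
// to be detected while its location corresponds to this view.
final class DropTargetView: UIView, UIDropInteractionDelegate {
    override init(frame: CGRect) {
        super.init(frame: frame)
        addInteraction(UIDropInteraction(delegate: self))
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        addInteraction(UIDropInteraction(delegate: self))
    }

    func dropInteraction(_ interaction: UIDropInteraction, canHandle session: UIDropSession) -> Bool {
        return session.canLoadObjects(ofClass: NSString.self)
    }

    func dropInteraction(_ interaction: UIDropInteraction, sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    func dropInteraction(_ interaction: UIDropInteraction, performDrop session: UIDropSession) {
        _ = session.loadObjects(ofClass: NSString.self) { items in
            // Insert the dropped content into this view's content model here.
            print("Dropped \(items.count) item(s)")
        }
    }
}
```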
In some implementations of method 800, the touch input corresponds to a spring-loaded interaction (860). In some embodiments, a spring-loaded interaction occurs when, after an object has been dragged over the user interface, the dragged object hovers over an icon of a view, which causes the view to open. In another example of a spring-loaded interaction, if the application icon view supports spring-loaded interaction, hovering the dragged object over an application icon on the home screen causes the application itself to open. In another example of a spring-loaded interaction, hovering the dragged object over a folder icon causes the folder to open, and a new view of the opened folder is displayed. And in yet another example, hovering the dragged object over an object causes a control of the object, or a control associated with the object, to be activated or deactivated (e.g., by toggling the state of the control when predefined spring-loading criteria have been met).
In some implementations of the method 800 in which the touch input corresponds to a spring-loaded interaction (860), the first interaction identifier is a spring-loaded interaction identifier (862), and the first set of one or more supplemental gesture recognizers includes a spring-loaded gesture recognizer that includes, in order to recognize a spring-loaded gesture, a requirement that the location of the touch input correspond to the respective view for at least a predefined period of time (864). In such implementations, the method 800 also includes, in response to the recognition of the spring-loaded gesture (866), the application performing a spring-loaded action that includes one or more of: activating a control (e.g., the control corresponding to the view) as if the control had been selected (e.g., tapped or clicked), and opening a container (e.g., a folder or album) in which content can be stored (868). For example, FIGS. 5Q and 5R illustrate an example of a spring-loaded action (sometimes referred to as a spring-loaded operation) in which a first object (object 4) is moved over a container object, also referred to as a folder (object 5), by a touch input (touch 1) dragging the first object. Once the spring-loaded gesture is recognized, the container object (object 5) is opened and its contents are displayed (FIG. 5R).
In some implementations of the method 800, the spring-loaded gesture recognizer further includes requiring a speed of touch input on the touch-sensitive surface to be below a predefined speed threshold (870). For example, in the example shown in fig. 5Q, the touch input (touch 1) dragging the object (object 4) has a speed equal to or near zero, hovering substantially over the object 5 while the user waits for the object 5 to open in response to a spring-loaded gesture.
In some implementations of the method 800 in which the touch input corresponds to a spring-loaded interaction (e.g., in which operations 860-870 are applicable), the method 800 includes changing the appearance of the respective view in accordance with a determination that the speed of the touch input on the touch-sensitive surface satisfies a predefined speed threshold (872). For example, changing the appearance of the respective view includes displaying an indication that the respective view supports spring-loaded interaction (e.g., an animation of the object over which the touch input hovers, or a change in color or thickness of an outer boundary or perimeter of the object, or a change in background color, blur, or another display characteristic of the object) prior to recognizing the spring-loaded gesture or prior to performing the spring-loaded operation. In some embodiments, as a result of the change in appearance, the respective view is displayed with a first transformation of its immediately preceding visual appearance.
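A minimal sketch follows using UIKit's public UISpringLoadedInteraction as an analogue of the spring-loaded gesture recognizer described above; the cell class and the "open folder" callback are illustrative assumptions, and the hover-duration and speed criteria are handled internally by the interaction rather than spelled out here.

```swift
import UIKit

// Sketch: a collection view cell that opens its folder when a dragged item is held
// over it long enough (and slowly enough) to activate spring loading.
final class FolderCell: UICollectionViewCell {
    var openFolder: (() -> Void)?   // hypothetical callback supplied by the owning view controller

    override init(frame: CGRect) {
        super.init(frame: frame)
        let springLoaded = UISpringLoadedInteraction { [weak self] _, context in
            // Called when spring loading activates while an item is dragged over the cell.
            if context.state == .activated {
                self?.openFolder?()
            }
        }
        addInteraction(springLoaded)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```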
In some embodiments of method 800, the set of one or more standard gesture recognizers associated with a respective view is associated with the respective view by an application that includes the respective view, and the first set of one or more supplemental gesture recognizers is added to the respective view by a system process (e.g., a system drag management process) (872). For example, the application specifies the standard gesture recognizers associated with the respective view, and the system process adds supplemental gesture recognizers to the respective view based on the interaction identifiers associated with the respective view and satisfaction of respective trigger conditions associated with those interaction identifiers, or alternatively satisfaction of trigger conditions associated with a set of supplemental gesture recognizers corresponding to each of the interaction identifiers for the respective view.
In some implementations of the method 800, the method 800 includes detecting, via the touch-sensitive surface, a second touch input at a second location on the touch-sensitive surface corresponding to the respective view (884); and in response to detecting the second touch input via the touch-sensitive surface (886): in accordance with a determination that the second touch input meets the supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier (888): adding instances of the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view (890); and processing the second touch input with the instances of the one or more supplemental gesture recognizers and the one or more standard gesture recognizers associated with the respective view (892). For example, the same set of operations performed on the first touch input when it meets the supplemental gesture recognizer addition criteria is performed with respect to the second touch input when it meets the supplemental gesture recognizer addition criteria. As described above, in some embodiments, a separate session identifier is associated with or assigned to each of the two touch inputs, e.g., as discussed above with reference to FIGS. 5O and 5P.
It should be understood that the particular order of the operations that have been described in FIGS. 8A-8E is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that the details of other processes described herein with reference to other methods described herein (e.g., methods 700 and 900) are likewise applicable in a similar manner to method 800 described above with reference to FIGS. 8A-8E. For example, the handling of touch inputs that meet predefined criteria as split touches and the use of automatically added supplemental gesture recognizers to recognize drag, drop, and spring-loaded gestures, described above with reference to method 700, and the gesture recognizer failure dependencies described below with reference to method 900, are also applicable in a manner similar to method 800. For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus such as a general-purpose processor (e.g., as described above with reference to fig. 1 and 3A) or a dedicated chip.
The operations described above with reference to fig. 8A-8E are optionally implemented by the components depicted in fig. 1 or 3A. For example, initiating a user interface operation such as a drag operation or a spring-loaded operation, as well as many other operations of method 800, are optionally implemented by contact/motion module 130 or event identifier module 312 thereof.
Automatically establishing failure dependencies between gesture recognizers
FIGS. 9A-9D are flowcharts illustrating a method 900 of processing touches detected by an electronic device, including establishing failure requirements between one or more supplemental gesture recognizers that provide specialized processing for operations such as drag operations and one or more standard gesture recognizers established by an application for processing touch inputs in one or more views of the application. The method 900 is performed at an electronic device (e.g., the device 300 of FIG. 3 or the portable multifunction device 100 of FIG. 1) having a display, a touch-sensitive surface, and optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface. In some implementations, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some implementations, the display is separate from the touch-sensitive surface. Some operations in method 900 are optionally combined and/or the order of some operations is optionally changed.
As described below, the method 900 provides an intuitive way of handling touches detected by a device, which may or may not be drag gestures. The method reduces the computational load imposed on the electronic device by touch gestures and other user inputs, improving device responsiveness and thus reducing user delays and erroneous inputs. For example, when a user wants to drag one or more objects from a first application view or region to another application view or region, these improvements reduce the number, extent, and/or nature of the inputs from the user, thereby creating a more efficient human-machine interface. For battery-operated electronic devices, enabling a user to perform drag-and-drop operations faster and more efficiently conserves power and increases the time between battery charges.
The device displays a user interface of an application on its display (902), and, while the user interface of the application is displayed, detects a user input (e.g., a touch input) corresponding to a portion of the user interface of the application that is associated with a plurality of gesture recognizers, wherein: a first set of one or more standard gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by the application; a second set of one or more supplemental gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by a system process (e.g., an application-independent process); and a first failure requirement involving a first standard gesture recognizer (e.g., in the first set of one or more standard gesture recognizers) and a first supplemental gesture recognizer (e.g., in the second set of one or more supplemental gesture recognizers) is associated with (e.g., assigned to) the portion of the user interface by a system process (e.g., an application-independent process) (904). The method 900 further includes, in response to detecting the user input, processing the user input in accordance with the first standard gesture recognizer, the first supplemental gesture recognizer, and the first failure requirement (906). As discussed above with reference to FIGS. 6D and 6E, the failure requirements between gesture recognizers may be represented by a gesture recognition hierarchy, such as hierarchy 600-A or 600-B.
In some embodiments of the method 900, a first set of one or more (e.g., application-specific) standard gesture recognizers and a second set of one or more (e.g., system-process-specific) gesture recognizers are arranged in a gesture recognition hierarchy, and the first failure requirement includes requiring the first respective gesture recognizer to fail in order for the second respective gesture recognizer to successfully recognize the gesture, wherein the first respective gesture recognizer is at a higher level in the gesture recognition hierarchy than the second respective gesture recognizer (908). For example, referring to the example shown in fig. 6E, a first respective gesture recognizer (e.g., intensity-based gesture recognizer) is a parent level of a second respective gesture recognizer (e.g., long press gesture recognizer) in gesture recognition hierarchy 600-B, or in other words, a second respective gesture recognizer (e.g., long press gesture recognizer) is a child level of a first respective gesture recognizer (intensity-based gesture recognizer) in gesture recognition hierarchy 600-B.
In some implementations, one or more gesture recognizers (e.g., system-process-specified gesture recognizers) can be inserted into a gesture recognition hierarchy of an existing set of gesture recognizers (e.g., application-specified gesture recognizers). For example, the transition from gesture recognition hierarchy 600-A of FIG. 6D to gesture recognition hierarchy 600-B of FIG. 6E is caused by the insertion of a drag start gesture recognizer into gesture recognition hierarchy 600-A of FIG. 6D. In some embodiments, a gesture recognizer may be added to the gesture recognition hierarchy at a higher level than an existing gesture recognizer in the hierarchy (e.g., the drag start gesture recognizer is inserted at a higher level than the long press gesture recognizer), such that handling the corresponding user input using the existing gesture recognizer requires that the added gesture recognizer fail to recognize the corresponding user input (e.g., in the scenario shown in FIG. 6E, the drag start gesture recognizer must fail in order for the long press gesture recognizer to recognize the corresponding user input).
In some implementations, a gesture recognizer may be added to (e.g., inserted into) the gesture recognition hierarchy at a level between two existing gesture recognizers. For example, with respect to a gesture recognition hierarchy that does not include a drag start gesture recognizer (e.g., before the drag start gesture recognizer is added to the gesture recognition hierarchy), processing a long press gesture requires determining that the intensity-based gesture recognizer is unable to recognize the gesture. In some implementations, as shown in FIG. 6E, the drag start gesture recognizer is added to the hierarchy between the intensity-based gesture recognizer and the long press gesture recognizer (e.g., the long press gesture recognizer is lower than the drag start gesture recognizer, which in turn is lower than the intensity-based gesture recognizer). In some such implementations (e.g., after the drag start gesture recognizer is added to the gesture recognition hierarchy), processing the long press gesture requires determining that the intensity-based gesture recognizer, and then the drag start gesture recognizer, fail to recognize the user input (e.g., the user input corresponds to neither an intensity-based gesture nor a drag start gesture). In some implementations, a gesture recognizer may be added to the gesture recognition hierarchy at a lower level than an existing gesture recognizer in the hierarchy, such that processing the corresponding user input using the added gesture recognizer requires that the existing gesture recognizer not recognize the corresponding user input (e.g., processing a drag start gesture requires that the intensity-based gesture recognizer fail to recognize the user input).
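A minimal sketch of the failure dependencies described above follows, expressed with UIKit's public require(toFail:) API; the intensity-based and drag-start recognizers are passed in as plain UIGestureRecognizer stand-ins because their concrete classes are internal to the system in the patent's description.

```swift
import UIKit

// Sketch: build the FIG. 6E-style hierarchy — the application's long press recognizer
// waits for the system-inserted drag start recognizer to fail, which in turn waits
// for the intensity-based recognizer to fail.
func installFailureRequirements(on view: UIView,
                                intensityRecognizer: UIGestureRecognizer,
                                dragStartRecognizer: UIGestureRecognizer,
                                longPressRecognizer: UILongPressGestureRecognizer) {
    view.addGestureRecognizer(intensityRecognizer)  // highest level of the hierarchy
    view.addGestureRecognizer(dragStartRecognizer)  // inserted by the system drag management process
    view.addGestureRecognizer(longPressRecognizer)  // application-supplied standard recognizer

    // Lower levels of the hierarchy can only recognize once higher levels have failed.
    dragStartRecognizer.require(toFail: intensityRecognizer)
    longPressRecognizer.require(toFail: dragStartRecognizer)
}
```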
In some implementations of method 900, the first failure requirement includes requiring the first supplemental gesture recognizer to fail to recognize the user input (e.g., touch input) in order for the first standard gesture recognizer to successfully recognize the gesture (910). For example, in the example shown in FIG. 6E, the drag start gesture recognizer must fail in order for the long press gesture recognizer to succeed. In another example, not shown in FIG. 6E, the drag add gesture recognizer must fail in order for the tap gesture recognizer to succeed.
In some implementations of the method 900, the first supplemental gesture recognizer is a drag add gesture recognizer and the first standard gesture recognizer is a tap gesture recognizer. See, for example, FIG. 6E.
In some embodiments of method 900, to recognize a drag add gesture, the drag add gesture recognizer includes requiring that the user input include a contact at a location corresponding to a respective user interface element in the user interface, and lift-off of the contact within a predefined period of time (e.g., T4 in FIG. 5L) (914). For example, in such embodiments, the user input recognized as a drag add gesture includes the same elements as a tap gesture. In some implementations, the drag add gesture recognizer further includes requiring that the user input include a second contact (e.g., a contact corresponding to a drag operation). In some implementations, the drag add gesture recognizer further includes requiring that the distance between the contact and the second contact be less than a predefined proximity threshold. For example, a "tap" (e.g., a contact followed by lift-off) is used to add the corresponding user interface element to an ongoing drag operation (e.g., the second contact). Optionally, the corresponding user interface element is added only to a "nearby" ongoing drag operation (e.g., one within the proximity threshold). For example, the sequence of FIGS. 5I and 5L shows a drag add gesture performed by a second touch (touch 2) on object 2, which results in object 2 being added to the drag operation corresponding to the first touch (touch 1). In FIGS. 5I and 5L, time t2 corresponds to the time at which the second touch is first detected, and T4 corresponds to the aforementioned predefined period of time, which is (or corresponds to) the maximum duration of the drag add gesture.
In some embodiments of method 900, processing the user input according to the first standard gesture recognizer, the first supplemental gesture recognizer, and the first failure requirement includes (916): in accordance with a determination (e.g., by a system process) that content associated with (e.g., the content of, or represented by) the respective user interface element cannot be added to the drag operation, failing to recognize the user input using the drag add gesture recognizer, and processing the user input using the tap gesture recognizer (918); and in accordance with a determination that content associated with the respective user interface element can be added to the drag operation, recognizing a drag add gesture and adding the content to the drag operation (920). In some embodiments, to make this determination, the system process queries the application to determine whether the application supports adding content (e.g., content associated with the respective user interface element) to a drag operation (e.g., whether an application or application view including the respective user interface element supports drag interaction).
In some implementations of the method 900, the first supplemental gesture recognizer is a drag start gesture recognizer and the first standard gesture recognizer is a long press gesture recognizer (922). Such an implementation is represented by, or at least consistent with, the gesture recognizer hierarchy discussed above with reference to FIG. 6E.
In some implementations of the method 900, processing the user input according to the first standard gesture recognizer, the first supplemental gesture recognizer, and the first failure requirement is performed in accordance with a determination that the device meets compact display criteria. For example, if a device (e.g., a smart phone) has a display size below a predefined threshold, the drag start gesture recognizer must fail before the long press gesture recognizer can recognize the gesture.
In some embodiments of method 900, in accordance with a determination that the device (e.g., a tablet computer) does not meet the compact display criteria (926), the first failure requirement includes a requirement that recognition of the user input as a first gesture (e.g., a long press) by the first standard gesture recognizer be canceled (e.g., subsequently ignored) in accordance with the user input being successfully recognized as a second gesture (e.g., a drag start) by the first supplemental gesture recognizer (928). In such an embodiment, processing (see 906) the user input according to the first standard gesture recognizer, the first supplemental gesture recognizer, and the first failure requirement includes: recognizing the user input as the first gesture (e.g., a long press gesture) using the standard gesture recognizer (932), and, after the user input is recognized as the first gesture using the standard gesture recognizer (934): recognizing the user input as the second gesture (e.g., a drag start gesture) using the supplemental gesture recognizer (936), and ceasing to recognize the user input as the first gesture using the standard gesture recognizer (e.g., the first (long press) gesture is canceled) (938).
In some implementations, recognizing the user input as the first gesture (e.g., a long press gesture) using the standard gesture recognizer (932) includes, as shown in FIG. 5C, performing an operation in the user interface associated with the first gesture, such as displaying a menu of options in response to recognizing the long press gesture. In some implementations, ceasing to recognize the user input as the first gesture using the standard gesture recognizer (938) optionally, but generally, includes ceasing to perform the operation associated with the first gesture. For example, in the example shown in FIG. 5C, ceasing to perform the operation associated with the first gesture includes ceasing to display the menu of options that was displayed in response to recognizing the long press gesture (e.g., the transition from FIG. 5C to FIG. 5D). Further, in some embodiments, recognizing the user input as the second gesture (e.g., a drag start gesture) using the supplemental gesture recognizer (936) includes performing an operation in the user interface associated with the second gesture, such as displaying an animation of the user interface object being lifted from the user interface in the z-direction in response to recognizing the drag start gesture, to indicate the start of a drag session.
In some embodiments of the method 900, the first failure requirement includes requiring the first standard gesture recognizer to fail in order for the first supplemental gesture recognizer to successfully recognize the gesture (e.g., the intensity-based gesture recognizer must fail in order for the drag start gesture recognizer to succeed, as shown in gesture recognizer hierarchy 600-B of fig. 6E) (940).
In some embodiments of the method 900, such as those consistent with the requirement 940, the first standard gesture recognizer is an intensity-based gesture recognizer (e.g., a gesture recognizer requiring touch input to meet an intensity criterion, such as requiring the characteristic intensity of the contact to increase above an intensity threshold in order for the gesture recognizer to successfully recognize the gesture), and the first supplemental gesture recognizer is a drag start gesture recognizer (942). See the discussion above of FIG. 6E, which graphically depicts a corresponding gesture recognizer hierarchy 600-B.
In some embodiments of the method 900, the first supplemental gesture recognizer is a drag start gesture recognizer, and to recognize the drag start gesture, the drag start gesture recognizer includes: requiring the user input to include contact at a location corresponding to a respective user interface element in the user interface for at least a predefined period of time; and requiring that the change in the contact location satisfies a lateral displacement threshold (e.g., while continuing to detect contact) after the location of the contact corresponds to the respective user interface element for at least a predefined period of time (944). In such implementations, the touch input identified as the drag start gesture initially remains at one location, typically on or over the object (e.g., touch 1 on object 4, fig. 5D), for at least a predefined period of time before the movement and satisfaction of the lateral displacement threshold (e.g., by laterally moving at least a certain number of pixels).
In some embodiments of the method 900, the first supplemental gesture recognizer is a drag start gesture recognizer, and to recognize the drag start gesture, the drag start gesture recognizer includes: requiring the user to input two concurrent contacts (e.g., two different contacts detected simultaneously, as shown in fig. 5F) on the touch-sensitive surface at locations corresponding to respective user interface elements in the user interface (e.g., object 4 in user interface area 502, fig. 5F); and requiring the change in position of the two concurrent contacts to meet the lateral displacement threshold (946). In some embodiments, for user input that includes two concurrent contacts on a user interface element, the drag start gesture recognizer does not include a requirement that user input be detected for at least a predefined period of time. In such implementations, once the two-finger input is in contact with the touch-sensitive surface, use of the two-finger input enables the user to begin a lateral displacement phase of the drag gesture. In some implementations, the drag start gesture recognizer includes requiring two concurrent contacts to remain within a second predefined proximity threshold of each other (e.g., the two concurrent contacts move substantially together rather than away from each other).
In some embodiments of the method 900, the first supplemental gesture recognizer is a drag add gesture recognizer, and to recognize the drag add gesture, the drag add gesture recognizer includes requiring that the user input include two concurrent contacts at locations corresponding to a respective user interface element in the user interface, and lift-off of the two concurrent contacts within a predefined period of time (e.g., the user input is a two-finger tap gesture, rather than the single-finger tap gesture discussed above with reference to FIGS. 5I and 5L) (948). In some implementations, the drag add gesture recognizer further includes requiring that the user input include a third contact (e.g., corresponding to a drag operation begun before the drag add gesture). In some implementations, the drag add gesture recognizer further includes requiring that the distance from the third contact to the closer of the two concurrent contacts be less than a predefined proximity threshold. When these requirements are met, a two-finger drag add gesture (e.g., the two concurrent contacts) is used to add the corresponding user interface element to the ongoing drag operation (e.g., the third contact). Accordingly, the corresponding user interface element is added to a "nearby" ongoing drag operation (e.g., one within the proximity threshold).
It should be understood that the particular order of the operations that have been described in FIGS. 9A-9D is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that the details of other processes described herein in connection with other methods described herein (e.g., methods 700 and 800) are likewise applicable in a similar manner to method 900 described above in connection with FIGS. 9A-9D. For example, the interaction identifiers described above with reference to method 800, which are associated with different views to determine which supplemental gesture recognizers to add to the respective views, the use of automatically added supplemental gesture recognizers to recognize drag, drop, and spring-loaded gestures, and the processing of touch inputs meeting predefined criteria as separate touches, described above with reference to method 700, are also applicable in a manner similar to method 900. For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus such as a general-purpose processor (e.g., as described above with reference to fig. 1 and 3A) or a dedicated chip.
The operations described above with reference to FIGS. 9A-9D are optionally implemented by the components depicted in FIG. 1 or 3A. For example, initiating a user interface operation such as a drag operation or a spring-loaded operation, as well as many other operations of method 900, are optionally implemented by contact/motion module 130 or event identifier module 312 thereof.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention and various described embodiments with various modifications as are suited to the particular use contemplated.

Claims (23)

1. A method for touch input processing, comprising:
at an electronic device having a display and a touch-sensitive surface:
displaying a user interface on the display comprising a plurality of views, the plurality of views comprising:
a first view associated with a first set of one or more standard gesture recognizers for processing touch input directed to the first view; and
A second view associated with a second set of one or more standard gesture recognizers for processing touch input directed to the second view;
wherein one or more of the plurality of views are associated with a first interaction identifier indicating that a first set of one or more supplemental gesture recognizers may be added to the corresponding view;
detecting, via the touch-sensitive surface, a touch input on the touch-sensitive surface at a location corresponding to a respective view of the plurality of views; and
in response to detecting the touch input via the touch-sensitive surface:
in accordance with a determination that the touch input meets supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier:
adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view; and
processing the touch input with the one or more standard gesture recognizers and the one or more supplemental gesture recognizers associated with the respective view; and
In accordance with a determination that the touch input does not meet the supplemental gesture recognizer addition criteria, the touch input at the respective view is processed without adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view.
2. The method of claim 1, wherein the supplemental gesture recognizer addition criteria includes a criterion that is met when the touch input is a predetermined type of touch input.
3. The method of claim 2, wherein the predetermined type of touch input is a split touch into the respective view.
4. A method according to claim 3, comprising:
detecting movement of the touch input outside the respective view;
in response to detecting the movement of the touch input outside of the respective view:
in accordance with a determination that the touch input is a split touch exiting the respective view:
the first set of one or more supplemental gesture recognizers associated with the first interaction identifier are removed from the respective view.
5. The method of claim 1, wherein the touch input corresponds to a drag interaction.
6. The method according to claim 5, wherein:
the first interaction identifier is a drag interaction identifier;
the first set of one or more supplemental gesture recognizers includes a drag start gesture recognizer that includes, for recognition of a drag start gesture:
a requirement that the location of the touch input correspond to the respective view for at least a predefined period of time; and
a requirement that, after the position of the touch input has corresponded to the respective view for at least the predefined period of time, the change in position of the touch input meet a lateral displacement threshold; and
in response to recognition of the drag start gesture, the device performs a drag start action in the user interface indicating that a drag operation has started.
7. The method according to claim 5, wherein:
the first interaction identifier is a drag interaction identifier;
the first set of one or more supplemental gesture recognizers includes a drag add gesture recognizer that includes a requirement that the touch input include one or more contacts whose locations correspond to the respective views in order to recognize a drag add gesture; and
In response to the recognition of the drag-add gesture, the device performs a drag-add start action in the user interface that indicates that a drag-add operation has started.
8. The method of claim 1, wherein the touch input corresponds to a drop interaction.
9. The method according to claim 8, wherein:
the first interaction identifier is a drop interaction identifier;
the first set of one or more supplemental gesture recognizers includes a drop gesture recognizer that includes a requirement that a location of the touch input correspond to the respective view when the touch input ceases to be detected; and
in response to recognition of the drop gesture, the device performs a drop action in the user interface indicating that a drop operation has been initiated.
10. The method of claim 1, wherein the touch input corresponds to a spring-loaded interaction.
11. The method according to claim 10, wherein:
the first interaction identifier is a spring-loaded interaction identifier;
the first set of one or more supplemental gesture recognizers includes a spring-loaded gesture recognizer that includes a requirement that a location of the touch input corresponds to the respective view for at least a predefined period of time in order to recognize a spring-loaded gesture; and
In response to the recognition of the spring-loaded gesture, the application performs a spring-loaded action including one or more of: activating a control as if the control had been selected, and opening a container in which content can be stored.
12. The method of claim 11, wherein the spring-loaded gesture recognizer further comprises a requirement that a speed of the touch input on the touch-sensitive surface be below a predefined speed threshold.
13. The method of claim 12, comprising changing an appearance of the respective view in accordance with a determination that a speed of the touch input on the touch-sensitive surface meets the predefined speed threshold.
14. The method of claim 1, wherein the set of one or more standard gesture recognizers associated with the respective view are associated with the respective view by an application that includes the respective view, and wherein the first set of one or more supplemental gesture recognizers are added to the respective view by a system process.
15. The method according to claim 1, comprising:
detecting, via the touch-sensitive surface, a second touch input on the touch-sensitive surface at a second location corresponding to the respective view; and
In response to detecting the second touch input via the touch-sensitive surface:
in accordance with a determination that the second touch input meets the supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier:
adding an instance of the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view; and
the second touch input is processed with instances of the one or more standard gesture recognizers and the one or more supplemental gesture recognizers associated with the respective views.
16. An electronic device, comprising:
a display;
a touch sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying a user interface on the display comprising a plurality of views, the plurality of views comprising:
a first view associated with a first set of one or more standard gesture recognizers for processing touch input directed to the first view; and
A second view associated with a second set of one or more standard gesture recognizers for processing touch input directed to the second view;
wherein one or more of the plurality of views are associated with a first interaction identifier indicating that a first set of one or more supplemental gesture recognizers may be added to the corresponding view; detecting touch input via the touch-sensitive surface at a location on the touch-sensitive surface corresponding to a respective view of the plurality of views; and
in response to detecting the touch input via the touch-sensitive surface:
in accordance with a determination that the touch input meets supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier:
adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view; and
processing the touch input with the one or more standard gesture recognizers and the one or more supplemental gesture recognizers associated with the respective view; and
In accordance with a determination that the touch input does not meet the supplemental gesture recognizer addition criteria, the touch input at the respective view is processed without adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view.
17. The electronic device of claim 16, wherein the one or more programs include instructions for performing the method of any of claims 2-15.
18. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display and a touch-sensitive surface, cause the electronic device to:
displaying a user interface on the display comprising a plurality of views, the plurality of views comprising:
a first view associated with a first set of one or more standard gesture recognizers for processing touch input directed to the first view; and
a second view associated with a second set of one or more standard gesture recognizers for processing touch input directed to the second view;
Wherein one or more of the plurality of views are associated with a first interaction identifier indicating that a first set of one or more supplemental gesture recognizers may be added to the corresponding view; detecting touch input via the touch-sensitive surface at a location on the touch-sensitive surface corresponding to a respective view of the plurality of views; and
in response to detecting the touch input via the touch-sensitive surface:
in accordance with a determination that the touch input meets supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier:
adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view; and
processing the touch input with the one or more standard gesture recognizers and the one or more supplemental gesture recognizers associated with the respective view; and
In accordance with a determination that the touch input does not meet the supplemental gesture recognizer addition criteria, the touch input at the respective view is processed without adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view.
19. The computer readable storage medium of claim 18, wherein the one or more programs comprise instructions, which when executed by the electronic device, cause the electronic device to perform the method of any of claims 2-15.
20. An electronic device, comprising:
a display;
a touch sensitive surface;
means for displaying a user interface on the display comprising a plurality of views, the plurality of views comprising:
a first view associated with a first set of one or more standard gesture recognizers for processing touch input directed to the first view; and
a second view associated with a second set of one or more standard gesture recognizers for processing touch input directed to the second view;
wherein one or more of the plurality of views are associated with a first interaction identifier indicating that a first set of one or more supplemental gesture recognizers may be added to the corresponding view;
means for detecting touch input via the touch-sensitive surface at a location on the touch-sensitive surface corresponding to a respective view of the plurality of views; and
Means, responsive to detecting the touch input via the touch-sensitive surface, for performing operations comprising:
in accordance with a determination that the touch input meets supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier:
adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view; and
processing the touch input with the one or more standard gesture recognizers and the one or more supplemental gesture recognizers associated with the respective view; and
In accordance with a determination that the touch input does not meet the supplemental gesture recognizer addition criteria, the touch input at the respective view is processed without adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view.
21. The electronic device of claim 20, comprising means for performing the method of any of claims 2 to 15.
22. An information processing apparatus for use in an electronic device having a display and a touch-sensitive surface, comprising:
means for displaying a user interface on the display comprising a plurality of views, the plurality of views comprising:
a first view associated with a first set of one or more standard gesture recognizers for processing touch input directed to the first view; and
a second view associated with a second set of one or more standard gesture recognizers for processing touch input directed to the second view;
wherein one or more of the plurality of views are associated with a first interaction identifier indicating that a first set of one or more supplemental gesture recognizers may be added to the corresponding view;
means for detecting touch input via the touch-sensitive surface at a location on the touch-sensitive surface corresponding to a respective view of the plurality of views; and
means, operative responsive to detecting the touch input via the touch-sensitive surface, for performing operations comprising:
in accordance with a determination that the touch input meets supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier:
Adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view; and
processing the touch input with the one or more standard gesture recognizers and the one or more supplemental gesture recognizers associated with the respective view; and
In accordance with a determination that the touch input does not meet the supplemental gesture recognizer addition criteria, the touch input at the respective view is processed without adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view.
23. The information processing apparatus according to claim 22, comprising means for performing the method according to any one of claims 2 to 15.
CN201880031229.8A 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing Active CN110651242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911262546.3A CN111026306B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201762507096P 2017-05-16 2017-05-16
US62/507,096 2017-05-16
DKPA201770371 2017-05-24
DKPA201770371A DK179979B1 (en) 2017-05-16 2017-05-24 Devices, methods, and graphical user interfaces for touch input processing
US15/917,467 2018-03-09
US15/917,467 US10409477B2 (en) 2017-05-16 2018-03-09 Devices, methods, and graphical user interfaces for touch input processing
PCT/US2018/032917 WO2018213414A1 (en) 2017-05-16 2018-05-16 Devices, methods, and graphical user interfaces for touch input processing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201911262546.3A Division CN111026306B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Publications (2)

Publication Number Publication Date
CN110651242A CN110651242A (en) 2020-01-03
CN110651242B true CN110651242B (en) 2023-07-11

Family

ID=64274873

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201880031229.8A Active CN110651242B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing
CN201911262546.3A Active CN111026306B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201911262546.3A Active CN111026306B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Country Status (3)

Country Link
EP (1) EP3593235A1 (en)
CN (2) CN110651242B (en)
WO (1) WO2018213414A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10877660B2 (en) 2018-06-03 2020-12-29 Apple Inc. Devices and methods for processing inputs using gesture recognizers
DE102019003997A1 (en) * 2019-06-07 2020-12-10 Drägerwerk AG & Co. KGaA Input system and method for controlling an electromedical device
US11960668B1 (en) 2022-11-10 2024-04-16 Honeywell International Inc. Cursor management methods and systems for recovery from incomplete interactions
CN117608424B (en) * 2024-01-24 2024-04-12 江苏锦花电子股份有限公司 Touch knob screen management and control system and method based on Internet of things

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017027623A1 (en) * 2015-08-10 2017-02-16 Apple Inc. Devices, methods, and graphical user interfaces for adjusting user interface objects

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5564112A (en) * 1993-10-14 1996-10-08 Xerox Corporation System and method for generating place holders to temporarily suspend execution of a selected command
KR101690164B1 (en) * 2005-03-04 2016-12-27 애플 인크. Multi-functional hand-held device
US8285499B2 (en) * 2009-03-16 2012-10-09 Apple Inc. Event recognition
US9684521B2 (en) * 2010-01-26 2017-06-20 Apple Inc. Systems having discrete and continuous gesture recognizers
US9250788B2 (en) * 2009-03-18 2016-02-02 IdentifyMine, Inc. Gesture handlers of a gesture engine
US8698845B2 (en) * 2010-01-06 2014-04-15 Apple Inc. Device, method, and graphical user interface with interactive popup views
US9519356B2 (en) * 2010-02-04 2016-12-13 Microsoft Technology Licensing, Llc Link gestures
US20120133579A1 (en) * 2010-11-30 2012-05-31 Microsoft Corporation Gesture recognition management
WO2012108668A2 (en) * 2011-02-10 2012-08-16 삼성전자 주식회사 Portable device comprising a touch-screen display, and method for controlling same
US20130038552A1 (en) * 2011-08-08 2013-02-14 Xtreme Labs Inc. Method and system for enhancing use of touch screen enabled devices
US20130346912A1 (en) * 2012-06-20 2013-12-26 Alexander Buening Method And System To Launch And Manage An Application On A Computer System Having A Touch Panel Input Device
KR102203885B1 (en) * 2013-04-26 2021-01-15 삼성전자주식회사 User terminal device and control method thereof
US10684740B2 (en) * 2013-11-04 2020-06-16 Facebook, Inc. Intervention conditions
US20150153897A1 (en) * 2013-12-03 2015-06-04 Microsoft Corporation User interface adaptation from an input source identifier change
US20160062636A1 (en) * 2014-09-02 2016-03-03 Lg Electronics Inc. Mobile terminal and control method thereof
US10048856B2 (en) * 2014-12-30 2018-08-14 Microsoft Technology Licensing, Llc Configuring a user interface based on an experience mode transition
US10095396B2 (en) * 2015-03-08 2018-10-09 Apple Inc. Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
US10101882B2 (en) * 2015-06-05 2018-10-16 Apple Inc. Movement between multiple views
US9846535B2 (en) * 2015-06-05 2017-12-19 Apple Inc. Devices and methods for processing touch inputs over multiple regions of a touch-sensitive surface
US9961239B2 (en) * 2015-06-07 2018-05-01 Apple Inc. Touch accommodation options
US9880735B2 (en) * 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017027623A1 (en) * 2015-08-10 2017-02-16 Apple Inc. Devices, methods, and graphical user interfaces for adjusting user interface objects

Also Published As

Publication number Publication date
CN111026306A (en) 2020-04-17
CN111026306B (en) 2021-02-02
EP3593235A1 (en) 2020-01-15
WO2018213414A1 (en) 2018-11-22
CN110651242A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
US11977726B2 (en) Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
US11556201B2 (en) Device, method, and user interface for processing intensity of touch contacts
US11954323B2 (en) Devices, methods, and graphical user interfaces for initiating a payment action in a messaging session
US11747975B2 (en) Devices, methods, and graphical user interfaces for touch input processing
AU2022224714B2 (en) Devices and methods for interacting with an application switching user interface
CN107430488B (en) Activity-based thresholds and feedback
JP2017152027A (en) Device, method, and graphical user interface for moving and dropping user interface object
WO2016196508A1 (en) Formatting content for a reduced-size user interface
CN110651242B (en) Apparatus, method and graphical user interface for touch input processing
US20220253189A1 (en) Devices and Methods for Interacting with an Application Switching User Interface
US10540071B2 (en) Device, method, and graphical user interface for displaying a zoomed-in view of a user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant