CN109769396B - Apparatus, method and graphical user interface for displaying an affordance over a background - Google Patents


Info

Publication number
CN109769396B
CN109769396B (application CN201880001526.8A)
Authority
CN
China
Prior art keywords
affordance
content
range
appearance
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201880001526.8A
Other languages
Chinese (zh)
Other versions
CN109769396A (en)
Inventor
W·S·万
C·G·卡鲁纳姆尼
M·阿朗索鲁伊斯
B·西查诺斯基
B·E·尼尔森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201770711A external-priority patent/DK179931B1/en
Application filed by Apple Inc filed Critical Apple Inc
Priority to CN202311082973.XA priority Critical patent/CN117032541A/en
Priority to CN201910756761.2A priority patent/CN110456979B/en
Priority to CN202111363213.7A priority patent/CN114063842A/en
Publication of CN109769396A publication Critical patent/CN109769396A/en
Application granted granted Critical
Publication of CN109769396B publication Critical patent/CN109769396B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 — for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0487 — using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces

Abstract

An electronic device having a display and a touch-sensitive surface displays content and an affordance on the display, wherein: the affordance is displayed over a portion of the content; a value of a display attribute of the affordance is determined based on a value of the same display attribute of the portion of the content over which the affordance is displayed; and the value of the display attribute of the content is allowed to vary within a first range of values, while the value of the display attribute of the affordance is constrained to vary within a second range of values that is smaller than the first range of values. While displaying the content and the affordance, the device detects a change in appearance of the content over which the affordance is displayed. In response to detecting the change in appearance of the content, the device changes the appearance of the affordance, including: in accordance with a determination that the value of the display attribute of the content has decreased, increasing the value of the display attribute of the affordance in accordance with the magnitude of the change in the value of the display attribute of the content and the second range of values; and in accordance with a determination that the value of the display attribute of the content has increased, decreasing the value of the display attribute of the affordance in accordance with the magnitude of the change in the value of the display attribute of the content and the second range of values.

Description

Apparatus, method and graphical user interface for displaying an affordance over a background
Technical Field
The present invention relates generally to electronic devices having a display and a touch-sensitive surface, including but not limited to electronic devices that display virtual affordances (e.g., controls, indicators, visual guides, etc.) on their touch-sensitive display screen.
Background
The use of touch-sensitive surfaces as input devices for computers and other electronic computing devices has grown significantly in recent years. Exemplary touch-sensitive surfaces include touch pads and touch screen displays. Such surfaces are widely used to manipulate user interfaces on displays and objects therein. Exemplary user interface objects include digital images, video, text, icons, and control elements (such as buttons) as well as other graphics.
Electronic computing devices typically display virtual controls or visual guides on their displays. For example, keyboards, menus, dialog boxes, alerts, and other controls may be activated and manipulated (e.g., via touch input) to cause operations to be performed on a portable electronic device (e.g., a smartphone, tablet, or notebook computer). Indicators and visual guides may be overlaid on a background (e.g., the user interface of an application or of the operating system) to provide visual cues, associated with a particular region of the background or screen, about the types of inputs that may be provided and/or the types of operations that may be performed.
Existing methods for displaying controls, indicators, and visual guides can be cumbersome and inefficient. For example, controls, indicators, and visual guides may needlessly distract the user while the user manipulates the user interface, or may be insufficiently clear or prominent relative to the background, causing user errors and confusion when the user interacts with the device, which may also negatively impact the power consumption of the device. This latter consideration is particularly important in battery-operated devices.
In addition, certain types of affordances are displayed over a wide variety of backgrounds and content, sometimes without moving for extended periods of time. As a result, after a period of use the display may exhibit ghost images (or afterimages) showing the affordance. Reducing and eliminating display afterimages is a long-standing challenge for display manufacturers. Some existing approaches to this problem, such as introducing a screen saver or flashing icons, are not satisfactory solutions, because they have side effects (e.g., causing eye strain or distraction) and in many cases lack efficacy (e.g., being usable only when the device is idle).
Disclosure of Invention
Thus, there is a need for electronic devices with affordances (e.g., virtual controls, indicators, and visual guides) that are prominent yet unobtrusive, helping to provide a sufficient degree of visual distinction to guide the user toward providing the required input to achieve the desired result, without unnecessarily distracting the user from the content displayed in the application or system user interface. In addition, for some affordances that are displayed for extended periods of time without movement, it is desirable to display the affordance in a manner that reduces or eliminates screen afterimages.
In addition, as the background content changes over time, whether automatically or in response to user input, the appearance of the affordance needs to adapt dynamically in order to remain efficient and effective for the purposes described above. In particular, affordances that serve as controls or as visual guides for gestures that trigger common system functions are displayed in many different contexts (e.g., over the user interfaces of different applications and of the operating system). Sometimes the background underlying the affordance in a given context (e.g., scrollable content, rapidly changing content, or unpredictable content) also changes dynamically, further requiring that the appearance of the affordance continually adapt to changes in the appearance of the background after the affordance is initially displayed over it. Examples of such affordances are system-level affordances that indicate the starting region of a home/multitasking gesture, which dismisses the currently displayed application user interface and displays a home screen user interface, or dismisses a cover-like system information interface (e.g., a notification center or lock screen user interface) and redisplays the previously displayed user interface (e.g., an application user interface or the home screen).
In addition, sometimes an operational context change occurs in the application after the affordance is displayed over the application user interface, which can qualitatively change the likelihood that the user will interact with the affordance or need its visual guidance. In such cases, the balance between keeping the affordance prominent and keeping it unobtrusive needs to be readjusted to maintain the effectiveness and efficiency of the user interface.
In addition, for some affordances that are displayed for extended periods of time without movement, it is desirable to display the affordance in a manner that reduces or eliminates screen afterimages.
These needs call for new methods and interfaces for displaying affordances and for adjusting the appearance of affordances (e.g., virtual controls, indicators, and visual guides) over a background. Such devices, methods, and interfaces can reduce the cognitive burden on the user and produce a more efficient human-machine interface. In addition, such devices, methods, and interfaces can reduce or eliminate screen afterimages, thereby reducing device maintenance costs and extending device life.
In some embodiments, the device is a desktop computer. In some embodiments, the device is portable (e.g., a notebook, tablet, or handheld device). In some embodiments, the device is a personal electronic device (e.g., a wearable electronic device such as a watch). In some embodiments, the device has a touch pad. In some implementations, the device has a touch sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the device has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, a program or set of instructions stored in the memory for performing a plurality of functions. In some embodiments, the user interacts with the GUI primarily through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions optionally include image editing, drawing, rendering, word processing, spreadsheet making, gaming, telephony, video conferencing, email, instant messaging, workout support, digital photography, digital video recording, web browsing, digital music playing, note recording, digital video playing, and system level operations (such as displaying home screens, locking devices, displaying system level notification screens, displaying system level control panel user interfaces, etc.). Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
According to some embodiments, a method includes, at a device having a display and a touch-sensitive surface: displaying content and an affordance on the display, wherein: the affordance is displayed over a portion of the content; a value of a display attribute of the affordance is determined based on a value of the same display attribute of the portion of the content over which the affordance is displayed; and the value of the display attribute of the content is allowed to vary within a first range of values, while the value of the display attribute of the affordance is constrained to vary within a second range of values that is smaller than the first range of values; while displaying the content and the affordance, detecting a change in appearance of the content over which the affordance is displayed; and in response to detecting the change in appearance of the content, changing the appearance of the affordance, including: in accordance with a determination that the value of the display attribute of the content has decreased, increasing the value of the display attribute of the affordance in accordance with the magnitude of the change in the value of the display attribute of the content and the second range of values; and in accordance with a determination that the value of the display attribute of the content has increased, decreasing the value of the display attribute of the affordance in accordance with the magnitude of the change in the value of the display attribute of the content and the second range of values.
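The patent does not give an implementation of this mapping. As an illustration only, the following Python sketch (the function name, range endpoints, and the choice of luminance as the display attribute are all hypothetical, not taken from the patent) inverts the content's display-attribute value and rescales it into a narrower affordance range, so the affordance value rises when the content value falls and vice versa:

```python
def affordance_value(content_value, content_range=(0.0, 1.0),
                     affordance_range=(0.3, 0.7)):
    """Hypothetical sketch: invert the content's display-attribute value
    (e.g., luminance) and map it into the narrower affordance range, so
    the affordance darkens as the content brightens and vice versa."""
    c_lo, c_hi = content_range
    a_lo, a_hi = affordance_range
    # Normalize into [0, 1], invert, then rescale into the constrained sub-range.
    normalized = (content_value - c_lo) / (c_hi - c_lo)
    inverted = 1.0 - normalized
    return a_lo + inverted * (a_hi - a_lo)
```

Because the affordance range is a sub-range of the content range, the affordance's value changes by a proportionally smaller amount than the content's, which is consistent with the constraint that the affordance's display attribute vary within a smaller second range of values.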
According to some embodiments, a method includes, at a device having a display and a touch-sensitive surface: displaying a user interface of an application; while the user interface of the application is displayed in a first mode, displaying an affordance having a first appearance over the user interface, wherein: the affordance is displayed over a portion of the user interface, and the values of a set of one or more display attributes of the affordance having the first appearance vary, according to a first set of one or more rules, with changes in the values of a set of one or more display attributes of the portion of the user interface underlying the affordance; while displaying the affordance having the first appearance over the portion of the user interface displayed in the first mode, detecting a request to transition from displaying the user interface in the first mode to displaying the user interface in a second mode; and in response to detecting the request: displaying the user interface in the second mode; and displaying an affordance having a second appearance over the user interface displayed in the second mode, wherein: according to a second set of one or more rules different from the first set of one or more rules, the values of the set of one or more display attributes of the affordance having the second appearance vary with changes in the values of the set of one or more display attributes of the portion of the user interface underlying the affordance.
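The patent does not specify the two rule sets. The sketch below is a purely hypothetical illustration of how mode-dependent rules might differ: an "interactive" mode whose affordance tracks the background closely, and a damped, lower-opacity rule set for a hypothetical full-screen mode. All mode names and constants are invented for illustration:

```python
def affordance_appearance(mode, background_luminance):
    """Hypothetical sketch of two rule sets. In 'interactive' mode the
    affordance tracks the background closely; in a full-screen mode the
    response is damped and the opacity lowered, so the affordance is
    less distracting (e.g., during media playback)."""
    if mode == "interactive":
        opacity, gain = 0.8, 1.0
    else:  # e.g., a hypothetical full-screen mode
        opacity, gain = 0.4, 0.5
    # Inverse response around mid-gray: a darker background yields a
    # brighter affordance, scaled by the mode's gain.
    luminance = 0.5 + gain * (0.5 - background_luminance)
    return {"opacity": opacity, "luminance": luminance}
```

Switching modes here swaps the rule set without changing the underlying inverse relationship, which matches the summary's structure: the same background attributes drive the affordance in both modes, but under different rules.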
According to some embodiments, a method includes, at a device having a display and a touch-sensitive surface: displaying content and an affordance on the display, wherein: the affordance is displayed over a portion of the content; a value of a display attribute of the affordance is determined based on a value of the same display attribute of the portion of the content over which the affordance is displayed; and the value of the display attribute of the content is allowed to vary within a first range of values, while the value of the display attribute of the affordance is constrained to vary within an affordance appearance value range that is smaller than the first range of values; while displaying the content and the affordance, and while the affordance appearance value range is a second range of values, detecting a change in appearance of the content over which the affordance is displayed; and in response to detecting the change in appearance of the content, changing the appearance of the affordance, including: in accordance with a determination that the change in appearance of the content meets range-switching criteria: shifting the affordance appearance value range to a third range of values, wherein the third range of values is different from the second range of values and is smaller than the first range of values; and changing the value of the display attribute of the affordance in accordance with the value of the display attribute of the content over which the affordance is displayed, wherein the display attribute of the affordance is constrained to vary within the affordance appearance value range; and in accordance with a determination that the change in appearance of the content does not meet the range-switching criteria, changing the value of the display attribute of the affordance in accordance with the value of the display attribute of the content over which the affordance is displayed, while maintaining the affordance appearance value range as the second range of values.
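The range-switching criteria are not spelled out in this summary. One plausible reading, sketched below with entirely hypothetical thresholds and ranges, switches the affordance appearance value range between a "dark" and a "light" sub-range only when the background luminance crosses a threshold by more than a hysteresis margin:

```python
def update_affordance_range(current_range, content_luminance,
                            dark_range=(0.1, 0.4), light_range=(0.6, 0.9),
                            threshold=0.5, hysteresis=0.1):
    """Hypothetical range-switching sketch: return the affordance's value
    range, switching between a 'dark' and a 'light' affordance appearance
    type only when the background luminance crosses the threshold by more
    than the hysteresis margin, so brief fluctuations in the content do
    not flip the appearance type back and forth."""
    if current_range == dark_range and content_luminance < threshold - hysteresis:
        return light_range  # background turned dark -> use the light affordance range
    if current_range == light_range and content_luminance > threshold + hysteresis:
        return dark_range   # background turned light -> use the dark affordance range
    return current_range
```

Both sub-ranges are smaller than the content's full range, matching the constraint that the third range of values, like the second, is smaller than the first.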
According to some embodiments, an electronic device includes: a display; a touch-sensitive surface; optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface; optionally one or more tactile output generators; one or more processors; and memory storing one or more programs, the one or more programs being configured to be executed by the one or more processors and including instructions for performing or causing performance of the operations of any of the methods described herein. According to some embodiments, a non-transitory computer-readable storage medium has stored therein instructions that, when executed by an electronic device with a display, a touch-sensitive surface, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, and optionally one or more tactile output generators, cause the device to perform, or cause performance of, the operations of any of the methods described herein. According to some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more tactile output generators, a memory, and one or more processors to execute one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described herein, which are updated in response to inputs. According to some embodiments, an electronic device includes: a display, a touch-sensitive surface, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, and optionally one or more tactile output generators; and means for performing or causing performance of the operations of any of the methods described herein.
According to some embodiments, an information processing apparatus for use in an electronic device with a display, a touch-sensitive surface, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, and optionally one or more tactile output generators includes means for performing or causing performance of the operations of any of the methods described herein.
Accordingly, electronic devices with a display, a touch-sensitive surface, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more tactile output generators, optionally one or more device orientation sensors, and optionally an audio system are provided with improved methods and interfaces for navigating between user interfaces and interacting with control objects, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace conventional methods for displaying affordances over a background.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
The patent or patent application contains at least one drawing in color. The patent office will provide copies of this patent or patent application publication with one or more color drawings at the request and payment of the necessary fee.
Fig. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 4A illustrates an exemplary user interface for an application menu on a portable multifunction device in accordance with some embodiments.
Fig. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 5A-5D illustrate exemplary user interfaces including affordances having appearances that are adapted to the appearance of a background, in accordance with some embodiments.
Fig. 5E illustrates a filter for generating the appearance of the affordances in fig. 5A-5D based on underlying content, in accordance with some embodiments.
Fig. 5F illustrates an exemplary inversion curve for performing the inversion illustrated in fig. 5E, according to some embodiments.
Fig. 5G-5K illustrate changes in appearance of affordances of a first affordance appearance type (e.g., a "dark" affordance type) in accordance with some embodiments.
Fig. 5L-5P illustrate changes in appearance of affordances of a second affordance appearance type (e.g., a "bright" affordance type) in accordance with some embodiments.
Fig. 5Q illustrates the difference in appearance of two types of affordances with the same change in background, in accordance with some embodiments.
FIG. 5R illustrates the value ranges of a display attribute of an affordance, and the inverse relationship between that display attribute and the underlying content, for the dark and light affordance appearance types, in accordance with some embodiments.
Figs. 5S-5AA illustrate a user interface including an affordance whose appearance changes in response to changes in the background and changes in an operational mode associated with the background, in accordance with some embodiments.
Figs. 5AB-5AC illustrate differences in the appearance of affordances displayed over backgrounds in different modes of operation, according to some embodiments.
FIG. 5AD illustrates a user interface including an affordance that dynamically switches between affordance appearance types based on underlying content over time, in accordance with some embodiments.
FIG. 5AE illustrates the value ranges of a display attribute of an affordance, and its inverse relationship to the underlying content, for the light affordance appearance type, the dark affordance appearance type, and transitional affordance appearance types, in accordance with some embodiments.
FIG. 5AF illustrates a gradual transition of the affordance from the dark affordance appearance type to the light affordance appearance type through multiple transitional appearance types, in accordance with some embodiments.
Figs. 5AG-5AK are enlarged reproductions of the inverse relationships between the display attribute of the affordance and the underlying content for the different affordance appearance types shown in FIG. 5AF.
Fig. 6A-6C are flowcharts illustrating methods of changing the appearance of an affordance as a function of changes in the appearance of underlying content in accordance with some embodiments.
Figs. 7A-7E are flowcharts illustrating methods of changing the appearance of an affordance in accordance with changes in the appearance of the underlying content and changes in the mode of the user interface over which the affordance is displayed, in accordance with some embodiments.
Fig. 8A-8F are flowcharts illustrating methods of changing the appearance of an affordance and the type of affordance appearance as a function of changes in the appearance of underlying content in accordance with some embodiments.
Detailed Description
Affordances displayed according to conventional methods are often visually distracting and may clutter the user interface. In addition, the appearance of an affordance is typically fixed and does not adapt to changes in the underlying content or to changes in the operating mode of the underlying user interface. The embodiments below disclose a way to display an affordance and change its appearance based on changes in the underlying content, where an inversion of a display property of the underlying content provides the basis for determining the value of the same display property of the affordance. Further, the value range of the display property of the affordance is constrained to a sub-range of the value range of the display property of the underlying content. For example, different value ranges of the luminance of the affordance are assigned to different affordance appearance types (e.g., a dark affordance appearance type and a light affordance appearance type) that are selected for different overall luminance levels of the background (e.g., a dark background and a light background). This promotes the visibility of the affordance against a changing background without unduly distracting the user. In some embodiments, depending on the operating mode of the user interface over which the affordance is displayed, the appearance of the affordance varies with the appearance of the background according to different sets of rules, providing a way to adjust the balance between maintaining the visibility of the affordance and reducing undue distraction as the operating context of the affordance changes. In some embodiments, both the appearance of the affordance and the affordance appearance type change in accordance with changes in the appearance of the underlying content (including momentary changes and cumulative changes over time).
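One way to account for cumulative (rather than momentary) changes in the underlying content, sketched here purely as an illustration with hypothetical parameters, is to drive the affordance from a smoothed background value rather than from each instantaneous sample:

```python
def smoothed_luminance(samples, alpha=0.1):
    """Hypothetical sketch: exponential moving average over successive
    background-luminance samples. Driving the affordance from the
    smoothed value (rather than the instantaneous one) keeps it from
    flickering while the underlying content scrolls or animates."""
    value = samples[0]
    for sample in samples[1:]:
        # Each new sample nudges the running value by a fraction alpha.
        value = alpha * sample + (1 - alpha) * value
    return value
```

A small `alpha` makes the affordance respond mainly to sustained changes in the background, which is one plausible realization of reacting to cumulative changes over time.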
Figs. 1A-1B, 2, and 3 below provide a description of exemplary devices. Figs. 4A-4B, 5A-5D, 5G-5P, 5S-5AA, and 5AD illustrate exemplary user interfaces with affordances that change their appearance in accordance with changes in the appearance of the underlying content, according to some embodiments. Figs. 5E, 5Q, 5R, 5AB, 5AC, and 5AE-5AK illustrate differences in affordance appearance according to some embodiments and the affordance appearance value ranges used to generate the affordances shown in Figs. 5A-5D, 5G-5P, 5S-5AA, and 5AD. Figs. 6A-6C, 7A-7E, and 8A-8F are flowcharts of methods of displaying affordances and adjusting their appearance, according to some embodiments. The user interfaces, affordances, and value ranges shown in Figs. 4A-4B and 5A-5AK are used to illustrate the processes in Figs. 6A-6C, 7A-7E, and 8A-8F.
Exemplary apparatus
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. Numerous specific details are set forth in the following detailed description in order to provide a thorough understanding of the various described embodiments. It will be apparent, however, to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements in some cases, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first contact may be named a second contact, and similarly, a second contact may be named a first contact without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact unless the context clearly indicates otherwise.
The terminology used in the description of the various illustrated embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is optionally interpreted to mean "when," "upon," "in response to determining," or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining," "in response to determining," "upon detecting [the stated condition or event]," or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are optionally used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: note taking applications, drawing applications, presentation applications, word processing applications, website creation applications, disk editing applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, workout support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
The various applications executing on the device optionally use at least one generic physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or changed for different applications and/or within the respective applications. In this way, the common physical architecture of the devices (such as the touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and transparent to the user.
Attention is now directed to embodiments of a portable device having a touch sensitive display. Fig. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes referred to as a "touch screen" for convenience and is sometimes referred to simply as a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external ports 124. The apparatus 100 optionally includes one or more optical sensors 164. The device 100 optionally includes one or more intensity sensors 165 for detecting intensity of contact on the device 100 (e.g., a touch-sensitive surface, such as the touch-sensitive display system 112 of the device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touch pad 355 of device 300). These components optionally communicate via one or more communication buses or signal lines 103.
As used in this specification and in the claims, the term "haptic output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., the housing), or displacement of a component relative to a center of mass of the device, that will be detected by a user with the user's sense of touch. For example, in situations where the device or a component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in the smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the user's individualized sensory perceptions, many sensory perceptions of touch are common to a large majority of users.
Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., an "up click," a "down click," "roughness"), unless otherwise stated, the generated haptic output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. Using haptic outputs to provide haptic feedback to a user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user errors when operating/interacting with the device), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the haptic output pattern specifies characteristics of the haptic output, such as the magnitude of the haptic output, the shape of the motion waveform of the haptic output, the frequency of the haptic output, and/or the duration of the haptic output.
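The pattern characteristics named above (amplitude, waveform shape, frequency, duration) can be collected into a small data structure. The sketch below is purely illustrative: the field names, waveform labels, and the two example patterns are assumptions for exposition, not values taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HapticOutputPattern:
    """One haptic output pattern: the characteristics that specify a haptic output."""
    amplitude: float      # magnitude of the output (normalized 0..1, hypothetical scale)
    waveform: str         # shape of the motion waveform, e.g. "tap" or "sine" (hypothetical labels)
    frequency_hz: float   # frequency of the haptic output
    duration_ms: float    # duration of the haptic output

# Two hypothetical patterns a device might map to different events of interest.
MICRO_TAP = HapticOutputPattern(amplitude=0.4, waveform="tap", frequency_hz=150.0, duration_ms=10.0)
FULL_TAP  = HapticOutputPattern(amplitude=1.0, waveform="tap", frequency_hz=230.0, duration_ms=30.0)
```

Because users can distinguish changes in waveform, frequency, and amplitude, a device can reserve distinct pattern instances like these for distinct operations.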
When the device generates haptic outputs with different haptic output patterns (e.g., via one or more haptic output generators that move a movable mass to generate haptic outputs), the haptic outputs may invoke different haptic sensations in a user holding or touching the device. While the sensation of the user is based on the user's perception of the haptic output, most users will be able to identify changes in the waveform, frequency, and amplitude of haptic outputs generated by the device. Thus, the waveform, frequency, and amplitude can be adjusted to indicate to the user that different operations have been performed. As such, haptic outputs with haptic output patterns that are designed, selected, and/or engineered to simulate characteristics (e.g., size, material, weight, stiffness, smoothness, etc.), behaviors (e.g., oscillation, displacement, acceleration, rotation, expansion, etc.), and/or interactions (e.g., collision, adhesion, repulsion, attraction, friction, etc.) of objects in a given environment (e.g., a user interface that includes graphical features and objects, a simulated physical environment with virtual boundaries and virtual objects, a real physical environment with physical boundaries and physical objects, and/or a combination of any of the above) will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device. Additionally, haptic outputs are optionally generated to correspond to feedback that is independent of a simulated physical characteristic (such as an input threshold or object selection). Such haptic outputs will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device.
In some embodiments, a haptic output with a suitable haptic output pattern serves as a cue that an event of interest has occurred in the user interface or behind the scenes in the device. Examples of events of interest include activation of an affordance (e.g., a real or virtual button, or a toggle switch) provided on the device or in a user interface, success or failure of a requested operation, reaching or crossing a boundary in a user interface, entry into a new state, switching of input focus between objects, activation of a new mode, reaching or crossing an input threshold, detection or recognition of a type of input or gesture, and so on. In some embodiments, haptic outputs are provided to serve as a warning or an alert of an impending event or outcome that will occur unless a redirection or interruption input is detected in time. Haptic outputs are also used in other contexts to enrich the user experience, improve the accessibility of the device to users with visual or motor difficulties or other accessibility needs, and/or improve the efficiency and functionality of the user interface and/or the device. Haptic outputs are optionally accompanied by audio outputs and/or visible user interface changes, which further enhance a user's experience when the user interacts with a user interface and/or a device, facilitate better conveyance of information regarding the state of the user interface and/or the device, and reduce input errors and increase the efficiency of the user's operation of the device.
It should be understood that the device 100 is merely one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 1A are implemented in hardware, software, firmware, or any combination thereof (including one or more signal processing circuits and/or application specific integrated circuits).
Memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 102 by other components of the device 100, such as the one or more CPUs 120 and the peripheral interface 118, is optionally controlled by a memory controller 122.
The peripheral interface 118 may be used to couple input and output peripherals of the device to the memory 102 and the one or more CPUs 120. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data.
In some embodiments, peripheral interface 118, one or more CPUs 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
The RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the Internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and with other devices. The wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to speaker 111. The speaker 111 converts electrical signals into sound waves that are audible to humans. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuit 110 and a removable audio input/output peripheral, such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as the touch-sensitive display system 112 and other input or control devices 116, to the peripheral device interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive electrical signals from/transmit electrical signals to other input or control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and the like. In some alternative implementations, one or more input controllers 160 are optionally coupled to (or not coupled to) any of the following: a keyboard, an infrared port, a USB port, a stylus, and/or a pointing device such as a mouse. One or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
The touch sensitive display system 112 provides an input interface and an output interface between the device and the user. The display controller 156 receives electrical signals from the touch sensitive display system 112 and/or transmits electrical signals to the touch sensitive display system 112. The touch sensitive display system 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some implementations, some or all of the visual outputs correspond to user interface objects. Examples of user interactive graphical user interface objects include, but are not limited to, buttons, sliders, icons, selectable menu items, switches, hyperlinks, or other user interface controls. In some implementations, some or all of the visual outputs correspond to indicators and visual guides that provide visual cues of the kind of input and/or operation associated with different areas of a user interface or screen. Examples of indicators and visual guides include, but are not limited to, arrows, bars, covers, spotlights, or other visually distinct areas or shapes designed to provide visual cues to a user. As used herein, the term "affordance" refers to user-interactive graphical user interface objects and/or indicators and visual guides displayed on a background (e.g., an application user interface or a portion of a system user interface).
The touch sensitive display system 112 has a touch sensitive surface, sensor, or set of sensors that receives input from a user based on haptic and/or tactile contact. The touch-sensitive display system 112 and the display controller 156 (along with any associated modules and/or sets of instructions in the memory 102) detect contact (and any movement or interruption of the contact) on the touch-sensitive display system 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on the touch-sensitive display system 112. In some implementations, the point of contact between the touch-sensitive display system 112 and the user corresponds to a user's finger or stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch-sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, California.
The touch sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some implementations, the touch screen video resolution exceeds 400dpi (e.g., 500dpi, 800dpi, or greater). The user optionally uses any suitable object or appendage, such as a stylus, finger, or the like, to contact the touch sensitive display system 112. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, the device 100 optionally includes a touch pad (not shown) for activating or deactivating particular functions in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike the touch screen, does not display visual output. The touch pad is optionally a touch-sensitive surface separate from the touch-sensitive display system 112 or an extension of the touch-sensitive surface formed by the touch screen.
The apparatus 100 also includes a power system 162 for powering the various components. The power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The apparatus 100 optionally further comprises one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The one or more optical sensors 164 optionally include a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The one or more optical sensors 164 receive light projected through the one or more lenses from the environment and convert the light into data representing an image. In conjunction with imaging module 143 (also referred to as a camera module), one or more optical sensors 164 optionally capture still images and/or video. In some embodiments, the optical sensor is located on the back of the device 100 opposite the touch sensitive display system 112 on the front of the device, enabling the touch screen to be used as a viewfinder for still image and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device to acquire an image of the user (e.g., for self-timer shooting, for video conferencing while the user views other video conference participants on a touch screen, etc.).
The apparatus 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The one or more contact strength sensors 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other strength sensors (e.g., sensors for measuring force (or pressure) of a contact on a touch-sensitive surface). One or more contact strength sensors 165 receive contact strength information (e.g., pressure information or a surrogate for pressure information) from the environment. In some implementations, at least one contact intensity sensor is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on a rear of the device 100 opposite the touch sensitive display system 112 located on a front of the device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled to the input controller 160 in the I/O subsystem 106. In some implementations, the proximity sensor turns off and disables the touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
The device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a haptic output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. In some embodiments, the one or more tactile output generators 167 include one or more electroacoustic devices such as speakers or other audio components; and/or electromechanical devices for converting energy into linear motion such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating means (e.g., means for converting an electrical signal into a tactile output on a device). The one or more haptic output generators 167 receive haptic feedback generation instructions from the haptic feedback module 133 and generate haptic output on the device 100 that can be perceived by a user of the device 100. In some embodiments, at least one tactile output generator is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., inward/outward of the surface of device 100) or laterally (e.g., backward and forward in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on a rear of the device 100 opposite the touch sensitive display system 112 located on a front of the device 100.
The device 100 optionally further includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled with input controller 160 in I/O subsystem 106. In some implementations, information is displayed in a portrait view or a landscape view on a touch screen display based on analysis of data received from the one or more accelerometers. The device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to the accelerometer 168 for obtaining information about the position and orientation (e.g., longitudinal or lateral) of the device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a haptic feedback module (or instruction set) 133, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application program (or instruction set) 136. Further, in some embodiments, memory 102 stores device/global internal state 157, as shown in fig. 1A and 3. The device/global internal state 157 includes one or more of the following: an active application state indicating which applications (if any) are currently active; display status, which indicates what applications, views, or other information occupy various areas of the touch-sensitive display system 112; sensor status, including information obtained from various sensors of the device and other input or control devices 116; and position and/or orientation information about the position and/or pose of the device.
Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by the RF circuitry 108 and/or the external port 124. The external port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone, iPod Touch, and iPad devices.
The contact/motion module 130 optionally detects contact with the touch-sensitive display system 112 (in conjunction with the display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., "multitouch"/multi-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
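Determining speed, velocity, and acceleration from a series of contact data can be sketched with simple finite differences over successive samples. The sample format `(t, x, y)` and the function name below are hypothetical conventions for illustration, not part of this disclosure.

```python
import math

def contact_motion(samples):
    """samples: time-ordered list of (t, x, y) contact-data points for one contact.

    Returns (speed, velocity, acceleration) over the most recent intervals, where
    speed is a magnitude, velocity is a (vx, vy) vector (magnitude and direction),
    and acceleration is the change in velocity per unit time (a change in
    magnitude and/or direction)."""
    (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) = samples[-3:]
    v1 = ((x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0))  # velocity over first interval
    v2 = ((x2 - x1) / (t2 - t1), (y2 - y1) / (t2 - t1))  # velocity over last interval
    speed = math.hypot(*v2)                               # magnitude of the latest velocity
    accel = ((v2[0] - v1[0]) / (t2 - t1), (v2[1] - v1[1]) / (t2 - t1))
    return speed, v2, accel
```

For a contact moving along the x-axis that covers 10 points in the first half second and 20 points in the next, this yields a speed of 40 points/s with acceleration purely in x.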
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, gestures are optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at an icon location). As another example, detecting a finger swipe gesture on a touch-sensitive surface includes detecting a finger press event, then detecting one or more finger drag events, and then detecting a finger lift (lift off) event. Similarly, taps, swipes, drags, and other gestures of the stylus are optionally detected by detecting a particular contact pattern of the stylus.
In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger-down event and the finger-up event, but is independent of the intensity of the finger contact between the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected in accordance with a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), independent of whether the intensity of the finger contact during the tap meets a given intensity threshold (greater than a nominal contact-detection intensity threshold), such as a light press or deep press intensity threshold. A finger tap gesture, therefore, can satisfy particular input criteria that do not require that the characteristic intensity of a contact satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact-detection intensity threshold, below which the contact is not detected, in order for the finger-down event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. In cases where the device is capable of detecting a finger or stylus contact hovering over a touch-sensitive surface, the nominal contact-detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
The same concepts apply in an analogous manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a depinch (spread) gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of the intensities of contacts included in the gesture, or do not require that the contact(s) performing the gesture reach intensity thresholds in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts towards each other; a depinch gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of the contact on the touch-sensitive surface with less than a threshold amount of movement. As such, the statement that particular gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria are capable of being satisfied if the contact(s) in the gesture do not reach the respective intensity threshold, and are also capable of being satisfied in circumstances where one or more of the contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that the finger-down and finger-up events are detected within a predefined time period, without regard to whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement.
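The intensity-independent criteria just described can be sketched as simple predicates over time, movement, and inter-contact distance. All threshold values and function names below are hypothetical choices for illustration; the text only specifies that such thresholds exist, not their values.

```python
TAP_MAX_DURATION = 0.3     # seconds; the text cites 0.1-0.5 s as example values
SWIPE_MIN_MOVEMENT = 10.0  # movement threshold in points (hypothetical value)

def classify_single_contact(duration, movement):
    """Classify a completed single-contact input using only time and movement,
    never intensity: a swipe if it moved enough, a tap if it ended quickly
    with little movement, a long press if it stayed put past the tap window."""
    if movement >= SWIPE_MIN_MOVEMENT:
        return "swipe"
    if duration < TAP_MAX_DURATION:
        return "tap"
    return "long-press"

def classify_two_contacts(d_start, d_end):
    """Pinch if two contacts moved toward each other (distance shrank),
    depinch (spread) if they moved away from each other."""
    return "pinch" if d_end < d_start else "depinch"
```

Note that intensity appears nowhere in these predicates: a contact that happens to press hard still classifies as a tap or swipe if the time and movement criteria are met.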
Even in implementations where detection of a gesture is influenced by the intensity of the contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or delays detection of a tap input when the intensity of the contact is higher), detection of those gestures does not require that the contacts reach a particular intensity threshold, so long as the criteria for recognizing the gesture can be met in circumstances where the contact does not reach that particular intensity threshold (e.g., even if the amount of time it takes to recognize the gesture changes).
In some cases, contact intensity thresholds, duration thresholds, and movement thresholds are combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element can provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria does not require that the intensity of the contact meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other intensity-dependent gesture recognition criteria that identify other gestures whose criteria are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some cases, first gesture recognition criteria for a first gesture (which do not require that the intensity of the contact meet a respective intensity threshold in order for the first gesture recognition criteria to be met) are in competition with second gesture recognition criteria for a second gesture (which depend on the contact reaching the respective intensity threshold). In such a competition, the gesture is optionally not recognized as meeting the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are met first. For example, if the contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture.
Even in this case, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact meet the respective intensity threshold in order for the first gesture recognition criteria to be met, because if the contact had remained below the respective intensity threshold until the end of the gesture (e.g., a swipe gesture with a contact whose intensity does not increase above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. Thus, particular gesture recognition criteria that do not require that the intensity of the contact meet a respective intensity threshold in order for the particular gesture recognition criteria to be met will (A) in some circumstances ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some circumstances still be dependent on the intensity of the contact with respect to the intensity threshold, in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognize the input as corresponding to an intensity-dependent gesture before the particular gesture recognition criteria recognize the gesture (e.g., for a long press gesture that is competing with a deep press gesture for recognition).
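The deep-press-versus-swipe competition described above can be sketched as a race: whichever criterion is satisfied first by the incoming samples claims the gesture. This is a hypothetical Python illustration (not from the patent); the thresholds, units, and sample format are assumptions:

```python
# Hypothetical sketch of competing recognizers: a deep press (intensity-
# dependent) races a swipe (movement-dependent). Samples are processed in
# order; whichever threshold is crossed first decides the gesture.

INTENSITY_THRESHOLD = 1.0   # deep press intensity (assumed units)
MOVE_THRESHOLD = 10.0       # swipe travel in points (assumed)

def recognize(samples):
    """samples: list of (cumulative_movement, intensity) tuples over time."""
    for movement, intensity in samples:
        if intensity >= INTENSITY_THRESHOLD:
            return "deep press"     # intensity threshold reached first
        if movement > MOVE_THRESHOLD:
            return "swipe"          # movement threshold reached first
    return "none"

# Intensity crosses its threshold before the finger travels far: deep press.
print(recognize([(2, 0.3), (5, 1.2), (20, 1.5)]))   # deep press
# The finger travels far while intensity stays low: swipe wins the race.
print(recognize([(4, 0.2), (12, 0.3), (30, 0.4)]))  # swipe
```

A contact that stays below the intensity threshold for its whole lifetime is still recognized as a swipe, which is why the swipe criteria are described as not requiring any intensity threshold to be met.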
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attribute) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for designating graphics to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) to generate haptic output at one or more locations on device 100 using haptic output generator 167 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services, such as the weather desktop applet, the local yellow pages desktop applet, and map/navigation desktop applets).
The application 136 optionally includes the following modules (or sets of instructions) or a subset or superset thereof:
contact module 137 (sometimes referred to as an address book or contact list);
a telephone module 138;
video conferencing module 139;
email client module 140;
an Instant Messaging (IM) module 141;
a fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
browser module 147;
calendar module 148;
a desktop applet module 149, optionally including one or more of: weather desktop applet 149-1, stock desktop applet 149-2, calculator desktop applet 149-3, alarm desktop applet 149-4, dictionary desktop applet 149-5 and other desktop applets obtained by the user, and user created desktop applet 149-6;
A desktop applet creator module 150 for forming a user-created desktop applet 149-6;
search module 151;
a video and music player module 152, optionally consisting of a video player module and a music player module;
notepad module 153;
map module 154; and/or
An online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contact module 137 includes executable instructions for managing an address book or contact list (e.g., stored in application internal state 192 of contact module 137 in memory 102 or memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating one or more telephone numbers, one or more email addresses, one or more physical addresses, or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers and/or email addresses to initiate and/or facilitate communications by telephone 138, videoconference 139, email 140, or instant message 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, phone module 138 includes executable instructions for: entering a sequence of characters corresponding to a telephone number, accessing one or more telephone numbers in address book 137, modifying a telephone number that has been entered, dialing a respective telephone number, conducting a conversation, and disconnecting or hanging up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured by the camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for: entering a sequence of characters corresponding to an instant message, modifying previously entered characters, transmitting a respective instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages, or using XMPP, SIMPLE, Apple Push Notification service (APNs), or IMPS for internet-based instant messages), receiving instant messages, and viewing received instant messages. In some implementations, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments as are supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and video and music player module 152, workout support module 142 includes executable instructions for creating a workout (e.g., with time, distance, and/or calorie burn targets); communication with fitness sensors (in sports equipment and smart watches); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for exercise; and displaying, storing and transmitting the fitness data.
In conjunction with touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or videos (including video streams) and storing them in the memory 102, modifying features of the still images or videos, and/or deleting the still images or videos from the memory 102.
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, the text input module 134, and the camera module 143, the image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still images and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, touch module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet (including searching, linking to, receiving, and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the text input module 134, and the browser module 147, the desktop applet module 149 is a mini-application optionally downloaded and used by a user (e.g., weather desktop applet 149-1, stock desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) or created by a user (e.g., user-created desktop applet 149-6). In some embodiments, a desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, a desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applets).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, desktop applet creator module 150 includes executable instructions for creating an applet (e.g., turning a user-specified portion of a web page into the applet).
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching for text, music, sound, images, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuit 110, the speaker 111, the RF circuit 108, and the browser module 147, the video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats (such as MP3 or AAC files), as well as executable instructions for displaying, presenting, or otherwise playing back video (e.g., on the touch-sensitive display system 112 or on an external display wirelessly connected via the external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple inc.).
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions for creating and managing notepads, to-do lists, and the like in accordance with user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the text input module 134, the GPS module 135, and the browser module 147, the map module 154 includes executable instructions for receiving, displaying, modifying, and storing maps and data associated with maps (e.g., driving directions, data of stores and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with the touch sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuit 110, the speaker 111, the RF circuit 108, the text input module 134, the email client module 140, and the browser module 147, the online video module 155 includes executable instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen 112 or on an external display connected wirelessly or via the external port 124), send emails with links to particular online videos, and otherwise manage online videos in one or more file formats such as H.264. In some embodiments, the instant messaging module 141 is used to send links to particular online videos instead of the email client module 140.
Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in the present disclosure (e.g., computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented in separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which the operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
A predefined set of functions performed solely by the touch screen and/or touch pad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by a user, navigates the device 100 from any user interface displayed on the device 100 to a main menu, home menu, or root menu. In such implementations, a touch pad is used to implement a "menu button". In some other embodiments, the menu buttons are physical push buttons or other physical input control devices rather than touch pads.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 102 (in FIG. 1A) or memory 370 (in FIG. 3) includes event sorter 170 (e.g., in operating system 126) and corresponding applications 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
Event sorter 170 receives event information and determines the application 136-1, and the application view 191 of application 136-1, to which the event information is to be delivered. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some implementations, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) are currently active, and application internal state 192 is used by event sorter 170 to determine the application views 191 to which to deliver event information.
In some implementations, the application internal state 192 includes additional information, such as one or more of the following: restoration information to be used when the application 136-1 resumes execution, user interface state information indicating information being displayed by the application 136-1 or ready for display by the application 136-1, a state queue for enabling a user to return to a previous state or view of the application 136-1, and a repeat/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on the touch sensitive display system 112 as part of a multi-touch gesture). The peripheral interface 118 transmits information it receives from the I/O subsystem 106 or sensors, such as a proximity sensor 166, one or more accelerometers 168, and/or microphone 113 (via audio circuitry 110). The information received by the peripheral interface 118 from the I/O subsystem 106 includes information from the touch-sensitive display system 112 or touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, the peripheral interface 118 transmits event information. In other embodiments, the peripheral interface 118 transmits event information only if there is a significant event (e.g., an input above a predetermined noise threshold is received and/or an input exceeding a predetermined duration is received).
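The "significant event" filtering described above can be sketched as a simple predicate applied before event information is forwarded. This is a hypothetical Python illustration (not from the patent); the thresholds, field names, and event format are assumptions:

```python
# Hypothetical sketch of the peripherals interface forwarding event
# information only for significant events: those above a noise threshold
# or lasting longer than a minimum duration.

NOISE_THRESHOLD = 0.05     # minimum input magnitude worth reporting (assumed)
MIN_DURATION = 0.2         # minimum input duration worth reporting (assumed)

def is_significant(magnitude, duration):
    return magnitude > NOISE_THRESHOLD or duration > MIN_DURATION

def forward_events(events):
    """Keep only events that pass the significance filter."""
    return [e for e in events if is_significant(e["magnitude"], e["duration"])]

events = [
    {"name": "blip",  "magnitude": 0.01, "duration": 0.05},  # noise: dropped
    {"name": "press", "magnitude": 0.4,  "duration": 0.1},   # kept
]
print([e["name"] for e in forward_events(events)])  # ['press']
```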
In some implementations, the event classifier 170 also includes a hit view determination module 172 and/or an active event identifier determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display system 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is optionally called the hit view, and the set of events that are recognized as proper inputs is optionally determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle sub-events. In most cases, the hit view is the lowest level view in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that form the event or potential event) occurs. Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
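The hit-view walk described above (finding the lowest view in the hierarchy that contains the touch point) can be sketched as a recursive descent. This is a hypothetical Python illustration (not from the patent); the rectangle-based view class and its coordinates are assumptions:

```python
# Hypothetical sketch of hit-view determination: descend the view hierarchy
# and return the lowest (deepest) view whose bounds contain the touch point.

class View:
    def __init__(self, name, x, y, w, h, subviews=()):
        self.name = name
        self.frame = (x, y, w, h)
        self.subviews = list(subviews)

    def contains(self, px, py):
        x, y, w, h = self.frame
        return x <= px < x + w and y <= py < y + h

def hit_view(view, px, py):
    """Return the deepest view containing (px, py), or None."""
    if not view.contains(px, py):
        return None
    for sub in view.subviews:       # prefer a deeper view if one matches
        hit = hit_view(sub, px, py)
        if hit is not None:
            return hit
    return view                     # no subview matched: this view wins

button = View("button", 20, 20, 40, 20)
panel = View("panel", 10, 10, 100, 80, [button])
root = View("root", 0, 0, 320, 480, [panel])

print(hit_view(root, 25, 25).name)   # button
print(hit_view(root, 5, 5).name)     # root
```

Once such a hit view is identified, it typically receives all sub-events of the same touch, matching the behavior described in the paragraph above.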
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some implementations, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively engaged views, and therefore determines that all actively engaged views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events are entirely confined to the area associated with one particular view, views higher in the hierarchy still remain actively engaged views.
The event dispatcher module 174 dispatches event information to an event recognizer (e.g., event recognizer 180). In embodiments that include an active event recognizer determination module 173, the event dispatcher module 174 delivers event information to the event recognizers determined by the active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue that is retrieved by the corresponding event receiver module 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, the application 136-1 includes an event classifier 170. In another embodiment, the event classifier 170 is a stand-alone module or part of another module stored in the memory 102, such as the contact/motion module 130.
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, where each application view includes instructions for processing touch events that occur within a respective view of the user interface of the application. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit (not shown) or a higher level object from which the application 136-1 inherits methods and other properties. In some implementations, the respective event handlers 190 include one or more of the following: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or invokes data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of application views 191 include one or more corresponding event handlers 190. Additionally, in some implementations, one or more of the data updater 176, the object updater 177, and the GUI updater 178 are included in a respective application view 191.
The corresponding event identifier 180 receives event information (e.g., event data 179) from the event classifier 170 and identifies events from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 further includes at least a subset of both metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
The event receiver 182 receives event information from the event sorter 170. The event information includes information about sub-events such as touches or touch movements. The event information also includes additional information, such as the location of the sub-event, according to the sub-event. When a sub-event relates to movement of a touch, the event information optionally also includes the rate and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another orientation (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about a current orientation of the device (also referred to as a device pose).
Event comparator 184 compares the event information with predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some implementations, sub-events in an event 187 include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch start) on the displayed object for a predetermined period of time, a first lift-off (touch end) for a predetermined period of time, a second touch (touch start) on the displayed object for a predetermined period of time, and a second lift-off (touch end) for a predetermined period of time. In another example, the definition for event 2 (187-2) is a drag on a displayed object. The drag, for example, comprises a touch (or contact) on the displayed object for a predetermined period of time, movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
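An event comparator of this kind matches an incoming sub-event sequence against predefined event definitions, tracking whether the sequence has matched, is still possible, or has failed. This is a hypothetical Python illustration (not from the patent); the definitions and state names are assumptions chosen for the example:

```python
# Hypothetical sketch of an event comparator: an event definition is a
# predefined sequence of sub-events, and an observed sub-event sequence is
# matched against it prefix by prefix.

DOUBLE_TAP = ["touch-start", "touch-end", "touch-start", "touch-end"]
DRAG = ["touch-start", "touch-move", "touch-end"]

def compare(observed, definition):
    """Return 'matched', 'possible' (a proper prefix), or 'failed'."""
    if observed == definition:
        return "matched"
    if definition[:len(observed)] == observed:
        return "possible"   # still consistent; wait for more sub-events
    return "failed"         # sequence diverged; recognizer gives up

seq = ["touch-start", "touch-end", "touch-start"]
print(compare(seq, DOUBLE_TAP))  # possible
print(compare(seq, DRAG))        # failed
print(compare(DRAG, DRAG))       # matched
```

The "failed" outcome corresponds to the recognizer state described below in which subsequent sub-events of the gesture are ignored by that recognizer.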
In some implementations, event definitions 187 include a definition of an event for a respective user interface object. In some implementations, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view in which three user interface objects are displayed on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object triggering the hit test.
In some implementations, the definition of the respective event 187 also includes a delay action that delays delivery of the event information until it has been determined whether the sequence of sub-events does or does not correspond to an event type of the event recognizer.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any of the events in the event definition 186, the respective event recognizer 180 enters an event impossible, event failed, or event end state after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively engaged event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact or are able to interact with each other. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to different levels in a view or programmatic hierarchy.
In some implementations, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of the event are recognized. In some implementations, the respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending of) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some implementations, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about the sub-event without activating the event handler. Instead, the sub-event delivery instructions deliver the event information to an event handler associated with the sub-event sequence or to an actively engaged view. Event handlers associated with the sequence of sub-events or with the actively engaged views receive the event information and perform a predetermined process.
In some embodiments, the data updater 176 creates and updates data used in the application 136-1. For example, the data updater 176 updates telephone numbers used in the contacts module 137 or stores video files used in the video or music player module 152. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, the object updater 177 creates a new user interface object or updates the location of the user interface object. GUI updater 178 updates the GUI. For example, the GUI updater 178 prepares the display information and sends the display information to the graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, the data updater 176, the object updater 177, and the GUI updater 178 are included in a single module of the respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be appreciated that the above discussion regarding event handling of user touches on a touch-sensitive display also applies to other forms of user input that operate the multifunction device 100 with an input device, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally in conjunction with single or multiple keyboard presses or holds; contact movements on a touchpad, such as taps, drags, scrolls, etc.; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally used as inputs corresponding to the sub-events that define an event to be recognized.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen (e.g., touch-sensitive display system 112 of fig. 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within a User Interface (UI) 200. In these embodiments, as well as in other embodiments described below, a user can select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, upward and/or downward), and/or a rolling of a finger that has made contact with the device 100 (right to left, left to right, upward and/or downward). In some implementations or in some circumstances, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch screen display.
In some embodiments, the device 100 includes a touch screen display, a menu button 204 (sometimes referred to as a home button 204), a press button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. The press button 206 is optionally used to turn the device on/off by depressing the button and holding it in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing it before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In some embodiments, the device 100 also accepts voice input through the microphone 113 for activating or deactivating certain functions. The device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch-sensitive display system 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of the device 100.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). The device 300 generally includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 with a display 340, which is typically a touch screen display. The I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 for generating tactile outputs on the device 300 (e.g., similar to the one or more tactile output generators 167 described above with reference to fig. 1A), and sensors 359 (e.g., optical sensors, acceleration sensors, proximity sensors, touch-sensitive sensors, and/or contact intensity sensors similar to the one or more contact intensity sensors 165 described above with reference to fig. 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310.
In some embodiments, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (fig. 1A) optionally does not store these modules.
Each of the above identified elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the identified modules corresponds to a set of instructions for performing the functions described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of a user interface ("UI") optionally implemented on the portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
one or more signal strength indicators for one or more wireless communications, such as cellular signals and Wi-Fi signals;
time;
Bluetooth indicator;
battery status indicator;
tray 408 with common application icons, such as:
icon 416 of phone module 138 marked "phone", optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of the email client module 140 marked "mail" optionally including an indicator 410 of the number of unread emails;
icon 420 of browser module 147 marked "browser"; and
icon 422 labeled "music" for video and music player module 152; and
icons of other applications, such as:
icon 424 marked "message" for IM module 141;
icon 426 of calendar module 148 marked "calendar";
icon 428 marked "photo" of image management module 144;
icon 430 marked "camera" for camera module 143;
icon 432 of online video module 155 marked "online video";
icon 434 labeled "stock market" for stock market desktop applet 149-2;
icon 436 marked "map" of map module 154;
Icon 438 marked "weather" for weather desktop applet 149-1;
icon 440 marked "clock" for alarm desktop applet 149-4;
icon 442 labeled "fitness support" for fitness support module 142;
icon 444 labeled "notepad" for notepad module 153; and
icon 446 for setting an application or module, which provides access to the settings of device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 in fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 in fig. 3) separate from the display 450. The device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 359) for detecting the intensity of contacts on the touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of the device 300.
While many examples will be given later with reference to inputs on touch screen display 112 (where the touch sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch sensitive surface separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to the primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). According to these implementations, the device detects contact with the touch-sensitive surface 451 at locations corresponding to respective locations on the display (e.g., 460 and 462 in fig. 4B) (e.g., 460 corresponds to 468 and 462 corresponds to 470 in fig. 4B). Thus, user inputs (e.g., contacts 460 and 462 and movement thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separate from the display of the multifunction device. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
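Under these implementations, the correspondence between a contact location on the separate touch-sensitive surface and a location on the display can be sketched as a scaling along each primary axis. The function below is an illustrative assumption; the text does not prescribe a particular mapping formula.

```python
def map_to_display(touch_pos, touch_size, display_size):
    """Map a contact location on a separate touch-sensitive surface to the
    corresponding location on the display by scaling each primary axis.
    (Illustrative sketch; an assumption, not a formula from the text.)"""
    tx, ty = touch_pos
    tw, th = touch_size
    dw, dh = display_size
    return (tx * dw / tw, ty * dh / th)

# A contact at the center of a 100x60 touchpad corresponds to the center
# of a 1920x1080 display.
print(map_to_display((50, 30), (100, 60), (1920, 1080)))  # (960.0, 540.0)
```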
Additionally, while the following examples are primarily presented with reference to finger inputs (e.g., finger contacts, single-finger flick gestures, finger swipe gestures, etc.), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a flick gesture is optionally replaced by a mouse click (e.g., instead of detection of contact, followed by ceasing to detect contact) when the cursor is over the position of the flick gesture. Similarly, when multiple user inputs are detected simultaneously, it should be appreciated that multiple computer mice are optionally used simultaneously, or that the mice and finger contacts are optionally used simultaneously.
As used herein, the term "focus selector" refers to an input element that indicates the current portion of a user interface with which a user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or the touch screen in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a contact detected on the touch screen acts as a "focus selector" so that when an input (e.g., a press input by the contact) is detected on the touch screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another); in these implementations, the focus selector moves in accordance with the movement of focus between the different areas of the user interface.
Regardless of the particular form that the focus selector takes, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by the user to communicate a user-desired interaction with the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touch screen), the position of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (rather than other user interface elements shown on the device display).
As used in this specification and in the claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of the contact on the touch-sensitive surface (e.g., finger contact or stylus contact), or refers to a surrogate (surrogate) of the force or pressure of the contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., weighted average or summation) to determine an estimated contact force. Similarly, the pressure-sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area and/or its variation detected on the touch-sensitive surface, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or its variation and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or its variation are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, surrogate measurements of contact force or pressure are directly used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to surrogate measurements). 
In some implementations, an alternative measurement of contact force or pressure is converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as an attribute of the user input, allowing the user to access additional device functions that would otherwise not be readily accessible on a smaller-sized device for displaying affordances and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls such as knobs or buttons).
In some implementations, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether the user has "clicked" on an icon). In some implementations, at least a subset of the intensity thresholds are determined according to software parameters (e.g., the intensity thresholds are not determined by activation thresholds of particular physical actuators, and may be adjusted without changing the physical hardware of the device 100). For example, without changing the touchpad or touch screen display hardware, the mouse "click" threshold of the touchpad or touch screen display may be set to any of a wide range of predefined thresholds. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to the characteristic of a contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined period of time (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detection of contact, before or after detection of lift-off of contact, before or after detection of start of movement of contact, before or after detection of end of contact, and/or before or after detection of decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: maximum value of contact intensity, average value of contact intensity, value at the first 10% of contact intensity, half maximum value of contact intensity, 90% maximum value of contact intensity, value generated by low-pass filtering contact intensity over a predefined period of time or from a predefined time, etc. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. 
In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some implementations, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether to perform one or more operations (e.g., whether to perform a respective operation or to forgo performing the respective operation), rather than to determine whether to perform the first operation or the second operation.
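The example above can be sketched as follows, using the mean of the intensity samples as the characteristic intensity (one of the choices listed) and comparing it against the two thresholds. The function names and threshold values are assumptions for illustration.

```python
def characteristic_intensity(samples):
    """One possible characteristic intensity: the mean of the intensity samples
    collected during the predetermined period. Other choices named in the text
    include the maximum, a top-10% value, or a low-pass filtered value."""
    return sum(samples) / len(samples)

def operation_for(samples, first_threshold, second_threshold):
    """Select an operation by comparing the characteristic intensity against
    the first and second intensity thresholds, as in the example above."""
    ci = characteristic_intensity(samples)
    if ci <= first_threshold:
        return "first operation"
    elif ci <= second_threshold:
        return "second operation"
    return "third operation"

print(operation_for([0.2, 0.3, 0.4], 0.5, 1.0))  # first operation
print(operation_for([0.6, 0.7, 0.8], 0.5, 1.0))  # second operation
print(operation_for([1.2, 1.4, 1.6], 0.5, 1.0))  # third operation
```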
In some implementations, a portion of a gesture is identified for the purpose of determining a characteristic intensity. For example, the touch-sensitive surface may receive a continuous swipe contact that transitions from a start position to an end position (e.g., a drag gesture), at which end position the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end position may be based on only a portion of the continuous swipe contact, rather than the entire swipe contact (e.g., only the portion of the swipe contact at the end position). In some implementations, a smoothing algorithm may be applied to the intensities of the swipe gesture prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of the following: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for the purpose of determining the characteristic intensity.
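As a sketch of one of the algorithms named above, an unweighted sliding-average smoother can be written as follows; the window size is an assumption for illustration.

```python
def smooth_unweighted(samples, window=3):
    """Unweighted sliding-average smoothing: each output value is the mean of
    the current sample and up to (window - 1) preceding samples. The window
    size of 3 is an illustrative assumption."""
    smoothed = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        chunk = samples[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A narrow spike (5) in the intensity samples is flattened for the purpose of
# determining the characteristic intensity.
print(smooth_unweighted([0, 0, 5, 0, 0]))
```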
The user interface figures described herein optionally include various intensity diagrams that illustrate the current intensity of a contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITH that is lower than ITL)). The intensity diagrams are typically not part of the displayed user interface, but are provided to aid in the interpretation of the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a touchpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from the operations typically associated with clicking a button of a physical mouse or a touchpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold IT0, below which the contact is no longer detected), the device will move the focus selector in accordance with movement of the contact on the touch-sensitive surface without performing the operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
In some embodiments, the response of the device to an input detected by the device depends on criteria based on the contact intensity during the input. For example, for some "tap" inputs, the intensity of the contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to an input detected by the device depends on criteria that include both the contact intensity during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of the contact exceeding a second intensity threshold, greater than the first intensity threshold of a light press, triggers a second response only if a delay time has elapsed between the first intensity threshold being met and the second intensity threshold being met during the input. The duration of the delay time is typically less than 200 ms (milliseconds) (e.g., 40 ms, 100 ms, or 120 ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps to avoid accidentally recognizing deep press inputs. As another example, for some "deep press" inputs, a period of reduced sensitivity occurs after the first intensity threshold has been met. During this period of reduced sensitivity, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detection of a deep press input does not depend on time-based criteria.
In some implementations, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, touch motion, input timing, the application that is running, the rate at which intensity is applied, the number of simultaneous inputs, user history, environmental factors (e.g., ambient noise), focus selector position, and the like. Exemplary factors are described in U.S. patent application Ser. Nos. 14/399,606 and 14/624,296, which are incorporated herein by reference in their entirety.
For example, in some implementations, a dynamic intensity threshold varies over time based in part on the intensity of the touch input over time. The dynamic intensity threshold is a sum of two components: a first component that decays over time after a predefined delay time p1 from when the touch input is initially detected, and a second component that tracks the intensity of the touch input over time. The initial high intensity threshold of the first component reduces accidental triggering of a "deep press" response, while still allowing an immediate "deep press" response if the touch input provides sufficient intensity. The second component reduces unintentional triggering of a "deep press" response by gradual intensity fluctuations of the touch input. In some implementations, a "deep press" response is triggered at the point in time when the touch input satisfies the dynamic intensity threshold.
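The two-component threshold described above can be sketched as follows. All numeric constants (initial value, decay rate, tracking fraction, delay p1) are illustrative assumptions; the text specifies only the qualitative behavior.

```python
import math

def dynamic_threshold(t, current_intensity, p1=0.1,
                      initial=2.0, decay_rate=4.0, tracking=0.5):
    """Sketch of the two-component dynamic intensity threshold described above.
    All numeric constants are illustrative assumptions.
    - first component: starts at `initial` and decays exponentially after a
      predefined delay time p1 from when the touch input is initially detected
    - second component: tracks a fraction of the current touch intensity"""
    first = initial * math.exp(-decay_rate * max(0.0, t - p1))
    second = tracking * current_intensity
    return first + second

# The threshold starts high (reducing accidental "deep press" triggers) and
# decays over time toward the component tracking the input's own intensity.
print(dynamic_threshold(0.0, 1.0))  # 2.5
print(round(dynamic_threshold(2.0, 1.0), 3))
```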
In another example, in some embodiments, a dynamic intensity threshold (e.g., intensity threshold ITD) is used in conjunction with two other intensity thresholds (a first intensity threshold ITH and a second intensity threshold ITL). In some implementations, although the touch input satisfies the first intensity threshold ITH and the second intensity threshold ITL before time p2, no response is provided until the delay time p2 has elapsed. In some embodiments, the dynamic intensity threshold decays over time, with the decay starting at a point in time after a predefined delay time p1 has elapsed from the point in time at which the response associated with the second intensity threshold ITL was triggered (e.g., time p2). This kind of dynamic intensity threshold reduces accidental triggering of the response associated with the dynamic intensity threshold ITD immediately after, or concurrently with, triggering of the response associated with a lower intensity threshold (such as the first intensity threshold ITH or the second intensity threshold ITL).
In another example, in some embodiments, a response associated with the intensity threshold ITL is triggered after a delay time p2 has elapsed from when the touch input is initially detected. Concurrently, a dynamic intensity threshold (e.g., intensity threshold ITD) decays after a predefined delay time p1 has elapsed from when the touch input is initially detected. Thus, after triggering the response associated with the intensity threshold ITL, a decrease in the intensity of the touch input, followed by an increase in the intensity of the touch input without releasing the touch input, can trigger the response associated with the intensity threshold ITD, even when the intensity of the touch input is below another intensity threshold (e.g., the intensity threshold ITL).
An increase in the characteristic intensity of a contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase in the characteristic intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase in the characteristic intensity of the contact from an intensity below the contact detection intensity threshold IT0 to an intensity between the contact detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch surface. A decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold IT0 to an intensity below the contact detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, a shaded circle or oval is used to represent the intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or oval without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.
In some implementations described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input, or in response to detecting a respective press input performed with a respective contact (or contacts), where the respective press input is detected based at least in part on detecting an increase in the intensity of the contact (or contacts) above a press input intensity threshold. In some implementations, the respective operation is performed in response to detecting that the intensity of the respective contact increases above the press input intensity threshold (e.g., the respective operation is performed on a "downstroke" of the respective press input). In some implementations, the press input includes an increase in the intensity of the respective contact above the press input intensity threshold and a subsequent decrease in the intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in the intensity of the respective contact below the press input intensity threshold (e.g., the respective operation is performed on an "upstroke" of the respective press input).
In some implementations, the device employs intensity hysteresis to avoid accidental inputs, sometimes referred to as "jitter," in which the device defines or selects a hysteresis intensity threshold that has a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some other reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting that the intensity of the respective contact subsequently decreases below the hysteresis intensity threshold (e.g., the respective operation is performed on an "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in contact intensity from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and, optionally, a subsequent decrease in contact intensity to an intensity at or below the hysteresis intensity threshold, and the respective operation is performed in response to detecting the press input (e.g., the increase in contact intensity or the decrease in contact intensity, depending on the circumstances).
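The hysteresis scheme described above can be sketched as follows; this is a minimal illustrative model, not the device's actual implementation, and the threshold values, the 75% hysteresis ratio, and the function name are assumptions drawn from the examples in the text.

```python
# Sketch of press-input detection with a hysteresis threshold: a press
# "down" event fires when intensity rises above the press input intensity
# threshold, and the matching "up" event fires only when intensity falls
# below the lower hysteresis threshold, so small fluctuations ("jitter")
# near the press threshold do not generate spurious events.

def detect_press_events(intensities, press_threshold=1.0, hysteresis_ratio=0.75):
    """Return the list of 'down'/'up' events for a sequence of intensity samples."""
    hysteresis_threshold = press_threshold * hysteresis_ratio
    pressed = False
    events = []
    for intensity in intensities:
        if not pressed and intensity >= press_threshold:
            pressed = True
            events.append("down")  # intensity rose above the press threshold
        elif pressed and intensity <= hysteresis_threshold:
            pressed = False
            events.append("up")    # intensity fell below the hysteresis threshold
    return events

# A dip to 0.9 (between the two thresholds) does not end the press:
print(detect_press_events([0.2, 1.1, 0.9, 1.2, 0.5]))  # → ['down', 'up']
```

Without the hysteresis threshold, the dip to 0.9 in the example above would be interpreted as a release followed by a second press.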
For ease of explanation, the descriptions of operations performed in response to a press input associated with a press input intensity threshold, or in response to a gesture including a press input, are optionally triggered in response to detecting any of the following: the intensity of the contact increasing above the press input intensity threshold, the intensity of the contact increasing from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the intensity of the contact decreasing below the press input intensity threshold, or the intensity of the contact decreasing below the hysteresis intensity threshold corresponding to the press input intensity threshold. In addition, in examples where an operation is described as being performed in response to detecting a decrease in the intensity of the contact below the press input intensity threshold, the operation is optionally performed in response to detecting a decrease in the intensity of the contact below a hysteresis intensity threshold that corresponds to, and is lower than, the press input intensity threshold. As described above, in some embodiments, the triggering of these operations also depends on the satisfaction of a time-based criterion (e.g., a delay time has elapsed between the satisfaction of a first intensity threshold and the satisfaction of a second intensity threshold).
User interface and associated process
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented on an electronic device, such as portable multifunction device 100 or device 300, having a display, a touch-sensitive surface, and (optionally) one or more sensors for detecting the intensity of contact with the touch-sensitive surface.
Fig. 5A-5D illustrate an exemplary user interface having an affordance (e.g., home affordance) that indicates a gesture start region on a touch-sensitive display screen for a navigation gesture (e.g., a gesture for navigating to a home screen user interface) in accordance with some embodiments. In some embodiments, the methods shown herein are also used to display affordances for controlling or providing guidance regarding other functions or operations of the device. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 6A-6C, 7A-7E, and 8A-8F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having a touch sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., the center of gravity of the respective contact or a point associated with the respective contact), or the center of gravity of two or more contacts detected on the touch-sensitive display system 112.
For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device without a home button, where a gesture that meets predefined criteria is used to dismiss the currently displayed user interface and display a home screen user interface. Although not shown in fig. 5A-5D, in some embodiments, a home button (e.g., a mechanical button, a solid-state button, or a virtual button) is included on the device and is used to dismiss the currently displayed user interface and display the home screen user interface (e.g., in response to a single press input) and/or to display a multitasking user interface (e.g., in response to a double press input).
In fig. 5A-5D, when the device displays any user interface, a gesture starting from the bottom of the screen (e.g., within a predefined region of the device that is adjacent to an edge of the display, such as an edge region that includes a predefined portion (e.g., 20 pixels wide) of the display near the bottom edge of the device) invokes a user interface navigation process, and optionally directs navigation between multiple user interfaces based on the speed and direction of the input, and optionally based on movement parameters and characteristics of the currently displayed user interface objects (e.g., a scaled-down representation of the currently displayed user interface).
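The bottom-edge hit test described above can be sketched as follows; the 20-pixel edge height comes from the example in the text, while the coordinate convention (y = 0 at the top of the screen) and the function name are illustrative assumptions.

```python
# Sketch of the reactive edge region test: a touch may begin the
# navigation gesture only if it lands within the predefined strip
# (assumed 20 px tall) along the bottom edge of the display.

def in_navigation_gesture_region(touch_y, screen_height, edge_height=20):
    """True if the touch y-coordinate falls within the bottom edge region."""
    return touch_y >= screen_height - edge_height

print(in_navigation_gesture_region(2430, 2436))  # → True (near the bottom edge)
print(in_navigation_gesture_region(1200, 2436))  # → False (mid-screen scroll)
```

A touch outside this strip is routed to the foreground application (e.g., as a scroll input), as in the scrolling examples of fig. 5H and 5L-5M.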
An exemplary user interface of an application operating on an electronic device includes an affordance (e.g., home affordance 5002) that provides visual guidance to a user regarding: the location of the edge region from which a navigation gesture may be initiated; and optionally, whether navigation is limited in the current mode of operation of the currently displayed application (e.g., absence of the home affordance indicates that navigation is limited and that a confirmation input or an enhanced navigation gesture (e.g., a swipe up after a press input, or a swipe up after a touch-and-hold input) is required to navigate between user interfaces (e.g., as shown in fig. 5X-5AA, where an initial input is required to redisplay the affordance before a subsequent navigation gesture can be recognized)). In some implementations, the home affordance is not activatable or does not respond directly to touch input in a manner similar to a virtual button. In some embodiments, a home affordance or other affordance implemented using the methods described herein is responsive to touch input that includes contacts directly on the affordance.
Fig. 5A shows a web browsing user interface displaying the contents of web page 5004. The Home affordance 5002 is displayed overlaying a portion of the content displayed near the bottom edge of the touch screen 112. The user interface navigation process is activated by contact 5006, which begins at a location below, on or near home affordance 5002 and travels upward from the bottom of the screen, as shown in fig. 5A-5B.
In fig. 5B, the web browsing user interface is replaced with a card 5008 representing the web browser user interface in fig. 5A. In fig. 5A-5C, as the input moves up on the screen, the card 5008 dynamically shrinks, displaying a blurred view 5010 of the home screen in the background. In some implementations, the amount of blurring applied to the home screen dynamically varies according to the distance of the contact 5006 from the bottom of the screen.
In some embodiments, as shown in fig. 5B, home affordance 5002 ceases to be displayed when the user interface navigation process is activated by the input through contact 5006 (e.g., home affordance 5002 is not directly manipulable by touch input and ceases to be displayed once it has served its purpose of providing visual guidance to the user regarding navigation gestures). When the size of card 5008 is sufficiently small, other cards (e.g., cards 5012 and 5014, representing a system control panel user interface and a user interface of a recently opened application, respectively) are displayed next to card 5008. Optionally, terminating the input (e.g., lifting off contact 5006) while multiple cards are displayed causes the device to enter a multitasking mode (e.g., displaying an application switcher user interface that allows the user to select an application to replace the web browser application as the foreground application).
In fig. 5C, when contact 5006 continues to move upward and predefined home navigation criteria are met (e.g., predefined characteristics of contact 5006 (e.g., location, speed, etc.) meet predefined thresholds), the other cards on the screen cease to be displayed and only card 5008 remains on the screen. When the input is terminated (e.g., lift-off of contact 5006 is detected) while only card 5008 is displayed (e.g., similar to the user interface state shown in fig. 5C), the device displays home screen 5016 as shown in fig. 5D (e.g., the dashed oval represents the lift-off position of contact 5006). When home screen 5016 is displayed on touch screen 112, home affordance 5002 is not displayed on the touch screen.
Since in many scenarios (e.g., when a different application or another system-level user interface is displayed (e.g., a notification center user interface, a cover sheet user interface, a control panel user interface, etc.)) the currently displayed user interface needs to be dismissed in favor of the home screen, the home affordance 5002 needs to be displayed over all kinds of content, which may be static or dynamic and may change spontaneously or in response to user manipulation over time. In addition, the user is given a great degree of freedom in deciding where along the bottom edge region of the touch screen to begin the navigation gesture; therefore, home affordance 5002 is designed with a large horizontal span, to indicate the expansiveness of the reactive region for the gesture, and with a relatively small height, to avoid cluttering the screen and unnecessarily distracting the user. Consequently, even if the underlying content is static over time, the portion of the content below home affordance 5002 may include variations in color, luminance, and other display properties across different parts of that portion of the content. Thus, in some embodiments, the appearance of each sub-portion of the affordance (e.g., each pixel or each small cluster of pixels) is determined separately based on the appearance of the content directly beneath that sub-portion of the affordance (and optionally, the appearance of the content extending slightly beyond the boundaries of that sub-portion, e.g., through blurring or averaging effects applied to the content or the affordance). As shown in fig. 5A, home affordance 5002 is displayed over a portion of the content 5004 of a web page. That portion of the content includes regions having different luminance levels, and the resulting home affordance 5002 also includes luminance variations along its length (e.g., its horizontal extent).
In some embodiments, as shown in fig. 5E, a plurality of image processing filters are applied (e.g., in sequence, or in another order) to the background content below the affordance to determine the appearance of the affordance. For example, an original full-color image of the content is desaturated to obtain a luminance map of the content. The luminance of the content is then inverted (e.g., according to a predefined inversion relationship between the luminance values of the background and the luminance values of the affordance (e.g., one of the inversion relationships shown in fig. 5F, 5R, 5AE, etc.)) to obtain the luminance value of the affordance at each pixel of the affordance. The inverse relationship between the luminance of the affordance and the luminance of the underlying content is used here as an example of a correspondence between values of a selected display property of the affordance and of the underlying content. Other types of display properties, such as gray values or brightness, may also be used in various embodiments.
As shown in fig. 5E, the inversion creates an appearance contrast between the affordance and the underlying content. When a portion of the underlying content is brighter (e.g., has a higher luminance value), the corresponding portion of the affordance is darker (e.g., has a lower luminance value). For example, the inversion is performed on different portions of the desaturated background content having different luminance values (e.g., the portions surrounded by the circles labeled 1, 2, 3, and 4 in the desaturated strip) to obtain corresponding portions of the affordance having different luminance values (e.g., the portions surrounded by the circles labeled 1, 2, 3, and 4 in the inverted strip). Fig. 5F illustrates an exemplary inversion curve for generating affordance luminance values from corresponding background luminance values. The values of the corresponding portions of the content and the affordance marked with circles in fig. 5E are marked with the circles labeled 1, 2, 3, and 4 in fig. 5F. The shades of the affordance portions are also reproduced in the circles labeled 1, 2, 3, and 4 in fig. 5F. In some implementations, after the inversion is performed, a thresholding process is applied to the luminance values to reduce their dynamic range. For example, the luminance value of each pixel of the affordance is limited to 50% of the maximum affordance luminance to produce a softer appearance with lower internal visual contrast (e.g., compare the inverted affordance with the thresholded affordance). In some embodiments, to further reduce internal variations and contrast within the affordance, a blur filter is applied to average the luminance variations across multiple adjacent pixels in the content and, correspondingly, across the affordance. As a result, the final affordance has softened luminance variations along its length that correspond to the luminance variations in the underlying content.
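The filter chain described above (desaturation, inversion, thresholding, blurring) can be sketched as follows. The Rec. 709 luma weights, the 50% cap, the box-blur radius, and all function names are illustrative assumptions, not the device's actual parameters.

```python
# Sketch of the per-pixel affordance appearance pipeline of fig. 5E:
# desaturate -> invert -> threshold -> blur, for one row of background pixels.

def desaturate(rgb):
    """Approximate luminance of an RGB pixel in [0, 1] (Rec. 709 weights)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def invert(luma):
    """Simple luminance inversion: brighter content -> darker affordance."""
    return 1.0 - luma

def threshold(luma, cap=0.5):
    """Limit luminance to 50% of the maximum, softening internal contrast."""
    return min(luma, cap)

def box_blur(values, radius=1):
    """Average each value with its neighbors to smooth luminance variations."""
    n = len(values)
    return [
        sum(values[max(0, i - radius): min(n, i + radius + 1)])
        / len(values[max(0, i - radius): min(n, i + radius + 1)])
        for i in range(n)
    ]

def affordance_row(background_row):
    """Compute affordance luminances for one row of background RGB pixels."""
    inverted = [threshold(invert(desaturate(px))) for px in background_row]
    return box_blur(inverted)

# White, black, and mid-gray background pixels; every output stays at or
# below the 0.5 cap, and neighboring values are averaged together.
print(affordance_row([(1.0, 1.0, 1.0), (0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]))
```

Note that the blur runs after the thresholding here, matching the left-to-right order of the strips in fig. 5E; as the text says, the ordering of the filters is not limited to this arrangement.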
While even a simple inversion of the luminance values will in most cases produce sufficient contrast between the affordance and different backgrounds, using the full range of luminance values for the affordance typically results in a harsher appearance that may distract the user. Therefore, it is advantageous to constrain the range of luminance values of the affordance to a sub-range of the range of luminance values of the content. In addition, depending on the overall brightness level of the underlying content, the affordance's luminance value range is constrained either to a "dark" affordance value range or to a "light" affordance value range, resulting in a "dark" affordance or a "light" affordance. In some embodiments, the affordance appearance type (e.g., "dark" or "light") does not change after the affordance is initially displayed, even if the appearance of the underlying content changes from very dark to very bright, or vice versa (as shown in fig. 5G-5P). In some implementations, the affordance appearance type does not change in response to transient changes in the content (e.g., a reversal of the content's brightness level for a short period of time), but does eventually change in response to more permanent changes in the content (e.g., a reversal of the content's brightness level maintained over a longer time scale). In some embodiments, the affordance appearance type (e.g., "light" or "dark", or a particular appearance value range for the affordance) is selected based on the initial brightness level of the underlying content when the affordance is first displayed, and is maintained until a context switch event occurs (e.g., switching between applications, between an application and a system user interface, or between two system user interfaces, etc.), at which point the affordance appearance type is redetermined based on the underlying content in the new context. As an example, fig. 5R shows exemplary inversion relationships for the "light" affordance and the "dark" affordance (e.g., the curves labeled "LA" and "DA", respectively), where the content luminance value range (e.g., values along the horizontal axis) is the entire range from black to white (e.g., gray values [0, 1] of a grayscale image, luminance values [0, 255] of a color image, or luminance or another similar display property [0, 100%]), and the affordance luminance value range (e.g., values along the vertical axis) is constrained to an upper sub-range (e.g., the value range of the "light" affordance) or a lower sub-range (e.g., the value range of the "dark" affordance). In some embodiments, the two value ranges do not overlap (e.g., they are separated by a gap). As shown in fig. 5R, both curves (e.g., curve LA and curve DA) indicate that an increase in the brightness of the content results in a decrease in the brightness of the affordance.
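The constraints on the two curves in fig. 5R can be sketched as a hypothetical pair of linear inversion relationships. The exact endpoints ([0.6, 1.0] for the light affordance, [0.0, 0.4] for the dark affordance) are assumptions chosen to match the example thresholds mentioned later in the text; the actual curves need not be linear.

```python
# Hypothetical inversion curves matching fig. 5R's constraints: both map
# content luminance in [0, 1] to a decreasing affordance luminance, with the
# light-affordance range entirely above, and not overlapping, the
# dark-affordance range.

def light_affordance_luma(content_luma):
    """Curve LA: maps content luminance [0, 1] into the upper range [0.6, 1.0]."""
    return 1.0 - 0.4 * content_luma

def dark_affordance_luma(content_luma):
    """Curve DA: maps content luminance [0, 1] into the lower range [0.0, 0.4]."""
    return 0.4 * (1.0 - content_luma)
```

Both functions are decreasing, so for either appearance type brighter content yields a darker affordance, and for any given background the dark affordance is always darker than the light affordance (the gap between 0.4 and 0.6 is never entered).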
Fig. 5F illustrates an exemplary luminance inversion curve 5017 for performing the inversion illustrated in fig. 5E, in accordance with some embodiments. In this example, the affordance value range, constrained between upper and lower limits, is greater than half of the background value range, and does not include pure black (e.g., value = 0) or pure white (e.g., value = 1). In some embodiments, the inversion curve is continuous and does not include any discontinuities.
Fig. 5G-5K illustrate changes in the appearance of an affordance (e.g., affordance 5002-DA) of a first affordance appearance type (e.g., the "dark" affordance type), in accordance with some embodiments. Fig. 5G-5K illustrate scrolling of content 5018 shown in the web browser user interface displayed on the touch screen 112. As the content 5018 scrolls, the portion of the content located below the affordance 5002-DA displayed near the bottom edge of the touch screen changes. In other words, during scrolling of the content 5018, different portions of the content 5018 move underneath the affordance 5002-DA.
As shown in fig. 5G, in some embodiments, when the affordance 5002-DA is initially displayed (e.g., when the web browser application is opened and the web browser interface is initially displayed with content 5018), the overall brightness level of the portion of content 5018 below the affordance is evaluated and an appropriate affordance appearance type is selected for the affordance. In this particular example, the portion of the content 5018 located below the affordance is relatively dark (e.g., the overall brightness level is below a predefined brightness threshold), and the affordance appearance type corresponding to the "darker" portion of the affordance luminance value range (e.g., the range [0, 0.4]), i.e., the "dark" affordance type, is selected for the affordance. For example, the affordance is generated using a set of filters as shown in fig. 5E, and the inversion relationship used is the one for the dark affordance appearance type (e.g., curve DA shown in fig. 5R).
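The appearance-type selection described above can be sketched as follows; the 0.5 cutoff and the use of a simple mean as the "overall brightness level" are illustrative assumptions (the text says only that the level is compared against a predefined brightness threshold).

```python
# Sketch of one-time appearance-type selection: evaluated when the
# affordance is first displayed (or after a context switch event), based on
# the overall luminance of the content beneath it.

def select_affordance_type(content_lumas, brightness_threshold=0.5):
    """Return 'dark' for content below the brightness threshold (as with
    affordance 5002-DA in fig. 5G), otherwise 'light' (as with 5002-LA in
    fig. 5L)."""
    overall = sum(content_lumas) / len(content_lumas)
    return "dark" if overall < brightness_threshold else "light"

print(select_affordance_type([0.1, 0.2, 0.3]))  # → dark
print(select_affordance_type([0.9, 0.8, 0.7]))  # → light
```

Once selected, this type (and hence the inversion curve DA or LA) stays fixed while the content scrolls, which is why the affordance in fig. 5G-5K stays within the dark value range even over a bright background.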
In fig. 5H, a scroll input (e.g., upward movement of contact 5020 on the touch screen) is detected on the touch screen that begins outside (e.g., above) the reactive region for the user interface navigation gesture (e.g., the home/multitasking gesture by contact 5006, as shown in fig. 5A-5D). The scroll input scrolls the content 5018 of the web page upward and causes a previously undisplayed portion of the content 5018 to arrive below the affordance 5002-DA. At the time depicted in fig. 5H, the portion of content 5018 located directly below affordance 5002-DA is fully white (e.g., a luminance value of 1 or 100%), and accordingly, the luminance value of affordance 5002-DA is fully black (e.g., a luminance value of 0 or 0%), as determined based on the inversion relationship for the dark affordance appearance type (e.g., as depicted by curve DA in fig. 5R).
Fig. 5I-5K illustrate that as the scroll input continues (e.g., contact 5020 continues to move upward, and then contact 5020 lifts off with a final velocity), content 5018 scrolls upward under affordance 5002-DA. The appearance of affordance 5002-DA varies depending on the portion of content 5018 that is currently under affordance 5002-DA. The brightness of affordance 5002-DA is determined based on the inversion relationship for the dark affordance appearance type (e.g., as depicted by curve DA in fig. 5R).
Specifically, at the time depicted in fig. 5J, the portion of content 5018 directly below affordance 5002-DA is fully white (e.g., a luminance value of 1 or 100%) on the left and fully black (e.g., a luminance value of 0 or 0%) on the right. Accordingly, the left half of affordance 5002-DA is fully black (as it is in fig. 5H), while the right half of affordance 5002-DA is not fully white; instead, the right half of affordance 5002-DA is gray (e.g., an affordance luminance value greater than 0 (e.g., 0.4)), as determined based on the inversion relationship for the dark affordance appearance type (e.g., as depicted by curve DA in fig. 5R). In other words, the luminance values of the dark affordance (e.g., 5002-DA) are constrained to a range below a maximum luminance threshold (e.g., 0.4).
Fig. 5L-5P illustrate changes in appearance of affordances of a second affordance appearance type (e.g., a "bright" affordance type) in accordance with some embodiments.
Fig. 5L to 5P illustrate scrolling of the content 5018 shown in the web browser user interface displayed on the touch screen 112. The scrolling is the reverse of the scrolling depicted in fig. 5G-5K.
As shown in fig. 5L, in some embodiments, when the affordance 5002-LA is initially displayed (e.g., when the web browser application is opened and the web browser interface is initially displayed with content 5018), the overall brightness level of the portion of content 5018 below the affordance is evaluated and an appropriate affordance appearance type is selected for the affordance. In this particular example, the portion of the content 5018 located below the affordance is relatively bright (e.g., the overall brightness level is above a predefined brightness threshold), and the affordance appearance type corresponding to the "brighter" portion of the affordance luminance value range (e.g., the range [0.6, 1]), i.e., the "light" affordance type, is selected for the affordance. For example, the affordance is generated using a set of filters as shown in fig. 5E, and the inversion relationship used is the one for the light affordance appearance type (e.g., curve LA shown in fig. 5R).
In fig. 5L-5M, a scroll input is detected on the touch screen that begins outside (e.g., above) the reaction area of the user interface navigation gesture (e.g., home/multitasking gesture by contact 5006, as shown in fig. 5A-5D) (e.g., move down on the touch screen by contact 5022). The scroll input scrolls down the content 5018 of the web page and brings the upper portion of the content 5018 below the affordance 5002-LA.
At the time depicted in fig. 5M, the portion of content 5018 directly below affordance 5002-LA is fully white (e.g., a luminance value of 1 or 100%) on the left and fully black (e.g., a luminance value of 0 or 0%) on the right. Accordingly, the right half of affordance 5002-LA is fully white, while the left half of affordance 5002-LA is not fully black; instead, the left half of affordance 5002-LA is gray (e.g., an affordance luminance value greater than 0 (e.g., 0.6)), as determined based on the inversion relationship for the light affordance appearance type (e.g., as depicted by curve LA in fig. 5R). In other words, the luminance values of the light affordance are constrained to a range above a minimum luminance threshold (e.g., 0.6). In some embodiments, as depicted in figs. 5J and 5M, the overall appearance of the dark affordance 5002-DA is darker than the overall appearance of the light affordance 5002-LA over the same background.
Fig. 5N-5P illustrate that as the scroll input continues (e.g., contact 5022 continues to move downward, and then contact 5022 lifts off with a final velocity), content 5018 scrolls downward under affordance 5002-LA. The appearance of affordance 5002-LA varies depending on the portion of content 5018 that is currently under affordance 5002-LA. The brightness of affordance 5002-LA is determined based on the inversion relationship for the light affordance appearance type (e.g., as depicted by curve LA in fig. 5R).
At the time depicted in fig. 5O, the portion of content 5018 located directly below affordance 5002-LA is fully white (e.g., a luminance value of 1 or 100%); accordingly, affordance 5002-LA is not fully black. Instead, affordance 5002-LA is gray (e.g., a luminance value greater than 0 (e.g., 0.6)), as determined based on the inversion relationship for the light affordance appearance type (e.g., as depicted by curve LA in fig. 5R). In other words, the luminance values of the light affordance are constrained to a range above a minimum luminance threshold (e.g., 0.6).
Fig. 5Q illustrates the difference in appearance between the two affordance appearance types (e.g., LA and DA) of affordance 5002 over the same changing background (e.g., content 5018), in accordance with some embodiments.
Fig. 5Q lists the appearance of affordance 5002 for each of the states shown in fig. 5G-5P. These states are divided into five groups, each group corresponding to a respective state of the content 5018 shown in the web browser user interface. From top to bottom, the five groups correspond to: (i) fig. 5G and 5P, (ii) fig. 5H and 5O, (iii) fig. 5I and 5N, (iv) fig. 5J and 5M, and (v) fig. 5K and 5L.
As shown in fig. 5Q, for each group corresponding to a respective content state, the dark affordance appearance type has an overall darker appearance (e.g., lower overall luminance) than the light affordance appearance type (e.g., compare the DA version and the LA version of affordance 5002 below the same content strip).
Fig. 5R illustrates the value ranges and inversion curves relating a display property (e.g., luminance or gray value) of the affordance to that of the underlying content, for the dark affordance appearance type and the light affordance appearance type, in accordance with some embodiments.
The difference in appearance shown in fig. 5Q is also reflected in fig. 5R, where the affordance luminance value range of the light affordance is entirely above the affordance luminance value range of the dark affordance, and the two value ranges optionally do not overlap.
Fig. 5S-5 AA illustrate user interfaces including affordances having appearance changes responsive to a background and operating mode changes associated with the background user interfaces, in accordance with some embodiments.
In fig. 5S, a web browser application is launched and a web browser user interface 5024 is displayed on the touch screen. In this example, the web browser user interface is displayed in a landscape orientation, depending on the orientation of the device 100. A home affordance 5002 is displayed near a bottom edge of the touch screen in a first state (e.g., a fully visible state/high-contrast state 5002-a). The affordance appearance type of affordance 5002 is optionally selected based on an initial overall brightness level of a portion of web page content that is located below affordance 5002.
Fig. 5S-5T illustrate selecting a media item (e.g., a movie clip "Live Bright") for playback (e.g., in fig. 5S, tap input is made on a playback icon associated with the media item by contact 5026). In response to selecting the media item, the media player application is launched and a user interface (e.g., user interface 5028) of the media player application is displayed on the touch screen. In fig. 5T, the media player application is operating in a first mode (e.g., full screen mode with controls displayed, or interactive mode). When media playback has just been initiated, the user interface 5028 includes a plurality of control regions that overlie the media playback region (e.g., media content that occupies substantially the entire screen), including various controls such as a media scrubber, a "done" button for closing the media player application and returning to the web browser application user interface 5024, a volume slider control, a rewind control, a pause/play control, and a fast forward control. These controls are initially displayed on the media playback area because the user, after first seeing how the media content looks or sounds, will likely want to adjust the default starting position or volume selected by the device or return to the previous application. In some embodiments, upon switching from the web browser user interface 5024 to the media player user interface 5028, the context switch event is recorded by the device and the affordance appearance type of the affordance 5002 is redetermined based on the initial overall brightness level of the portion of the media content that is located below the affordance 5002 when media playback is first initiated. Regardless of whether the affordance appearance type is redetermined, the affordance 5002 is initially displayed in a fully-viewable state (e.g., a high-contrast state) on the user interface 5028.
Fig. 5U indicates that the control regions remain visible over the media content for a first predetermined amount of time (e.g., 10 seconds) after media playback is initiated, and the affordance 5002 remains in the fully visible state (e.g., full-contrast state) on the user interface 5028. During this time, the appearance of affordance 5002 is determined in accordance with a first set of rules. In some embodiments, the first set of rules includes a set of filters (such as those shown in fig. 5E) and an inversion relationship (such as those shown in fig. 5R), where a first set of parameters for the filters and/or the shapes of one or more inversion curves are optionally preselected for the first state of the affordance. In fig. 5S-5U, the affordance 5002-A is a gray affordance without color information, even though the underlying content is in full color.
In fig. 5V, the first predetermined amount of time (e.g., 10 seconds) after media playback is initiated has expired. The control regions cease to be displayed over the media content in response to the expiration of the first predetermined amount of time. The control regions may fade out instantaneously or through a short animation. The expiration of the first predetermined amount of time and/or the removal of the control regions from the media player user interface indicates that the media player user interface is now operating in a second mode (e.g., a full-screen display mode without displayed controls, or a media consumption mode). In addition, in response to the expiration of the first predetermined amount of time, the device determines the appearance of affordance 5002 in accordance with a second set of rules different from the first set of rules. In other words, the affordance is displayed in a second state (e.g., semi-visible state/low-contrast state 5002-A'). In some embodiments, the second set of rules includes a set of filters (such as those shown in fig. 5E) and an inversion relationship (such as those shown in fig. 5R), where a second set of parameters for the filters and/or the shapes of one or more inversion curves are optionally preselected for the second state (e.g., the semi-visible state/low-contrast state 5002-A') of affordance 5002. In some embodiments, in the low-contrast state, affordance 5002 retains some of the color of the underlying content. For example, instead of fully desaturating the underlying content to obtain a luminance map of the content, only 70% of the color value (e.g., RGB values) of each pixel is desaturated, and 30% of the color information of each pixel in the underlying content is retained in the final appearance of the affordance. In some embodiments, the transparency level of the affordance is adjusted such that the affordance is not completely opaque and some color information of the underlying content shows through the pixels of the affordance.
In some embodiments, after the brightness inversion is performed, the remaining color saturation of the affordance is increased (e.g., by 30%) to make the affordance appear slightly more vivid and thus blend better with the background. The reduced visibility or contrast of the second state of affordance 5002 reflects the reduced likelihood that the user will want to interact with any control after an initial period following the start of media playback. In some embodiments, the transition from the first state to the second state is optionally a gradual and continuous transition through a plurality of intermediate states between the first state and the second state, as opposed to a sudden and discrete transition. The gradual transition is less likely to distract the user from viewing the media content.
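The partial desaturation described above (keeping 30% of each pixel's color while blending 70% toward its luminance) can be sketched as follows. This is a minimal illustration, not the patent's implementation; it assumes linear RGB values in [0, 1] and Rec. 709 luma weights, and the function name is invented:

```python
def partially_desaturate(rgb, keep_color=0.30):
    """Blend each channel 70% toward the pixel's luminance, keeping
    30% of the original color information (illustrative sketch)."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luminance
    return tuple(keep_color * c + (1.0 - keep_color) * luma
                 for c in (r, g, b))
```

A neutral gray pixel is unchanged (its luminance equals every channel), while a saturated pixel is pulled most of the way toward gray.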
As shown in figs. 5V-5W, when affordance 5002 is in the second state (e.g., semi-visible state/low-contrast state 5002-A'), the appearance of affordance 5002 changes in accordance with changes in the content located below the affordance, based on the second set of rules.
In fig. 5X, after affordance 5002 has been in the second state for a second predetermined amount of time (e.g., 5 seconds), affordance 5002 transitions from the second state to an invisible state (e.g., a third state); in other words, the affordance fades out entirely and ceases to be displayed over the media content. In some embodiments, the transition from the second state to the third state is optionally a gradual and continuous transition through a plurality of intermediate states between the second state and the third state, as opposed to a sudden and discrete transition. The gradual transition is less likely to distract the user from viewing the media content.
Figs. 5Y-5AA illustrate that after affordance 5002 is no longer displayed over media content 5028, media playback continues until an input is detected (e.g., movement of the device, a tap or swipe input by a contact on the touch screen, a contact near a bottom edge region of the display, etc.). In response to detecting the input, the affordance 5002 is redisplayed over the media content 5028.
As shown in fig. 5Y, in response to movement of the device 100 (or another type of input, such as a tap, swipe, or touch-down near the bottom edge region of the display), the affordance 5002 is redisplayed (e.g., along with other control regions) over the media content 5028 in the second state (e.g., the semi-visible/low-contrast state 5002-A'). In some embodiments, the control region is not redisplayed in response to the input, and the device continues to operate in the full-screen display mode (without displayed controls) of the media player application. In some implementations, if another input (e.g., a tap input or swipe input) is not detected within a threshold amount of time, the affordance again ceases to be displayed. If the required input is detected within the threshold amount of time, the affordance is redisplayed in the first state and the media control region is also optionally redisplayed. The user interface returns to a first mode of operation of the media player application (e.g., a full-screen playback mode with controls displayed, or an interactive mode). In contrast to displaying affordance 5002 in the first state and immediately returning to the first mode of operation, initially providing affordance 5002 in the second state in response to a first input gives the user some indication of the location of the home affordance and the state of the user interface, while also taking into account that the input may be unintentional and that the user may not actually wish to be distracted from viewing the media content. If the user's intent is to use the controls and/or affordance 5002, a confirmation input from the user is required (e.g., a sustained touch by the same contact for a threshold amount of time, a press input by the same contact with a threshold press intensity, or a second tap input by another contact).
Figs. 5Z and 5AA illustrate that, in some embodiments, when affordance 5002 is not displayed (e.g., as shown in fig. 5X) or when affordance 5002 is displayed in the second state (e.g., translucent/low-contrast state 5002-A'), an input by a contact (e.g., contact 5030) is detected on the touch screen. In response to the input by contact 5030, affordance 5002 is displayed (if not already displayed) in the second state (e.g., semi-visible/low-contrast state 5002-A'). In addition, the media control region is also redisplayed on the media player user interface. In other words, the media player application returns to the first mode of operation. In some implementations, a sustained touch input near a bottom edge region of the display causes the device to first redisplay the affordance in the second state (e.g., when the contact is detected) and then redisplay the affordance in the first state (e.g., when the contact is maintained for more than a threshold amount of time with less than a threshold amount of movement since touch-down). In some embodiments, after redisplaying the affordance in the first state, the device recognizes a navigation gesture when movement of the contact (without lift-off of the contact) is detected. In some embodiments, instead of requiring the contact to be held substantially stationary for a threshold amount of time to redisplay the affordance in the first state, the device requires the intensity of the contact to exceed a predefined press intensity threshold with less than a threshold amount of movement of the contact.
In fig. 5AA, in response to the media player application returning to the first operational state (e.g., the interactive state), the affordance 5002 also transitions from the second state (e.g., semi-visible state/low-contrast state 5002-A') back to the first state (e.g., fully-visible state/high-contrast state 5002-A). In some embodiments, the transition from the second state to the first state is optionally an abrupt, discrete transition, as opposed to a gradual transition through a plurality of intermediate states between the second state and the first state. The abrupt transition is more likely to alert the user that the mode of operation of the user interface has changed, and it shortens the time the user must wait to access the controls and provide subsequent input. In some embodiments, when a confirmation input is detected (e.g., a second tap after the affordance is displayed in the second state in response to a first tap, or a sustained touch by the same contact that triggered display of the affordance in the second state), the user interface transitions directly from the state shown in fig. 5Y to the state shown in fig. 5AA, skipping the state shown in fig. 5Z.
Figs. 5AB-5AC illustrate differences in the appearance of an affordance displayed over a background in different modes of operation, in accordance with some embodiments.
In fig. 5AB, the first state 5002-A of the affordance is opaque and is used when the media player user interface is operating in a first operational state (e.g., an interactive state) and media controls are displayed over the media content. The second state 5002-A' is a translucent state and is used when the media player user interface is operating in a second operational state (e.g., a protected state or a media consumption state) and no media controls are displayed over the media content. The third state is a state in which the affordance is no longer displayed and the user interface continues to operate in the second operational state (e.g., the protected state or media consumption state). The affordance 5002 passes through these states when no input is received within the predetermined amounts of time after media playback is started (e.g., in full-screen or landscape mode).
In fig. 5AC, the affordance starts in an invisible state, i.e., it is not displayed over the media content (e.g., after affordance 5002 ceased to be displayed due to the absence of user input). The affordance is displayed in the second state in response to a first input, or a first portion of an input, while the media player user interface remains in the second operational state (e.g., the protected state or media consumption state). The affordance is then displayed in the first state 5002-A in response to a second input, or a second portion of the input, and the user interface returns to the first operational state (e.g., the interactive state). In the first state 5002-A, the affordance is opaque. In the second state 5002-A', the affordance is translucent.
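The state progression described for figs. 5AB-5AC — full-contrast, then low-contrast, then hidden in the absence of input, with inputs restoring the states one step at a time — can be sketched as a small state machine. This is a minimal illustration; the class and state names are invented, and the 10-second and 5-second timings follow the examples above:

```python
FULL, DIM, HIDDEN = "full-contrast", "low-contrast", "hidden"

class AffordanceStateMachine:
    """Tracks the affordance through the three states described above
    (illustrative sketch, not the patent's implementation)."""

    def __init__(self, t_full=10.0, t_dim=5.0):
        self.state = FULL      # affordance starts fully visible
        self.elapsed = 0.0
        self.t_full = t_full   # time before FULL decays to DIM
        self.t_dim = t_dim     # time before DIM decays to HIDDEN

    def tick(self, dt):
        """Advance time with no user input; states decay step by step."""
        self.elapsed += dt
        if self.state == FULL and self.elapsed >= self.t_full:
            self.state, self.elapsed = DIM, 0.0
        elif self.state == DIM and self.elapsed >= self.t_dim:
            self.state, self.elapsed = HIDDEN, 0.0

    def on_input(self):
        """A first input only restores the low-contrast state; a
        confirming input restores the full-contrast state."""
        if self.state == HIDDEN:
            self.state, self.elapsed = DIM, 0.0
        elif self.state == DIM:
            self.state, self.elapsed = FULL, 0.0
        else:
            self.elapsed = 0.0
```

As in fig. 5AC, an input while hidden brings the affordance back only to the dim state, and a further input restores full contrast.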
In some embodiments, the affordance is of a fixed appearance type and does not change when the underlying content changes. This provides a consistent affordance appearance that reduces distraction to the user. However, in some scenarios, when the content underlying the affordance varies greatly, a fixed affordance type may not provide adequate contrast against the underlying background after the content has changed from an overall dark tone to an overall light tone, or vice versa. Further, a switch in the content brightness level is sometimes short-lived (e.g., scrolling through black text lines on a white background), and in such cases switching the affordance type in response to such short-term changes may be inefficient, confusing, and distracting to the user. On the other hand, if the switch in the content brightness level is more permanent or long-term (e.g., flipping from one page (e.g., a page displaying the warm tones of an evening sky) to another (e.g., a page displaying a starry night scene)), keeping the affordance appearance type fixed may result in insufficient visibility of the affordance over an extended period of time.
To address the above, while still balancing the need to maintain visibility without unduly distracting the user, in some embodiments the device allows the affordance to switch its appearance type and, accordingly, shift the affordance appearance value range from one range of values to another when predefined range-switching criteria are met. In some implementations, the range-switching criteria are met when a measure of the overall brightness state of the content located below (and optionally around) the affordance (e.g., an accumulated or aggregated value of brightness values) exceeds a predefined threshold due to the appearance of the content changing over time. In some embodiments, the measure of the overall brightness state of the content takes into account the brightness level of the relevant portion of the content over a period of time (e.g., using a weighted running average) and also favors keeping the current affordance appearance type unchanged (e.g., this bias is optionally achieved by giving higher weights to older brightness levels of the content and lower weights to newer brightness levels of the content). Due to the cumulative effect and the bias toward the current affordance appearance type, two goals are both satisfied: keeping the affordance appearance stable relative to short-term changes in the content, and continuing to provide sufficient affordance visibility over the changing content.
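One way to realize such a biased, cumulative brightness metric is an exponential moving average with a small update weight, plus a threshold with hysteresis. This is an illustrative sketch only; the function names, the weight, and the thresholds are invented, not taken from the patent:

```python
def update_brightness_metric(metric, current_luma, alpha=0.05):
    """Exponential moving average; a small alpha keeps older brightness
    levels dominant, so brief content changes barely move the metric."""
    return (1.0 - alpha) * metric + alpha * current_luma

def range_switch_triggered(metric, selected_for_light_content,
                           dark_threshold=0.4, light_threshold=0.6):
    """Trigger a switch only when the accumulated metric has moved well
    into the opposite brightness regime (hysteresis favors keeping the
    currently selected appearance type)."""
    if selected_for_light_content:
        return metric < dark_threshold
    return metric > light_threshold
```

A few frames of dark content leave the metric nearly unchanged, while sustained dark content eventually drives it across the threshold and triggers the switch.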
FIG. 5AD illustrates a user interface including an affordance that dynamically switches between affordance appearance types based on underlying content over time, in accordance with some embodiments.
Fig. 5AD shows a simple example in which affordance 5002 begins as a dark affordance (e.g., initially displayed on a user interface in the state shown in fig. 5G). Then, the content located below affordance 5002 becomes relatively bright (e.g., the user interface reaches the state illustrated in fig. 5K), for example through scrolling. In this example, the appearance type of affordance 5002 did not change during the scrolling of the content, for example because the scrolling was relatively fast and the bias toward keeping the currently selected affordance appearance type was not yet overcome by the appearance changes of the content over that short period of time. After the scrolling of the content stops, the affordance remains overlaid on content that is now shown in a light state. Over time, the overall brightness state of the content gradually changes, and the brightness level of the current background gradually takes over and dominates the brightness level of the previously shown background (e.g., the background when the affordance was initially displayed or during the scrolling of the content). Finally, at time t1, the measure of the overall brightness state of the background exceeds a predefined threshold and the range-switch trigger criteria are met. In some embodiments, in response to detecting that the range-switch trigger criteria are met, the device immediately switches the affordance appearance type and displays the affordance according to the newly selected appearance type, for example as shown in the user interface on the right side of fig. 5AD. The appearance of the affordance is the same as that shown in fig. 5L, but in this example the user does not have to close the web browser application and restart it in order for the affordance to be displayed as a light affordance on content 5018.
In some embodiments, when the range-switch trigger criteria are met, the device begins a gradual transition from a first affordance appearance type (e.g., a dark affordance appearance type) to a second affordance appearance type (e.g., a light affordance appearance type). For example, during a predetermined transition period (e.g., T = t2 - t1, e.g., 5 seconds), the affordance appearance value range passes through one or more intermediate ranges between the value ranges of the first appearance type and the second appearance type. At any time during the transition period, the appearance of the affordance is determined based on the particular intermediate value range currently being used as the affordance appearance value range. As shown in the intermediate user interface in fig. 5AD, over the same background content, the affordance has a brightness level intermediate between those of the dark and light affordances.
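The intermediate value ranges described above can be produced by interpolating the endpoints of the two appearance types' value ranges over the transition period. This is a minimal sketch; the linear interpolation and the example ranges in the test are illustrative assumptions, not the patent's actual curves:

```python
def interpolated_range(t, t1, t2, start_range, end_range):
    """Affordance appearance value range at time t during the transition
    period [t1, t2]; the (lo, hi) endpoints are interpolated linearly."""
    frac = min(max((t - t1) / (t2 - t1), 0.0), 1.0)  # clamp to [0, 1]
    lo = start_range[0] + frac * (end_range[0] - start_range[0])
    hi = start_range[1] + frac * (end_range[1] - start_range[1])
    return (lo, hi)
```

At t1 the affordance still uses the first type's range, at t2 it uses the second type's range, and in between it uses a range partway between the two.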
In some implementations, during the transition period, the metric of the overall brightness state of the underlying content continues to be updated over time, with the brightness level of the content at more recent times gradually taking over from the brightness level of the content at earlier times. If the range-switch trigger criteria are met again (e.g., with the same threshold as the earlier switch, or with a different threshold depending on the currently selected affordance appearance type), the switch to the second affordance appearance type is not fully completed, and the affordance returns to the first affordance appearance type. In this particular example, the content does not change and the range-switch trigger criteria are not met again during the transition period; as a result, the switch to the second affordance appearance type is fully completed at time t2 (e.g., the period between t1 and t2 is the predefined transition period). After the switch to the second affordance appearance type is completed, the metric of the overall brightness state of the underlying content continues to be updated over time, and when the range-switch trigger criteria are again met due to accumulated changes in the underlying content (e.g., due to a scene change, scrolling, etc.), a switch back to the first affordance appearance type may occur.
Fig. 5AE illustrates the value ranges and inversion relationships between a display attribute (e.g., brightness) of an affordance and that of the underlying content, for a dark affordance appearance type (A), a light affordance appearance type (C), and a transitional affordance appearance type (B), in accordance with some embodiments. In some embodiments, the inversion curves shown in fig. 5AE are optionally used to generate the appearance of affordance 5002 in fig. 5AD.
In the example shown in fig. 5AE, the shapes of the inversion curves for the different affordance appearance types (A), (B), and (C) are the same. Using inversion curves having the same shape allows the correspondence between background luminance values and affordance luminance values for each point in the graph to be calculated and stored in a data table, such that when a transition between affordance types passes continuously through many intermediate value ranges, the luminance of each pixel of the affordance can be determined simply by a lookup in the table based at least in part on the luminance of the corresponding pixel in the background. For example, during a transition period, each of a plurality of evenly spaced time points is associated with a respective intermediate value range between the light affordance value range and the dark affordance value range, and even if the content varies continuously during the transition period, the affordance appearance can be quickly determined at each time point based on the inversion curve of the corresponding intermediate value range for that time point.
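The table-lookup idea can be sketched as follows: quantize the background luminance, precompute the inversion curve once per value range, and then resolve each pixel with an index lookup. This is an illustrative sketch; the function names, the 256-entry resolution, and the plain (discontinuity-free) example curve in the usage note are assumptions:

```python
def build_inversion_lut(inversion_curve, steps=256):
    """Precompute affordance luminance for quantized background luminance
    values; one table per affordance appearance value range."""
    return [inversion_curve(i / (steps - 1)) for i in range(steps)]

def affordance_luma(lut, background_luma):
    """Per-pixel table lookup, cheap enough to run on every frame."""
    index = round(background_luma * (len(lut) - 1))
    return lut[index]
```

For example, a plain inversion into the range [0, 0.4] could be tabulated as `build_inversion_lut(lambda x: 0.4 * (1.0 - x))`, and each pixel of the affordance then costs one multiplication, one rounding, and one list index per frame.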
As shown in fig. 5AE, the inversion curve 5032 includes two points of discontinuity. The left discontinuity 5034 is introduced to address an interference point (e.g., at 25.4% background luminance) at which a continuous inversion curve without the discontinuity would intersect the identity line 5036 (e.g., affordance luminance = background luminance). The left discontinuity allows the affordance to have a luminance value that is not exactly the same as the background luminance, thereby avoiding the possibility of an "invisible" affordance in some special cases. Similarly, the right discontinuity 5038 is introduced to address another interference point (e.g., at 74.51% background luminance) at which a continuous inversion curve without the discontinuity would intersect the identity line 5036. The right discontinuity likewise allows the affordance to have a luminance value that is not exactly the same as the background luminance, thereby avoiding the possibility of an "invisible" affordance in some special cases.
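The role of such a discontinuity can be illustrated with a toy inversion curve that jumps away from the identity line near the interference point. This is a deliberately simplified sketch: the range [0, 0.4] and the gap of 0.05 are invented, the fixed point of this toy curve falls at about 28.6% background luminance rather than the 25.4% of the curve in fig. 5AE, and the real curve also has the shaped correction regions described below:

```python
def inverted_with_gap(bg_luma, lo=0.0, hi=0.4, gap=0.05):
    """Invert background luminance into [lo, hi], then jump away from the
    identity line so the affordance luminance never comes within `gap`
    of the background luminance (the cause of an 'invisible' affordance)."""
    y = hi - (hi - lo) * bg_luma           # plain inversion into [lo, hi]
    if abs(y - bg_luma) < gap:             # near the interference point
        y = bg_luma + gap if y >= bg_luma else bg_luma - gap
    return min(max(y, 0.0), 1.0)
```

The resulting mapping is discontinuous at the interference point but guarantees a minimum contrast between the affordance and the background everywhere.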
In addition, near discontinuities 5034 and 5038, special corrections are made to the affordance luminance, so that a strict inversion relationship (e.g., increasing background luminance corresponding to decreasing affordance luminance, and vice versa) is not always observed. For example, in the correction region of the left discontinuity 5034, the affordance luminance on the side with the higher background luminance is relatively flat, while the affordance luminance on the side with the lower background luminance is inverted less strongly than under the normal inversion relationship. In the correction region of the right discontinuity 5038, the affordance luminance on the side with the lower background luminance is inverted more strongly than under the normal inversion relationship. The design of the exact shape of these correction regions takes into account the sensitivity of human vision to luminance values within these regions, as well as the need to create sufficient contrast between the affordance and the background near the interference points. For example, the width of an interference region and the adjustment to the normal inversion relationship depend on the amount of contrast needed for the affordance over the background at the corresponding interference point.
Each of the graphs (A), (B), and (C) shown in fig. 5AE further includes a background luminance bar and an affordance response bar (displayed below the graph area). The background luminance bar and the affordance response bar of each graph show the corresponding background luminance and the affordance luminance generated from the inversion curve 5032 in the same graph. In addition, the background color of graph (A), for the dark affordance, is white, so that all shades of the dark affordance can be shown (e.g., including a completely black affordance, but not a completely white affordance). The background color of graph (C), for the light affordance, is black, so that all shades of the light affordance can be shown (e.g., including a completely white affordance, but not a completely black affordance). The background color of graph (B), for the transitional affordance, is gray (50% luminance). Over a background luminance of 50%, the affordance is a slightly darker gray (e.g., a luminance value below 50%), as can be seen by comparing the affordance response at the midpoint of the horizontal axis with the 50% gray bar displayed at the bottom of graph (B).
Fig. 5AF illustrates a gradual transition from a dark affordance appearance type (e.g., represented in graph (A)) to a light affordance appearance type (e.g., represented in graph (C)) through multiple transitional affordance appearance types (e.g., represented in graphs (B-1), (B-2), (B-3)), in accordance with some embodiments.
In some embodiments, when the range-switch trigger criteria are met at time t1, the device begins a gradual transition from a first affordance appearance type (e.g., a dark affordance appearance type) to a second affordance appearance type (e.g., a light affordance appearance type). Then, during a predetermined transition period (e.g., T = t2 - t1), the affordance appearance value range transitions between the value ranges of the first appearance type and the second appearance type (e.g., shifting up or down depending on the direction of the switch) through a plurality of intermediate ranges (e.g., the total number of intermediate ranges depends on the refresh rate of the display and on the gap between the two value ranges, such as the gap between the upper limit of the dark affordance luminance range and 1). At any time during the transition period T, the appearance of the affordance is determined based on the particular intermediate value range currently being used as the affordance appearance value range.
In some implementations, during the transition period, the metric of the overall brightness state of the underlying content continues to be updated over time, with the brightness level of the content at more recent times gradually taking over from the brightness level of the content at earlier times. If the range-switch trigger criteria are met again (e.g., with the same threshold as the earlier switch, or with a different threshold depending on the currently selected affordance appearance type), the switch to the second affordance appearance type is not fully completed; instead, the transition of the affordance appearance value range reverses direction, and the affordance may eventually return to the first affordance appearance type. If the range-switch trigger criteria are not met a second time during the transition period, the switch to the second affordance appearance type is completed at the end of the transition period. After the switch to the second affordance appearance type is completed, the metric of the overall brightness state of the underlying content continues to be updated over time, and when the range-switch trigger criteria are again met due to accumulated changes in the underlying content (e.g., due to a scene change, scrolling, etc.), a switch back to the first affordance appearance type may occur.
In the example shown in fig. 5AF, the shapes of the inversion curves for affordance appearance types (A), (B-1), (B-2), (B-3), and (C) are the same. Using inversion curves having the same shape allows the correspondence between background luminance values and affordance luminance values to be calculated and stored in a data table for each point in the graph (or for the inversion curve of each intermediate value range), such that when a transition between affordance types passes continuously through many intermediate value ranges, the luminance of each pixel of the affordance can be determined simply by a quick lookup in the data table based at least in part on the luminance of the corresponding pixel in the background. For example, during a transition period, each of a plurality of evenly spaced time points is associated with a respective intermediate value range between the light affordance value range and the dark affordance value range, and even if the content changes continuously during the transition period and the underlying content has large brightness variations under different portions of the affordance, the affordance appearance can still be quickly determined at each time point based on the data pre-stored in the data table.
Figs. 5AG-5AK are enlarged copies of the inversion relationships between the display attribute of the affordance and that of the underlying content for the different affordance appearance types shown in fig. 5AF.
Figs. 6A-6C are flowcharts illustrating a method 6000 of changing the appearance of an affordance in accordance with changes in the appearance of the underlying content, in accordance with some embodiments. Method 6000 is performed on an electronic device (e.g., device 300, fig. 3; or portable multifunction device 100, fig. 1A) having a display and a touch-sensitive surface. In some embodiments, the electronic device includes one or more sensors for detecting the intensities of contacts with the touch-sensitive surface. In some embodiments, the touch-sensitive surface and the display are integrated into a touch-sensitive display. In some implementations, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some implementations, the display is separate from the touch-sensitive surface. Some operations in method 6000 are optionally combined and/or the order of some operations is optionally changed.
Method 6000 involves displaying an affordance (e.g., an affordance indicating an acceptable starting region for a gesture to display a home screen) over content in the following manner: a display property of the affordance is dynamically changed based on changes in the same display property (e.g., gray values or luminance values) of the underlying content. Specifically, the value of the display property of the affordance changes in the direction opposite to the change in the value of the same display property of the underlying content (e.g., the gray value of the affordance is an inversion of the gray value of the underlying content). In addition, the value of the display property of the affordance is constrained to a range of values that is smaller than the range of values of the display property of the underlying content. Thus, the device can provide the affordance in a less distracting or less intrusive manner while maintaining sufficient visibility of the affordance as the appearance of the content continues to change (e.g., due to scrolling, scene cuts, and dynamic content playback). Providing an affordance whose appearance changes dynamically based on the appearance of the underlying content in the manner described herein enhances the operability of the device (e.g., by providing guidance to the user regarding the inputs required for desired results without unduly distracting the user, which reduces user error in operating the device) and makes the user-device interface more efficient (e.g., by helping the user achieve intended results with the required inputs and reducing user errors when operating/interacting with the device), which additionally improves the battery life of the device (e.g., by helping the user use the device more quickly and efficiently).
Providing the affordance in the manner described herein allows an on-screen affordance to effectively replace a hardware button that provides the same functionality (e.g., displaying the home screen) in many different user interface scenarios, which helps reduce the manufacturing and maintenance costs of the device. In addition, the claimed solution constrains the range of values of the affordance to eliminate white-on-black and black-on-white contrast between the affordance and the background content, thereby mitigating the risk of display afterimages caused by displaying the affordance over a white or black background for a long period of time. A known cause of screen afterimages is the long-term display of non-moving images (e.g., system-level affordances such as home gesture indicators), combined with non-uniform use of pixels (e.g., the problem is most severe where the contrast between foreground and background content is high). In some use cases (e.g., implementing a system-level home affordance), the proposed solution (e.g., reducing extremely high contrast while maintaining the visual saliency of the affordance) effectively addresses the afterimage problem. The afterimage problem in mobile phone displays and its causes have been well documented in the industry literature for many years, and the problem still exists in many commercial products. Attempts have been made to solve it by letting the affordance move around on the screen or simply disappear after a period of inactivity; however, such solutions may worsen the operability of the device. In the claimed solution, the inversion of a display property of the underlying content provides the basis for determining the value of the same display property of the affordance within a sub-range of display property values, enabling the affordance to be displayed in the same place without the risk of generating afterimages.
The method 6000 is performed on a device having a display and a touch-sensitive surface (e.g., a touch-screen display that acts as both the display and the touch-sensitive surface). The device displays (6002), on the display, content (e.g., a home screen, a widget screen, a desktop, a user interface of an application, a media player user interface, etc.) and an affordance (e.g., a home affordance indicating a home gesture reaction region on the display), wherein: the affordance is displayed over a portion of the content; the value of a display property of the affordance (e.g., a gray value or luminance value of an image (e.g., a color image or monochrome image), an intrinsic display parameter other than the gray value or luminance value (e.g., hue, saturation, etc. of a full-color image), or a derived display parameter calculated based on one or more intrinsic display parameters (e.g., a gray value or luminance value of a full-color image, or variants or equivalents thereof)) is determined based on the value of the same display property of the portion of the content over which the affordance is displayed; and the value of the display property of the content is allowed to vary within a first value range (e.g., the range [0,1]; here the "range" is mathematically characterized by the difference between the maximum value of the range and the minimum value of the range), while the value of the display property of the affordance is constrained to vary within a second value range that is smaller than the first value range (e.g., one of the ranges [0,0.4], [0.6,1], [0.1,0.7], [0,0.7], [0.3,1], etc.; that is, a range whose maximum is smaller than the maximum of the first range and whose minimum is greater than the minimum of the first range, or a range whose maximum is smaller than the maximum of the first range and whose minimum equals the minimum of the first range, or a range whose maximum equals the maximum of the first range and whose minimum is greater than the minimum of the first range). When displaying
content and the affordance, the device detects (6004) a change in the appearance of the content over which the affordance is displayed. In response to detecting the change in the appearance of the content over which the affordance is displayed, the device changes (6006) the appearance of the affordance, including: in accordance with a determination that the value of the display property of the content has decreased, increasing the value of the display property of the affordance in accordance with the magnitude of the change in the value of the display property of the content and the second value range (e.g., the affordance becomes brighter when the content located below the affordance becomes darker); and in accordance with a determination that the value of the display property of the content has increased, decreasing the value of the display property of the affordance in accordance with the magnitude of the change in the value of the display property of the content and the second value range (e.g., the affordance becomes darker when the content located below the affordance becomes brighter). This is illustrated, for example, in figs. 5F, 5G-5P, 5Q, and 5R, where a display property (e.g., luminance) of the affordance changes in accordance with changes in the same display property (e.g., luminance) of the underlying content (e.g., when the content is scrolled). In addition, the affordance appearance value range of the display property (e.g., luminance) is constrained to a sub-range of the value range of the same display property of the underlying content (e.g., the full value range from black (e.g., 0 or 0%) to white (e.g., 1 or 100%)).
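The core behavior of operations 6002-6006 — the affordance value moving in the direction opposite to the content value, while staying within the smaller second value range — can be illustrated with a simple linear inversion. The endpoints [0.3, 1.0] and the function name are illustrative assumptions, not the patent's values:

```python
def affordance_attribute(content_value, lo=0.3, hi=1.0):
    """The content value varies over the full range [0, 1]; the affordance
    value varies in the opposite direction, constrained to [lo, hi]."""
    assert 0.0 <= content_value <= 1.0
    return hi - (hi - lo) * content_value
```

When the content darkens (its value decreases), the affordance brightens, and vice versa; the output never leaves the sub-range, so a fully black or fully white background never meets a fully white or fully black affordance.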
In some implementations, the change in appearance of the content is (6008) due to navigation of the content (e.g., the portion of the content located below the affordance changes due to scrolling, paging, etc. of the content). This is shown, for example, in fig. 5G to 5P. Changing the value of the same display attribute of the affordance while the display attribute of the underlying content changes due to content navigation enhances the operability of the device (e.g., by maintaining sufficient affordance visibility throughout the content navigation and helping the user provide the input needed to achieve the desired result) and makes user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device), thereby additionally improving power efficiency and battery life of the device (e.g., by reducing user errors and helping the user use the device more quickly and efficiently).
In some implementations, the change in appearance of the content is (6010) due to the content changing over time (e.g., the content is a video or animation being played, and the displayed image of the video or animation changes over time). This is shown, for example, in fig. 5T to 5U. Changing the value of the same display attribute of the affordance while the display attribute of the underlying content changes as the content changes over time enhances the operability of the device (e.g., by maintaining sufficient affordance visibility as the content changes over time and helping the user provide the input needed to achieve the desired result), and makes user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device), thereby additionally improving power efficiency and battery life of the device (e.g., by reducing user errors and helping the user use the device more quickly and efficiently).
In some embodiments, the affordance has (6012) a first variant having a first set of endpoints for the second value range and a second variant having a second set of endpoints for the second value range that is different from the first set of endpoints (e.g., the second value range has a first starting point and a first ending point for a "light affordance" initially displayed on a dark background, and the second value range has a second starting point and a second ending point, different from the first starting point and the first ending point, respectively, for a "dark affordance" initially displayed on a bright background). This is shown, for example, in fig. 5A and 5R. In some embodiments, the value range of the first variant does not overlap with the value range of the second variant (e.g., all values in the value range of the "light affordance" are greater than all values in the value range of the "dark affordance"). Providing two variants of the affordance with different value ranges allows the device to further improve the visibility of the affordance while keeping the appearance of the affordance unobtrusive over different types of content, thereby enhancing the operability of the device (e.g., by maintaining sufficient affordance visibility for different types of content and helping the user provide the input needed to achieve a desired result), and making user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, the first variant of the affordance is displayed in accordance with a determination that the content corresponds to a first application (6014), and the second variant of the affordance is displayed in accordance with a determination that the content corresponds to a second application that is different from the first application (e.g., whether the "light" or "dark" variant of the affordance is used in a currently displayed application is selected by the application developer of that application, and the device displays the first variant or the second variant of the affordance in accordance with an affordance-selection parameter or a respective set of endpoints of the second value range specified in the program code of the currently displayed application (e.g., the first set of endpoints or the second set of endpoints of the second value range is used when changing the value of the display attribute of the affordance)). Allowing different applications to use different affordance variants helps an application developer customize the appearance of the affordance based on the application context, further improving compatibility between the appearance of the affordance and the appearance of the application content, thereby enhancing operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide the input needed to achieve desired results), and making user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, the appearance of a first portion of the content over which the affordance is displayed changes (6016) by an amount different from the amount by which the appearance of a second portion of that content changes; and changing the appearance of the affordance includes changing the appearance of a first portion of the affordance corresponding to the first portion of the content by an amount different from the amount by which the appearance of a second portion of the affordance corresponding to the second portion of the content is changed (e.g., each portion of the affordance changes according to the change in appearance of the corresponding portion of the content that underlies it). This is shown, for example, in fig. 5Q. For example, if the underlying content changes, the appearance of the affordance changes (e.g., the affordance is a blurred/desaturated/inverted version of a portion of the content), and the appearance of different portions of the affordance reflects the appearance of the content underlying those portions of the affordance. Allowing different amounts of change to be applied to the display attributes of different portions of the affordance, based on the different amounts of change occurring in the display attributes of different portions of the underlying content, increases the operability of the device (e.g., by maintaining sufficient affordance visibility throughout the content change and helping the user provide the input needed to achieve the desired result) and makes user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device), thereby additionally increasing power efficiency and battery life of the device (e.g., by reducing user errors and helping the user use the device more quickly and efficiently).
In some embodiments, the first value range is (6018) a continuous range of values, and the second value range includes a discontinuity corresponding to at least a first value of the display attribute in the first value range. For example, in some embodiments, for a small range of values where the display attribute of the content located below the affordance is near 0.5, the value of the display attribute of the affordance is discontinuous and jumps from a first value below 0.5 to a second value above 0.5. In some embodiments, the device uses a discontinuous function to calculate the value of the display attribute of the affordance based on the value of the same display attribute of the underlying content, to ensure that the appearance of the affordance does not come too close to the appearance of the underlying content (e.g., to ensure that a gray affordance does not appear over gray content whose gray value is very close to that of the affordance). Fig. 5AE shows an inversion curve including two points of discontinuity (e.g., 5034 and 5038) in the values of the display property of the affordance. Using a discontinuous range of values for the display attribute of the affordance, while maintaining a continuous range of values for the display attribute of the underlying content, helps to avoid the affordance taking values too close to the value of the underlying content, which would result in insufficient visibility of the affordance.
Using a discontinuous range of values of the display properties of the affordances enhances operability of the device (e.g., by maintaining sufficient affordance visibility throughout the content change and helping the user provide the input needed to achieve the desired result) and makes user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device), thereby additionally improving power efficiency and battery life of the device (e.g., by reducing user errors and helping the user use the device more quickly and efficiently).
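A discontinuous inversion function of the kind described above might look like the following sketch. The mid-gray value 0.5 comes from the text; the width of the excluded band (`gap`) is an illustrative assumption.

```python
def inverted_with_gap(content_gray, gap=0.1):
    """Invert a content gray value in [0, 1], but keep the result out of
    an excluded band around mid-gray 0.5, so the affordance never takes a
    value too close to mid-gray content. The clamping produces a jump
    (discontinuity) at content_gray == 0.5: for content just below 0.5
    the function returns 0.5 + gap, and for content at or just above 0.5
    it returns 0.5 - gap."""
    v = 1.0 - content_gray            # plain inversion
    if 0.5 - gap < v < 0.5:           # clamp values just below mid-gray
        return 0.5 - gap
    if 0.5 <= v < 0.5 + gap:          # clamp values at/just above mid-gray
        return 0.5 + gap
    return v
```

Away from mid-gray the function is the ordinary inversion (content 0.2 maps to 0.8); near mid-gray the output jumps across the band, so the affordance always differs from the content by at least `gap`.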
In some embodiments, the affordance has (6020) a first variant and a second variant (e.g., a "light affordance" and a "dark affordance"), the value range corresponding to the first variant of the affordance and the value range corresponding to the second variant of the affordance do not overlap (e.g., the value range of the "light affordance" and the value range of the "dark affordance" are separated by a "cutoff value range"), and the device dynamically selects one of the first variant and the second variant to display over the content based on an initial value of the display attribute of the content at a predetermined time (e.g., when the application is launched, when there is a scene change in a video, or when a user interface within the application switches, etc.). For example, if the content initially has a darker gray value (e.g., less than 0.5), then the affordance has an initial value in the light gray value range (e.g., greater than 0.6); and if the content initially has a brighter gray value (e.g., greater than 0.5), then the affordance has an initial value within the dark gray value range (e.g., less than 0.4). The exact gray value of the affordance is optionally obtained by inverting the gray value of the content with a corresponding inversion function for the light or dark value range associated with the initial appearance of the affordance. In some embodiments, if the affordance is a light affordance (e.g., gray value 0.9) that is initially displayed over dark content (e.g., gray value 0.2), then as the underlying content becomes brighter (e.g., the gray value increases toward 1), the affordance becomes darker (e.g., the gray value decreases toward 0), but the affordance is constrained by a minimum threshold gray value (e.g., 0.6) that is still brighter than the center gray value (e.g., 0.5) on the gray scale [0,1].
As the underlying content becomes darker (e.g., the gray value decreases toward 0), the affordance becomes lighter (e.g., the gray value increases toward 1) until it becomes completely white (e.g., reaching the end value of 1 on the gray scale [0,1]) when the content becomes completely black. In another example, if the affordance is a dark affordance (e.g., gray value 0.2) that is initially displayed over bright content (e.g., gray value 0.9), then as the underlying content becomes darker (e.g., the gray value decreases toward 0), the affordance becomes lighter (e.g., the gray value increases toward 1), but the affordance is constrained by a maximum threshold gray value (e.g., 0.4) that is still darker than the center gray value (e.g., 0.5) on the gray scale [0,1]. As the underlying content becomes brighter (e.g., the gray value increases toward 1), the affordance becomes darker (e.g., the gray value decreases toward 0) until it becomes completely black (e.g., reaching the end value of 0 on the gray scale [0,1]) when the content becomes completely white. Fig. 5G illustrates selecting the dark affordance based on an initial overall brightness state of relatively dark underlying content, and fig. 5L illustrates selecting the light affordance based on an initial overall brightness state of relatively bright underlying content.
Allowing an application to dynamically select from two different variants of affordances (e.g., for darker content and lighter content) further improves the appearance of the affordance based on the application context, thereby further improving compatibility between the appearance of the affordance and the appearance of the underlying content, thereby enhancing operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide input needed to achieve the desired result), and making user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
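The variant selection and per-variant inversion described above can be sketched as follows, using the example endpoint values from the text (threshold 0.5; light variant confined to [0.6, 1.0], dark variant to [0.0, 0.4]); the linear form of the inversion is an assumption.

```python
def select_variant(initial_content_gray):
    """Pick the affordance variant at a predetermined time (e.g., app
    launch): the light variant over initially dark content, the dark
    variant over initially bright content."""
    return "light" if initial_content_gray < 0.5 else "dark"

def variant_affordance_gray(content_gray, variant):
    """Invert the content gray into the non-overlapping range of the
    chosen variant: [0.6, 1.0] for the light variant, [0.0, 0.4] for
    the dark variant (endpoints taken from the examples in the text)."""
    lo, hi = (0.6, 1.0) if variant == "light" else (0.0, 0.4)
    return hi - (hi - lo) * content_gray
```

This reproduces the behavior described above: the light affordance reaches pure white (1.0) over pure black content and bottoms out at its 0.6 floor over white content, while the dark affordance reaches pure black (0.0) over white content and tops out at its 0.4 ceiling over black content.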
In some embodiments, displaying the affordance includes (6022) displaying the affordance at a first size (e.g., near a bottom edge of the device) when the device is in a first orientation, and the method includes: while the affordance is displayed at the first size, detecting rotation of the device from the first orientation to a second orientation that is different from the first orientation (e.g., rotation of the device changes the displayed content from a first user interface orientation (e.g., portrait orientation) to a second user interface orientation (e.g., landscape orientation)); and in response to detecting the rotation of the device from the first orientation to the second orientation, displaying the affordance at a second size different from the first size (and, optionally, at a different location (e.g., near the new bottom edge of the device as defined based on the second orientation of the device)). In some embodiments, a longer version of the affordance is displayed when the device is in the landscape orientation and a shorter version of the affordance is displayed when the device is in the portrait orientation. Displaying the affordance at different sizes when the device is rotated improves visual compatibility between the appearance of the affordance and the orientation of the device (and thus the orientation of the content), thereby enhancing operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide the input needed to achieve desired results), and making user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, the display attribute is (6024) a gray value (e.g., the gray value has a full value range [0,1], representing gray values ranging from black (e.g., gray value = 0) to white (e.g., gray value = 1)). Providing an affordance that changes its gray value based on the gray value of the underlying content enhances the operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide the input needed to achieve the desired result) and makes user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some implementations, the current value of the display attribute of the content is obtained (6026) by blurring a portion of the content (e.g., by applying a blur function (e.g., a Gaussian blur function) having a predefined blur radius to the region of the content that is located directly below the affordance and to the content within at least one blur radius surrounding that region). In some embodiments, after blurring the content, other filters are applied, such as desaturation and/or a change of opacity. Providing an affordance whose display properties are derived from the display properties of a blurred version of the underlying content enhances operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide the input needed to achieve desired results) and makes user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
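As a rough sketch of the blurring step, the following uses a simple box blur over a 2-D grid of gray values in place of the Gaussian blur the text mentions; the kernel shape and radius are illustrative assumptions.

```python
def box_blur(gray, radius=1):
    """Box-blur a 2-D grid of gray values in [0, 1]: each output pixel
    is the mean of the input pixels within `radius` of it (clipped at
    the edges). A stand-in for the Gaussian blur described in the text."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += gray[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out
```

Blurring the region below the affordance smooths out local detail, so the affordance's derived value tracks the overall brightness of the region rather than flickering with individual pixels.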
In some implementations, the current value of the display attribute of the content is obtained (6028) by desaturating a portion of the content (e.g., by converting the color value (e.g., RGB value, HSL value, or HSV value) of each pixel in the region of the content below the affordance, or within at least one blur radius around that region, to a corresponding scalar value (e.g., a gray value) on a monochrome scale (e.g., a gray scale)). In some embodiments, after the content is desaturated, other filters are applied, such as blurring and/or a change of opacity. Providing an affordance whose display properties are derived from the display properties of a desaturated version of the underlying content enhances operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide the input needed to achieve desired results), and makes user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
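The desaturation step might be sketched as follows. The Rec. 709 luma weights used here are one common RGB-to-gray conversion; the text does not specify which conversion is used.

```python
def desaturate(rgb):
    """Convert an (r, g, b) color with components in [0, 1] to a single
    gray value on a monochrome scale, using Rec. 709 luma weights
    (an assumed choice -- the text leaves the conversion unspecified)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

Note that the weights favor green, matching perceived brightness: a pure green pixel desaturates to a brighter gray than an equally saturated red or blue pixel.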
In some implementations, the affordance indicates (6030) a reaction area for initiating a predefined gesture input (e.g., a home/multitasking gesture for displaying a home screen and/or an application-switcher user interface). This is shown, for example, in fig. 5A to 5D. In some embodiments, the affordance is not an actual button: tapping or pressing the affordance does not trigger any function of the device. In some implementations, when the affordance is no longer displayed (e.g., after it fades out in a full-screen mode of the application), the predefined gesture (e.g., the home/multitasking gesture) still works as it does when the affordance is displayed. In some implementations, when a user input (e.g., a home/multitasking gesture, a tap input on the display, etc.) is detected in the full-screen content display mode, the affordance is redisplayed. In some embodiments, the affordance is a narrow affordance whose height is small relative to its length. An affordance indicating a reaction area for initiating a predefined gesture generally need not have particularly enhanced visibility relative to the underlying content, because the reaction area of a gesture is generally wider than the reaction area of a button; it is therefore further advantageous for the affordance to reduce interference with the user, which helps to avoid user errors in interacting with the device. Thus, using an affordance with dynamically changing display properties in the manner described herein enhances operability of the device (e.g., by maintaining adequate affordance visibility without undue interference to the user and helping the user provide the input needed to achieve desired results) and makes user-device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
It should be understood that the particular order in which the operations in fig. 6A-6C are described is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., method 7000 and method 8000) are equally applicable in a similar manner to method 6000 described above with respect to fig. 6A-6C. For example, the contacts, gestures, user interface objects, application views, control panels, controls, affordances, position thresholds, orientation conditions, reversal curves, filters, value ranges, navigation criteria, movement parameters, focus selectors, and/or animations described above with reference to method 6000 optionally have one or more of the features of contacts, gestures, user interface objects, application views, control panels, controls, position thresholds, orientation conditions, navigation criteria, movement parameters, focus selectors, and/or animations described herein with reference to other methods described herein (e.g., methods 7000 and 8000). For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general purpose processor (e.g., as described above with respect to fig. 1A and 3) or an application-specific chip.
The operations described above with reference to fig. 6A-6C are optionally implemented by the components depicted in fig. 1A-1B. For example, the detection operations and the change operations are optionally implemented by the event sorter 170, the event recognizer 180, and the event handler 190. An event monitor 171 in the event sorter 170 detects a contact on the touch-sensitive display 112 and an event dispatcher module 174 communicates the event information to the application 136-1. The respective event identifier 180 of the application 136-1 compares the event information to the respective event definition 186 and determines whether the first contact at the first location on the touch-sensitive surface (or whether the rotation of the device) corresponds to a predefined event or sub-event, such as a selection of an object on the user interface, or a rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, the event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or invokes data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a corresponding GUI updater 178 to update what is displayed by the application. Similarly, it will be apparent to those skilled in the art how other processes may be implemented based on the components depicted in fig. 1A-1B.
Fig. 7A-7E are flowcharts illustrating methods of changing the appearance of an affordance in accordance with changes in the appearance of the underlying content and changes in the mode of the user interface displaying the affordance, in accordance with some embodiments. Method 7000 is performed on an electronic device (e.g., device 300, fig. 3; or portable multifunction device 100, fig. 1A) having a display and a touch-sensitive surface. In some embodiments, the electronic device includes one or more sensors for detecting the intensity of contacts with the touch-sensitive surface. In some embodiments, the touch-sensitive surface and the display are integrated into a touch-sensitive display. In some implementations, the display is a touch screen display and the touch-sensitive surface is on or integrated with the display. In some implementations, the display is separate from the touch-sensitive surface. Some operations in method 7000 are optionally combined and/or the order of some operations is optionally changed.
Method 7000 involves displaying an affordance (e.g., an affordance indicating an acceptable starting region for a gesture to display a home screen) over an application user interface in a manner in which a set of display attributes (e.g., gray value, brightness value, opacity, hue, saturation, etc.) of the affordance is changed based on a set of display attributes of the underlying content in accordance with two different sets of rules, depending on the current display mode of the application user interface. For example, when the application user interface is displayed in an interactive mode (e.g., where user input is expected frequently), the appearance of the affordance changes in a first manner (e.g., based on a first set of rules) based on the appearance of the underlying content, such that the affordance is more apparent; and when the application user interface is displayed in a full-screen content display mode (e.g., where content viewing is likely the primary goal), the appearance of the affordance changes in a second manner (e.g., based on a second set of rules) based on the appearance of the underlying content, such that the affordance is less distracting to the user. Thus, providing an affordance that changes its appearance based on the appearance of the underlying content differently depending on the display mode of the application user interface enhances the operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the input required for a desired result, without undue interference with the user, which reduces user error in operating the device), and makes the user-device interface more efficient (e.g., by helping the user achieve a desired result with the required inputs and reducing user errors in operating/interacting with the device), thereby improving the battery life of the device (e.g., by helping the user use the device more quickly and efficiently).
Providing affordances in the manner described herein allows on-screen affordances to effectively replace hardware buttons that provide the same functionality (e.g., display home screens) in many different user interface scenarios, which helps reduce manufacturing and maintenance costs of the device. Providing affordances in the manner described herein also helps to reduce and eliminate afterimage problems with the display.
Method 7000 is performed on a device having a display and a touch-sensitive surface (e.g., a touch screen display that acts as both the display and the touch-sensitive surface). The device displays (7002) a user interface of an application (e.g., a media player user interface, a browser user interface, an instant messaging user interface, a map user interface, a telephone user interface, a game user interface, etc.) in a first mode (e.g., a non-full-screen content display mode, a windowed mode, or a default mode). While the user interface of the application is displayed in the first mode, the device displays (7004) an affordance (e.g., a home affordance indicating a home gesture reaction area on the display) over the user interface with a first appearance, wherein: the affordance is displayed over a portion of the user interface (e.g., in a first predefined area of the display (e.g., a home affordance display area located near a bottom center area of the display)); and, in accordance with a first set of one or more rules, the values of a set of one or more display attributes of the affordance having the first appearance vary in accordance with changes in the values of the set of one or more display attributes of the portion of the user interface located below the affordance (e.g., the set of one or more display attributes of the affordance is obtained by applying a first set of filters to desaturate, blur, change the opacity of, and/or invert the brightness or gray values of the image of the portion of the user interface located below the affordance).
When the affordance having the first appearance is displayed over a portion of the user interface displayed in the first mode, the device detects (7006) a request to transition from displaying the user interface in the first mode to displaying the user interface in a second mode (e.g., a full-screen content display mode). The request is, for example, generated by the application or the operating system providing the user interface based on current operating conditions (e.g., no input for a long period of time, predefined criteria for switching between operating modes, etc.), or is a user request (e.g., a tap input or swipe input made by a contact on the touch screen, etc.). In response to detecting the request: the device displays (7008) the user interface in the second mode (e.g., in the full-screen content display mode, a portion of the original user interface is enlarged, some user interface elements in the user interface (such as application menu bars, scroll bars, etc.) are removed from the user interface, and a system status bar previously displayed with the user interface is also removed from the display); and the device displays an affordance having a second appearance over the user interface displayed in the second mode, wherein: in accordance with a second set of one or more rules that is different from the first set of one or more rules, the values of the set of one or more display attributes of the affordance having the second appearance vary in accordance with changes in the values of the set of one or more display attributes of the portion of the user interface that is located below the affordance (e.g., the set of one or more display attributes of the affordance is obtained by applying a second set of filters to desaturate, blur, change the opacity of, and/or invert the brightness or gray values of the image of the portion of the user interface that is located below the affordance) (e.g., the affordance having the second appearance is a variation of the affordance having the first appearance; both are derived from the portion of the user interface that is located below the affordance, but using a different set of filters, or the same set of filters with different adjustment parameters). This is illustrated in fig. 5S-5W, where affordance 5002 is displayed in a second state (e.g., the low-contrast state in fig. 5V and 5W), for example, when the device transitions from the interactive mode to the media consumption mode after initially displaying the affordance in the first state (e.g., the fully-visible state in fig. 5T and 5U).
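A minimal sketch of two mode-dependent rule sets is shown below, assuming each rule set is a simple linear inversion with a different output range; the particular ranges are illustrative assumptions, not values from the text.

```python
def affordance_gray(content_gray, mode):
    """Apply one of two rule sets based on the current display mode:
    the interactive mode maps into a wide output range (a more visible,
    higher-contrast affordance), while the full-screen content display
    mode maps into a narrow range near mid-gray (a less distracting
    affordance). Both ranges are illustrative assumptions."""
    if mode == "interactive":
        lo, hi = 0.0, 1.0        # full inversion range: most visible
    elif mode == "full_screen":
        lo, hi = 0.35, 0.65      # compressed range: less distracting
    else:
        raise ValueError("unknown display mode: " + mode)
    return hi - (hi - lo) * content_gray
```

Under this sketch the same change in underlying content moves the affordance much less in full-screen mode than in interactive mode, which is one way the affordance could be made "less distracting" while content viewing is the primary goal.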
In some embodiments, while the affordance having the second appearance is displayed over the user interface displayed in the second mode: in accordance with a determination that fade-out criteria are met, the device ceases (7010) to display the affordance over the user interface displayed in the second mode (e.g., while maintaining display of the user interface in the second mode); and in accordance with a determination that the fade-out criteria are not met, the device maintains display of the affordance having the second appearance over the user interface displayed in the second mode. This is shown, for example, in fig. 5X, where the affordance 5002 in the low-contrast state eventually disappears completely after an additional period of time without user input. In some implementations, the fade-out criteria require that no user input be detected on the touch-sensitive surface for at least a predefined threshold amount of time in order for the fade-out criteria to be satisfied (e.g., the fade-out criteria are satisfied when no touch input is detected by the device anywhere on the touch-sensitive surface within 30 seconds after entering the full-screen content display mode, or when no touch input is detected by the device near the bottom center area of the touch-screen display within 30 seconds after entering the full-screen content display mode (e.g., other portions of the display may still continue to receive and respond to user input without affecting the determination regarding the fade-out criteria)).
Fading out or maintaining display of the affordance in a predefined display mode of the application user interface based on predefined criteria enhances operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the input required for a desired result, without undue interference with the user, which reduces user error in operating the device) and makes the user-device interface more efficient (e.g., by helping the user achieve a desired result with the required inputs and reducing user errors in operating/interacting with the device), thereby improving the battery life of the device (e.g., by helping the user use the device more quickly and efficiently).
In some implementations, the user interface of the application displayed in the first mode includes (7012) a representation of content (e.g., video, games, documents, song album graphics) that occupies less than all of the display area of the display (e.g., the user interface of the application displayed in the first mode is displayed simultaneously with the system status bar on the display); and the user interface of the application displayed in the second mode includes a representation of the content occupying all of the display area of the display (e.g., in full screen content display mode, a portion of the original user interface is enlarged, some user interface elements in the user interface (such as application menu bars, scroll bars, etc.) are removed from the user interface, and the system status bar previously displayed with the user interface is also removed from the display). Providing an affordance that changes its appearance based on the appearance of the underlying content in a different manner depending on whether the content is displayed in a regular display mode or a full-screen display mode enhances the operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the required input of the desired result without undue interference with the user, which may reduce user error in operating the device), and makes the user device interface more efficient (e.g., by helping the user achieve the desired result with the required input and reducing user error in operating/interacting with the device), thereby improving the battery life of the device (e.g., by helping the user use the device more quickly and efficiently).
In some implementations, at least one of the first appearance and the second appearance of the affordance (e.g., as reflected in a brightness, intensity, or grayscale value of the affordance) is based on (7014) an inversion of a portion of the user interface that is below the affordance. For example, in some embodiments, a portion of the user interface underlying the affordance is desaturated to obtain a monochrome image, the monochrome image is blurred, and then the brightness or gray-scale values of the pixels in the blurred monochrome image are inverted to obtain the brightness or gray-scale values of the pixels in the affordance. Providing affordances with display properties derived based on inversions of underlying content enhances operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide input needed to achieve desired results) and makes user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
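The desaturate → blur → invert pipeline given as an example above can be sketched on a small grayscale patch. This is an illustrative sketch only: the Rec. 709 luma weights, the box blur standing in for whatever blur kernel an implementation uses, and the `amount` parameter for partial inversion are all assumptions, not details from the source.

```python
def desaturate(pixels):
    # pixels: rows of (r, g, b) tuples in [0.0, 1.0]; returns a monochrome
    # image using illustrative Rec. 709 luma weights.
    return [[0.2126 * r + 0.7152 * g + 0.0722 * b for (r, g, b) in row]
            for row in pixels]

def box_blur(gray, radius=1):
    # Simple box blur as a stand-in for the unspecified blur function.
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += gray[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def invert(gray, amount=1.0):
    # amount=1.0 is full inversion; amount<1.0 blends toward the original
    # value, modeling the "smaller amount of inversion" of the second rule set.
    return [[(1.0 - v) * amount + v * (1.0 - amount) for v in row] for row in gray]

def affordance_luminance(patch, radius=1, inversion=1.0):
    # Desaturate, blur, then invert the content patch under the affordance to
    # obtain the affordance's per-pixel luminance values.
    return invert(box_blur(desaturate(patch), radius), inversion)
```

With full inversion, a bright content patch yields a dark affordance and vice versa, which is how the affordance keeps visual contrast against its background.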
In some embodiments, a first set of rules requires (7016) that a first amount of inversion be applied to a portion of the user interface that is below the affordance to obtain a first appearance of the affordance, a second set of rules requires that a second amount of inversion be applied to a portion of the user interface that is below the affordance to obtain a second appearance of the affordance, and the second amount of inversion is less than the first amount of inversion (e.g., the second set of rules reduces the amount of inversion of a portion of the user interface that is below the affordance to obtain a set of display attributes of the affordance). Changing the amount of inversion applied to the underlying content to obtain the affordance facilitates adjusting the visibility of the affordance in accordance with the display mode of the application user interface, thereby enhancing operability of the device (e.g., by intelligently balancing sufficient visibility requirements with unobtrusiveness requirements of the affordance and helping a user provide input needed to achieve a desired result), and making user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, at least one of the first appearance and the second appearance of the affordance (e.g., as reflected in a brightness, intensity, or grayscale value of the affordance) is obtained by obscuring (7018) a portion of the user interface that is located below the affordance. For example, in some embodiments, a portion of the user interface underlying the affordance is desaturated to obtain a monochrome image, the monochrome image is blurred, and then the brightness or gray-scale values of the pixels in the blurred monochrome image are inverted to obtain the brightness or gray-scale values of the pixels in the affordance. Providing affordances with display attributes obtained by obscuring underlying content enhances operability of the device (e.g., by maintaining sufficient affordance visibility without undue interference to the user and helping the user provide input needed to achieve desired results), and makes user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, a first set of rules requires (7020) that a first amount of blur (e.g., a gaussian blur function) be applied to a portion of the user interface located below the affordance to obtain a first appearance of the affordance, a second set of rules requires that a second amount of blur (e.g., a gaussian blur function) be applied to a portion of the user interface located below the affordance to obtain a second appearance of the affordance, and the second amount of blur is less than the first amount of blur (e.g., the second blur function has a smaller blur radius than the first blur function) (e.g., the second set of rules reduces the amount of blur of a portion of the user interface located below the affordance to obtain a set of display attributes of the affordance). Changing the amount of blurring applied to the underlying content to obtain the affordance facilitates adjusting the visibility of the affordance in accordance with the display mode of the application user interface, thereby enhancing operability of the device (e.g., by intelligently balancing sufficient visibility requirements with unobtrusiveness requirements of the affordance and helping a user provide input needed to achieve a desired result), and making user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, at least one of the first appearance and the second appearance of the affordance (e.g., as reflected in a brightness, intensity, or grayscale value of the affordance) is obtained by (7022) desaturating a portion of the user interface that is located below the affordance. For example, in some embodiments, a portion of the user interface underlying the affordance is desaturated to obtain a monochrome image, the monochrome image is blurred, and then the brightness or gray-scale values of the pixels in the blurred monochrome image are inverted to obtain the brightness or gray-scale values of the pixels in the affordance. Providing affordances with display properties obtained by desaturating underlying content enhances operability of the device (e.g., by maintaining adequate affordance visibility without undue interference to the user and helping the user provide input needed to achieve desired results), and makes user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, a first set of rules requires (7024) that a first amount of desaturation be applied to a portion of the user interface that is below the affordance to obtain a first appearance of the affordance, a second set of rules requires that a second amount of desaturation be applied to a portion of the user interface that is below the affordance to obtain a second appearance of the affordance, and the second amount of desaturation is less than the first amount of desaturation (e.g., the second set of rules reduces the amount of desaturation of a portion of the user interface that is below the affordance to obtain a set of display properties of the affordance). Changing the amount of desaturation applied to the underlying content to obtain an affordance facilitates adjusting the visibility of the affordance in accordance with the display mode of the application user interface, thereby enhancing operability of the device (e.g., by intelligently balancing sufficient visibility requirements with unobtrusiveness requirements of the affordance and helping a user provide input needed to achieve a desired result), and making user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
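The three contrasts above (a larger first amount versus a smaller second amount of inversion, blur, and desaturation) can be captured as two parameter bundles selected by display mode. The bundle names and the particular numeric values below are illustrative assumptions; the source specifies only that each second amount is less than the corresponding first amount.

```python
# Illustrative filter parameters for the two rule sets; only the ordering
# (second set applies less of each filter) comes from the source.
FIRST_SET_OF_RULES  = {"inversion": 1.0, "blur_radius": 20, "desaturation": 1.0}
SECOND_SET_OF_RULES = {"inversion": 0.6, "blur_radius": 10, "desaturation": 0.7}

def rules_for_mode(full_screen: bool):
    # In the second (full-screen) mode, the affordance is generated with
    # smaller filter amounts, making it less visually prominent over content.
    return SECOND_SET_OF_RULES if full_screen else FIRST_SET_OF_RULES
```

Grouping the amounts this way makes the later mode transition straightforward, since the rule sets differ only in their adjustment parameters, not in the types of filters applied.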
In some implementations, the device detects (7026) an input that satisfies a first affordance redisplay criterion when the user interface is displayed in the second mode without the affordance being displayed, wherein the first affordance redisplay criterion is satisfied when the input is detected on the touch-sensitive surface (e.g., at a location corresponding to the first display location); and in response to detecting an input meeting the first affordance redisplay criterion, the device redisplays the affordance on the user interface displayed in the second mode. This is shown, for example, in fig. 5X through 5AA. In some embodiments, when the affordance is redisplayed on the user interface displayed in the second mode, the values of the set of one or more display attributes of the affordance change in accordance with changes in the values of the set of one or more display attributes of a portion of the user interface in accordance with the second set of one or more rules. In some embodiments, when the affordance is redisplayed on the user interface displayed in the first mode, the values of the set of one or more display attributes of the affordance change in accordance with changes in the values of the set of one or more display attributes of a portion of the user interface in accordance with the first set of one or more rules. Redisplaying the affordance, after the affordance has faded out, based on predefined criteria enhances operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the required input of the desired result without undue interference with the user, which may reduce user error in operating the device), and makes the user device interface more efficient (e.g., by utilizing the required input to assist the user in achieving the desired result and reducing user error in operating/interacting with the device), thereby improving battery life of the device (e.g., by helping the user use the device more quickly and efficiently).
In some embodiments, the device detects (7028) an input that meets a second affordance redisplay criterion when the user interface is displayed in the second mode without the affordance being displayed, wherein the second affordance redisplay criterion is met when the input is a request to transition from displaying the user interface in the second mode to displaying the user interface in the first mode; and in response to detecting an input meeting the second affordance redisplay criterion, the device redisplays the user interface in the first mode and redisplays the affordance on the redisplayed user interface in the first mode. This is shown, for example, in fig. 5X through 5AA. In some embodiments, when the affordance is redisplayed on the user interface displayed in the first mode, the values of the set of one or more display attributes of the affordance change in accordance with changes in the values of the set of one or more display attributes of a portion of the user interface in accordance with the first set of one or more rules. Redisplaying the affordance when transitioning from the second display mode to the first display mode enhances operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the required input of the desired result without undue interference with the user, which may reduce user error in operating the device), and makes the user device interface more efficient (e.g., by utilizing the required input to assist the user in achieving the desired result and reducing user error in operating/interacting with the device), thereby improving battery life of the device (e.g., by helping the user to use the device more quickly and efficiently).
In some implementations, at least one of the first appearance and the second appearance of the affordance is dynamically adjusted (e.g., due to dynamic content changes in the user interface or due to navigation through content displayed in the user interface) based on changes that occur in a portion of the user interface that is located below the affordance (7030). This is shown, for example, in fig. 5T to 5W. For example, when scrolling the user interface, or when the user interface displayed in the second mode is a full screen movie, game, or web page that is evolving and refreshed, the appearance of the affordance is also updated continually to reflect the underlying user interface changes. The appearance of the affordance dynamically changes according to a first set of rules when the user interface is displayed in a first mode and the appearance of the affordance dynamically changes according to a second set of rules when the user interface is displayed in a second mode. Dynamically changing the appearance of the affordance based on changes in the appearance of the underlying content enhances the operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the required input of the desired result without undue interference with the user) and makes the user device interface more efficient (e.g., by utilizing the required input to assist the user in achieving the desired result and reducing user errors in operating/interacting with the device), thereby improving the battery life of the device (e.g., by helping the user use the device more quickly and efficiently).
In some embodiments, the first appearance is generated (7032) based on a first set of filters (e.g., blur, desaturation, and reversal) applied to a portion of the user interface located below the affordance (and an edge region around the region), the second appearance is generated based on a second set of filters applied to a portion of the user interface located below the affordance, and for two or more filters in the second set of filters, the first set of filters includes corresponding filters of the same type but with different adjustment parameters (e.g., blur filters with different blur radii, reversal filters with different reversal curves, opaque filters with different levels of transparency, desaturation filters with different desaturation ratios, etc.). Using a set of filters with different adjustment parameters to provide affordances that change their appearance differently based on the appearance of underlying content allows the affordances to remain relatively consistent in appearance and less distracting to the user during switching of display modes, thereby enhancing operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the required input of the desired result without undue interference with the user, which may reduce user error in operating the device), and making the user device interface more efficient (e.g., by utilizing the required input to assist the user in achieving the desired result and reducing user error in operating/interacting with the device).
In some embodiments, in response to detecting the request: the device generates (7034) one or more intermediate appearances for the affordance, the one or more intermediate appearances being intermediate between the first appearance and the second appearance; and the device displays the one or more intermediate appearances of the affordance as a transition between displaying the affordance having the first appearance and displaying the affordance having the second appearance. In some embodiments, the intermediate appearance is displayed on the user interface in the second mode, and the intermediate appearance is an interpolation between the first appearance and the second appearance of the affordance. Generating intermediate appearances of the affordance during display mode switching of the application user interface, to bridge the transition in the appearance of the affordance, makes the change less distracting to the user, thereby enhancing operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding required inputs for desired results without undue interference with the user, which may reduce user error in operating the device), and making the user device interface more efficient (e.g., by utilizing the required inputs to assist the user in achieving the desired results and reducing user error in operating/interacting with the device).
In some embodiments, generating one or more intermediate appearances for the affordance that are intermediate between the first appearance and the second appearance includes (7036) gradually transitioning from the first set of rules to the second set of rules (e.g., changing the magnitude of one filter applied to the underlying content to generate the affordance, without changing the magnitudes of the other filters applied to the underlying content to generate the affordance). Generating an intermediate appearance of the affordance by gradually transitioning the set of rules used to generate the appearance of the affordance makes the transition less distracting to the user, thereby enhancing operability of the device (e.g., by providing the user with an appropriate amount of guidance regarding the required input of the desired result without undue interference with the user, which may reduce user error in operating the device), and making the user device interface more efficient (e.g., by utilizing the required input to assist the user in achieving the desired result and reducing user error in operating/interacting with the device).
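The "gradually transitioning from the first set of rules to the second set of rules" described above can be sketched as a parameter interpolation. A minimal sketch, assuming the rule sets are dictionaries of numeric filter parameters (an illustrative representation, not the source's):

```python
def interpolate_rules(first, second, t):
    """Linearly interpolate every filter parameter between the first and
    second rule sets. t in [0, 1]: t=0 reproduces the first set, t=1 the
    second; intermediate t values yield the intermediate appearances shown
    during the display mode transition."""
    return {k: (1.0 - t) * first[k] + t * second[k] for k in first}
```

Stepping `t` over a few frames produces the sequence of intermediate appearances; holding every parameter but one fixed (as the parenthetical example suggests) corresponds to interpolating only a single key.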
In some embodiments, the affordance having a first appearance has (7038) a first opacity and the affordance having a second appearance has a second opacity that is less than the first opacity (e.g., the color of the user interface is displayed as transmitted). Changing the opacity of the affordance based on the display mode of the application user interface helps to adjust the visibility of the affordance according to the display mode of the application user interface, thereby enhancing operability of the device (e.g., by intelligently balancing sufficient visibility requirements and unobtrusiveness requirements of the affordance and helping a user provide input needed to achieve a desired result), and making user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, the affordance having the first appearance and the affordance having the second appearance have (7040) the same dimensions and locations on the display. Maintaining the size and position of the affordance in the different display modes of the application user interface helps to maintain continuity of the appearance of the affordance during user interface switching, thereby enhancing operability of the device (e.g., by maintaining a user's context and helping the user provide input needed to achieve a desired result), and making user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
In some embodiments, the visual distinction between the affordance formed (7042) by the first set of one or more rules and the user interface of the application is greater than the visual distinction between the affordance formed by the second set of one or more rules and the user interface of the application. Changing the visual distinctions of the affordances based on the display mode of the application user interface facilitates adjusting the visibility of the affordances according to the display mode of the application user interface, thereby enhancing operability of the device (e.g., by intelligently balancing sufficient visibility requirements and unobtrusiveness requirements of the affordances and helping a user provide input needed to achieve desired results), and making user device operation more efficient (e.g., by reducing interference to the user and reducing user errors in using or interacting with the device).
It should be understood that the particular order in which the operations in fig. 7A-7E are described is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., method 6000 and method 8000) are equally applicable in a similar manner to method 7000 described above with respect to fig. 7A-7E. For example, the contacts, gestures, user interface objects, application views, control panels, controls, affordances, position thresholds, orientation conditions, reversal curves, filters, value ranges, navigation criteria, movement parameters, focus selectors, and/or animations described above with reference to method 7000 optionally have one or more of the features of contacts, gestures, user interface objects, application views, control panels, controls, position thresholds, orientation conditions, navigation criteria, movement parameters, focus selectors, and/or animations described herein with reference to other methods described herein (e.g., methods 6000 and 8000). For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general purpose processor (e.g., as described above with respect to fig. 1A and 3) or an application-specific chip.
The operations described above with reference to fig. 7A-7E are optionally implemented by the components depicted in fig. 1A-1B. For example, the detection operations and display operations are optionally implemented by event sorter 170, event recognizer 180, and event handler 190. An event monitor 171 in the event sorter 170 detects a contact on the touch-sensitive display 112 and an event dispatcher module 174 communicates the event information to the application 136-1. The respective event identifier 180 of the application 136-1 compares the event information to the respective event definition 186 and determines whether the first contact at the first location on the touch-sensitive surface (or whether the rotation of the device) corresponds to a predefined event or sub-event, such as a selection of an object on the user interface, or a rotation of the device from one orientation to another. When a corresponding predefined event or sub-event is detected, the event recognizer 180 activates an event handler 190 that is associated with the detection of the event or sub-event. Event handler 190 optionally uses or invokes data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a corresponding GUI updater 178 to update what is displayed by the application. Similarly, it will be apparent to those skilled in the art how other processes may be implemented based on the components depicted in fig. 1A-1B.
Fig. 8A-8F are flowcharts illustrating methods of changing the appearance of an affordance and the type of affordance appearance as a function of changes in the appearance of underlying content in accordance with some embodiments. Method 8000 is performed on an electronic device (e.g., device 300, FIG. 3; or portable multifunction device 100, FIG. 1A) having a display and a touch-sensitive surface. In some embodiments, the electronic device includes one or more sensors for detecting the intensity of contacts with the touch-sensitive surface. In some embodiments, the touch-sensitive surface and the display are integrated into a touch-sensitive display. In some implementations, the display is a touch screen display and the touch-sensitive surface is on or integrated with the display. In some implementations, the display is separate from the touch-sensitive surface. Some operations in method 8000 are optionally combined, and/or the order of some operations is optionally changed.
Method 8000 involves displaying an affordance over content (e.g., an affordance indicating an acceptable starting region for a gesture to perform a predefined operation in a user interface, such as displaying a home screen or an application switcher user interface) in the following manner: the same display property of the affordance is dynamically changed based on changes in the display properties (e.g., gray values or brightness values) of the underlying content. In addition, the method requires that the affordance appearance value range switch between two different value ranges (e.g., a "dark affordance" value range and a "light affordance" value range) depending on whether the change in the display attribute of the underlying content meets predefined range switching criteria. Thus, the device can adapt the appearance of the affordance to changes in the underlying content in order to maintain visual contrast between the affordance and the underlying content, and provide the affordance in a less distracting or disturbing manner (e.g., by avoiding rapid flickering of the affordance due to too rapid a change in the appearance of the affordance, which in some cases may distract the user), without being constrained by the affordance type (e.g., a "light" or "dark" affordance type) originally selected.
Providing affordances having dynamically changing appearances based on the appearances of underlying content in the manner described herein and allowing affordance appearance value ranges to dynamically switch over time based on changes in underlying content enhances operability of the device (e.g., by providing visual guidance to a user regarding desired inputs of desired results without undue interference to the user, which may reduce user errors in operating the device), and makes the user device interface more efficient (e.g., by utilizing the desired inputs to assist the user in achieving desired results and reducing user errors in operating/interacting with the device), thereby improving battery life of the device (e.g., by helping the user to use the device more quickly and efficiently). Providing affordances in the manner described herein allows on-screen affordances to effectively replace and improve hardware buttons that provide the same functionality (e.g., display home screens) in many different user interface scenarios, which helps reduce manufacturing and maintenance costs of the device. Providing affordances in the manner described herein also helps to reduce and eliminate afterimage problems with the display.
Method 8000 is performed on a device having a display and a touch-sensitive surface (e.g., a touch screen display that acts as both the display and the touch-sensitive surface). The device displays (8002) content (e.g., home screen, desktop applet screen, desktop, user interface of an application, media player user interface, etc.) and an affordance (e.g., a home affordance indicating a home gesture reaction area on the display) on the display, wherein: the affordance is displayed over a portion of the content; the value of a display attribute of the affordance (e.g., a gray value or luminance value of an image (e.g., a color image or monochrome image), an intrinsic display parameter (e.g., hue, saturation, etc. of a full-color image) other than the gray value or luminance value, or a derived display parameter calculated based on one or more intrinsic display parameters (e.g., the gray value or luminance value of a full-color image, or minor variants or equivalents thereof)) is determined based on the value of the same display attribute of the portion of the content over which the affordance is displayed; and the value of the display attribute of the content is allowed to vary within a first range of values (e.g., range [0, 1], optionally on a scale of 0% brightness to 100% brightness, wherein 0% brightness is 0 and 100% brightness is 1), while the value of the display attribute of the affordance is constrained to vary within a range of affordance appearance values that is less than the first range of values (e.g., for a dark affordance, the affordance appearance value range [0, 0.4], optionally on a black-to-white scale, wherein black is 0 and white is 1; or for a light affordance, the value range [0.6, 1], optionally on a scale of 0% brightness to 100% brightness, wherein 0% brightness is 0 and 100% brightness is 1; both ranges being smaller than the value range [0, 1] of the content).
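The constraint above, a content display attribute free to vary over [0, 1] while the affordance's attribute is confined to a narrower sub-range, can be sketched as a simple linear rescaling. The example ranges come from the text; the choice of a linear value map is an assumption (the source leaves the exact mapping open, and an inversion could be composed with it).

```python
# Example affordance appearance value ranges from the text, on a
# black-to-white scale where black is 0 and white is 1.
DARK_RANGE = (0.0, 0.4)    # "dark affordance"
LIGHT_RANGE = (0.6, 1.0)   # "light affordance"

def map_to_affordance_range(content_value, affordance_range):
    """Map a content display-attribute value in [0, 1] into the narrower
    affordance appearance range by linear rescaling, so the affordance's
    attribute can never leave its constrained sub-range."""
    lo, hi = affordance_range
    return lo + content_value * (hi - lo)
```

However bright or dark the underlying content becomes, a dark affordance stays within [0, 0.4] and a light affordance within [0.6, 1], which is what keeps each affordance type visually distinct from the full-range content.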
The device detects (8004) a change in appearance of the content over which the affordance is displayed while the content and affordance are displayed and while the affordance appearance value range is a second value range (e.g., the affordance appearance value range of the dark affordance is [0, 0.4], optionally on a black-to-white scale, where black is 0 and white is 1). In response to detecting the change in appearance of the content over which the affordance is displayed, the device changes (8006) the appearance of the affordance (e.g., based on a short-time appearance change policy and a long-time appearance change policy), including: in accordance with a determination that the change in appearance of the content meets the range switching criteria (e.g., the range switching criteria are met when a measure of the overall brightness or darkness of the content below and around the affordance (e.g., an accumulated value or aggregate value of display attributes (e.g., gray values or brightness)) exceeds a first predefined threshold due to the change in appearance of the background content): shifting the affordance appearance value range to a third range of values (e.g., a light affordance appearance value range [0.6, 1], optionally on a black-to-white scale, where black is 0 and white is 1), wherein the third range of values is different from the second range of values (e.g., the third range of values includes at least one value that is not included in, and optionally does not overlap, the second range of values) (e.g., when the range switching criteria are met, the currently selected affordance type changes from a previously selected affordance type (e.g., a dark affordance) to an alternative affordance type (e.g., a light affordance)), and the third range of values (e.g., [0.6, 1], optionally on a black-to-white scale, where black is 0 and white is 1) is less than the first range of values; and changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content over which the affordance is displayed (e.g., determining the value of the display attribute of each pixel of the affordance based on a first predefined value map corresponding to the currently selected affordance type (e.g., a light affordance)), wherein the display attribute of the affordance is constrained to vary within the range of affordance appearance values; and in accordance with a determination that the change in appearance of the content does not satisfy the range switching criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content over which the affordance is displayed (e.g., determining the value of the display attribute of each pixel of the affordance based on a second predefined conversion relationship corresponding to the currently selected affordance type (e.g., a dark affordance)), while maintaining the affordance appearance value range as the second value range (e.g., when the range switching criteria are not satisfied, the currently selected affordance type remains the same as the previously selected affordance type (e.g., a dark affordance)). This is shown, for example, in fig. 5AD to 5AE.
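The branch structure above (switch the appearance value range when the aggregate content metric crosses a threshold, otherwise keep the current range, and in either case constrain the affordance's attribute to the selected range) can be sketched as follows. The 0.5 threshold, the linear value map, and the switching direction (a dark affordance flips to light once the underlying content becomes dark, since the affordance's appearance is derived by inverting the content) are illustrative assumptions.

```python
DARK, LIGHT = (0.0, 0.4), (0.6, 1.0)  # example ranges from the text

def update_affordance(content_values, current_range, switch_threshold=0.5):
    """Decide whether to switch the affordance appearance value range based
    on the aggregate brightness of the content under the affordance, then
    return the (possibly new) range and a value constrained within it."""
    aggregate = sum(content_values) / len(content_values)
    if current_range == DARK and aggregate < switch_threshold:
        # Content has become dark overall: an inverted dark affordance would
        # lose contrast, so switch to the light affordance range.
        new_range = LIGHT
    elif current_range == LIGHT and aggregate >= switch_threshold:
        new_range = DARK
    else:
        new_range = current_range  # range switching criteria not met
    lo, hi = new_range
    value = lo + aggregate * (hi - lo)  # constrained to the selected range
    return new_range, value
```

A fuller implementation would feed `aggregate` through the trigger/completion machinery of the range switching criteria rather than switching immediately on a single sample.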
In some embodiments, the range switching criteria include (8008) range switch trigger criteria and range switch completion criteria. The range switch trigger criteria require that the change in appearance of the content include a first amount of change within a first time period that causes a predefined measure of the appearance of the content (e.g., a biased running average of a display attribute, such as an aggregate luminance value of the portions of the content underlying and surrounding the affordance) to exceed a predefined threshold (e.g., the predefined threshold is a first threshold when switching from a dark affordance to a light affordance, and is a second threshold, different from the first threshold, when switching from a light affordance to a dark affordance). The range switch completion criteria require that the change in appearance of the content not include a second amount of change, within a second time period after the first time period, that causes the range switch trigger criteria to be met again before a predefined transition period (e.g., 5 seconds) expires after the range switch trigger criteria are met. Requiring that the range switch trigger criteria not be met again within a predefined transition period after they are first met, in order to complete the switch of the affordance appearance range between the two value ranges, enhances the operability of the device (e.g., by avoiding unnecessary switching of affordance appearance types and avoiding distraction to the user when changes in the underlying content are transient) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by reducing processor and display usage caused by range switches).
In some embodiments, the predefined threshold for the range switch trigger criteria is selected (8010) based on the current value range that is used as the affordance appearance value range (e.g., the value range of the currently used affordance type), including: when the affordance appearance value range is the second value range, using a first threshold as the predefined threshold; and when the affordance appearance value range is the third value range, using a second threshold as the predefined threshold. (In some embodiments, when the affordance appearance value range is between the second value range (e.g., the value range associated with the dark affordance) and the third value range (e.g., the value range associated with the light affordance), the predefined threshold is based on whichever value range was most recently selected as the target affordance appearance value range; for example, when the light affordance value range has been selected as the target range, the light-affordance threshold is used to determine when to switch back to the dark affordance value range, and when the dark affordance value range has been selected as the target range, the dark-affordance threshold is used to determine when to switch back to the light affordance value range.) Using different thresholds for the range switch trigger criteria enhances the operability of the device (e.g., by biasing toward the currently selected affordance appearance type and avoiding unnecessary switching of affordance appearance types when changes in the underlying content are transient) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device.
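The range-dependent threshold selection amounts to hysteresis, which might be sketched as follows (hypothetical Python; the threshold values and range constants are invented for illustration):

```python
DARK_RANGE = (0.0, 0.4)    # illustrative dark affordance value range
LIGHT_RANGE = (0.6, 1.0)   # illustrative light affordance value range

def switch_threshold(current_range, from_dark=0.35, from_light=0.65):
    """Return the trigger threshold for leaving the current value range.

    Using a different threshold depending on which range is currently
    selected gives the switch hysteresis: content luminance hovering near
    a single cut-off cannot toggle the affordance type back and forth.
    """
    return from_dark if current_range == DARK_RANGE else from_light
```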
In some embodiments, changing the appearance of the affordance in response to detecting the change in appearance of the content on which the affordance is displayed includes (8012): in accordance with a determination that the range switch trigger criteria are met by the first amount of change within the first time period, and before the range switch completion criteria are met: transferring the affordance appearance value range to an intermediate value range (e.g., the intermediate value range [0.3,0.7], optionally on a black-to-white scale, where black is 0 and white is 1) that is different from the second value range and the third value range, the starting value of the intermediate value range being between the starting value of the second value range and the starting value of the third value range, and the ending value of the intermediate value range being between the ending value of the second value range and the ending value of the third value range; and changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed (e.g., determining the value of the display attribute of each pixel of the affordance based on a first predefined value map corresponding to the currently selected affordance type (e.g., a light affordance)), wherein the display attribute of the affordance is constrained to vary within the affordance appearance value range. In some embodiments, there are a plurality of intermediate value ranges between the second value range and the third value range, and the device moves sequentially through each of the plurality of intermediate value ranges within the predefined transition period until the range switch completion criteria are met.
After the range switch completion criteria are met, the display attribute of the affordance is constrained to be within the third value range when the display attribute of the affordance is changed in accordance with any additional appearance changes of the content. Transferring the affordance appearance value range to an intermediate value range that is different from (and intermediate between) the appearance value ranges of the two stable affordance appearance types, as a transition while switching between the two stable affordance appearance types, enhances the operability of the device (e.g., by making the switch between affordance appearance types more gradual and less distracting to the user) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
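The intermediate value range described above can be obtained by interpolating the endpoints of the two stable ranges; a minimal sketch (illustrative Python; linear interpolation is an assumption):

```python
def interpolated_range(second, third, progress):
    """Interpolate between two affordance appearance value ranges.

    `progress` runs from 0.0 (entirely the second value range) to 1.0
    (entirely the third value range); values in between give the
    intermediate ranges used during the predefined transition period.
    """
    (s_lo, s_hi), (t_lo, t_hi) = second, third
    return (s_lo + (t_lo - s_lo) * progress,
            s_hi + (t_hi - s_hi) * progress)
```

Halfway (`progress=0.5`) between the example dark range [0,0.4] and light range [0.6,1] gives the intermediate range [0.3,0.7] mentioned in the text.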
In some embodiments, the change in appearance of the content includes (8014) a third amount of change in a third time period that is after the first time period, after the range switch trigger criteria are met, and before the range switch completion criteria are met. For example, after the range switch trigger criteria are met, the device selects an intermediate content-to-affordance appearance inversion curve between the light affordance and dark affordance appearance inversion curves, and uses the intermediate inversion curve to determine how to change the luminance of the affordance based on the luminance change of the content during the transition period (e.g., within 5 seconds after the range switch trigger criteria are met). Allowing the appearance of the affordance to continue to change with the underlying content while the affordance appearance range transitions from one stable value range to another over a period of time enhances the operability of the device (e.g., by providing visual continuity of the affordance appearance during the switch of affordance appearance types) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
In some embodiments, the change in appearance of the content on which the affordance is displayed is (8016) caused by scrolling of the content, and the range switching criteria do not require an absence of scrolling of the content in order to be satisfied. Allowing the appearance of the affordance to change with the underlying content when the underlying content is scrolled enhances the operability of the device (e.g., by providing visual continuity of the affordance appearance when scrolling the underlying content) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
In some embodiments, the change in appearance of the content on which the affordance is displayed is (8018) caused by movement of the content underlying the affordance (e.g., scaling or scrolling of the content), and the range switching criteria require that the content underlying the affordance has moved by less than a predefined amount for at least a predetermined amount of time in order for the range switching criteria to be satisfied. For example, in addition to requiring that a predefined measure of the appearance of the content (e.g., a biased running average of a display attribute, such as an aggregate luminance value of the portions of the content underlying and surrounding the affordance) exceed a predefined threshold, the range switching criteria also require that the content remain substantially stationary for a short period of time around the time that the predefined threshold is exceeded. Requiring the content underlying the affordance to be substantially stationary (e.g., to stop scrolling) in order to meet the range switching criteria enhances the operability of the device (e.g., by avoiding unnecessary toggling back and forth of the affordance appearance type when the content is changing rapidly due to continuous scrolling) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
In some embodiments, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content includes (8020): in accordance with a determination that the value of the display attribute of the content has decreased (and optionally that the value of the display attribute of the content is within a predefined subrange of the first value range (e.g., outside of two particular ranges of content values where there are discontinuities in the affordance value)), increasing the value of the display attribute of the affordance in accordance with the magnitude of the change (e.g., decrease) in the value of the display attribute of the content (e.g., the affordance becomes brighter when the content underlying the affordance becomes darker); and in accordance with a determination that the value of the display attribute of the content has increased (and optionally that the value of the display attribute of the content is within a predefined subrange of the first value range (e.g., outside of two particular ranges of content values where there are discontinuities in the affordance value)), decreasing the value of the display attribute of the affordance in accordance with the magnitude of the change (e.g., increase) in the value of the display attribute of the content (e.g., the affordance becomes darker when the content underlying the affordance becomes brighter).
Using value inversion (e.g., increasing the value of the affordance when the content value decreases and decreasing the value of the affordance when the content value increases) to determine the value of the same display attribute of the affordance based on the value of the display attribute of the underlying content enhances the operability of the device (e.g., by providing sufficient visibility of the affordance without it being obtrusive or distracting) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
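The increase/decrease rule of operation (8020) can be sketched in delta form (hypothetical Python; the unit slope and clamping behavior are illustrative assumptions):

```python
def step_affordance(affordance_value, content_delta, value_range):
    """Move the affordance's display attribute opposite to the content.

    A decrease in the content's value increases the affordance's value by
    a corresponding magnitude (and vice versa), clamped so the affordance
    stays inside its appearance value range.
    """
    lo, hi = value_range
    return min(max(affordance_value - content_delta, lo), hi)
```

With a unit slope and no clamping, a given magnitude of content change produces the same magnitude of affordance change in either value range, which is the property stated in operation (8022).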
In some embodiments, a given magnitude of change in the value of the display attribute of the content causes (8022) the same magnitude of change in the value of the display attribute of the affordance both when the value of the display attribute of the affordance changes within the second value range (e.g., before the range switching criteria are met) and when the value of the display attribute of the affordance changes within the third value range (e.g., after the range switching criteria are met). This is shown, for example, in figs. 5AE and 5AF. In some embodiments, the content-to-affordance appearance inversion curve (e.g., the curve of affordance luminance versus background luminance) relating the display attribute of the affordance to the same display attribute of the content has the same shape for the dark affordance and the light affordance, and the luminance value of each pixel of the affordance is looked up from a database of pre-stored luminance value pairs for the currently selected affordance type. Causing the same magnitude of change in the affordance for a given magnitude of change in the content, in both value ranges, enhances the operability of the device (e.g., by providing a consistent visual relationship between the affordance and the underlying content across affordance appearance types) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
In some embodiments, the device transfers (8024) the affordance appearance value range from the second value range (e.g., the dark affordance appearance value range) to the third value range (e.g., the light affordance appearance value range), including gradually transitioning from the second value range to the third value range over a period of time (e.g., 5 seconds); and the method includes, while gradually transitioning the affordance appearance value range from the second value range to the third value range: detecting an additional appearance change of the content on which the affordance is displayed; and in response to detecting the additional appearance change of the content on which the affordance is displayed, changing the appearance of the affordance in accordance with the affordance appearance value range (e.g., as the affordance appearance value range gradually transitions over time), including: in accordance with a determination that the appearance change of the content meets the range switching criteria (e.g., the range switching criteria are met a second time), gradually transitioning the affordance appearance value range back to the second value range (e.g., changing the start and end of the value range over a period of time, such as 1, 2, 3, 4, 5, or 10 seconds); and in accordance with a determination that the appearance change of the content does not meet the range switching criteria (e.g., the range switching criteria are not met a second time), continuing to gradually transition the affordance appearance value range from the second value range to the third value range. In some embodiments, as described above, the predefined threshold for determining whether the range switching criteria are met varies based on whether the target value range of the affordance appearance value range is the second value range (e.g., for the dark affordance) or the third value range (e.g., for the light affordance).
This method allows the affordance to gradually switch from one appearance type to another over a predefined transition period; and during the predefined transition period, the appearance value of the affordance continues to change in accordance with the change in appearance value of the underlying content. Further, while the appearance value range is being transferred, the range switching criteria may again be met (e.g., due to the continuing change in appearance of the content and affordance). Thus, depending on whether the range switching criteria are again met during the transition period, the device allows the triggered range switch to be reversed or to continue to completion. The mechanism for reversing a range switch during the transition period enhances the operability of the device (e.g., by avoiding unnecessary toggling of affordance appearance types when content changes rapidly) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
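The reversible gradual transfer might be modeled as a progress value that can change target mid-flight (hypothetical Python; the step size per tick and the linear interpolation are illustrative assumptions):

```python
class RangeTransition:
    """Gradual, reversible transfer between two appearance value ranges.

    `progress` moves from 0.0 (second range) toward 1.0 (third range) a
    little on each tick. If the range switching criteria are met again
    mid-transfer, `reverse()` flips the target and the transfer turns
    around from its current state instead of jumping.
    """

    def __init__(self, second, third, step=0.25):
        self.second, self.third, self.step = second, third, step
        self.progress, self.target = 0.0, 1.0

    def reverse(self):
        self.target = 0.0 if self.target == 1.0 else 1.0

    def tick(self):
        if self.progress < self.target:
            self.progress = min(self.progress + self.step, self.target)
        elif self.progress > self.target:
            self.progress = max(self.progress - self.step, self.target)
        (s_lo, s_hi), (t_lo, t_hi) = self.second, self.third
        p = self.progress
        return (s_lo + (t_lo - s_lo) * p, s_hi + (t_hi - s_hi) * p)
```

In this sketch, two ticks toward the light range followed by a reversal and two more ticks land the range back where it started, without any discontinuous jump in the affordance's constraint range.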
In some embodiments, after changing the value of the display attribute of the affordance within the third value range in accordance with a determination that the appearance change of the content meets the range switching criteria, the device detects (8026) an additional appearance change of the content on which the affordance is displayed; and in response to detecting the additional appearance change of the content on which the affordance is displayed, the device changes the appearance of the affordance, including: in accordance with a determination that the additional appearance change of the content satisfies the range switch trigger criteria a first time without satisfying the range switch completion criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed (e.g., determining the value of the display attribute of each pixel of the affordance based on a first predefined value map corresponding to the currently selected affordance type (e.g., a light affordance)), wherein the display attribute of the affordance is constrained to change within a first intermediate value range between the second and third value ranges; in accordance with a determination that the additional appearance change of the content meets the range switch trigger criteria a second time, after the first time, without meeting the range switch completion criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed, wherein the display attribute of the affordance is constrained to change within a second intermediate value range between the third value range and the first intermediate value range; and in accordance with a determination that the additional appearance change of the content meets the range switch trigger criteria only once before meeting the range switch completion criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed, wherein the display attribute of the affordance is constrained to change within the second value range. After the affordance has switched from a first appearance type to a second appearance type (e.g., in response to the range switching criteria being met a first time), if the range switch trigger criteria are met again, the affordance may begin to switch back from the second appearance type to the first appearance type. During the predefined transition period, the appearance value range continues to transition from the value range of the second appearance type toward the value range of the first appearance type. While the appearance value range is being transferred, the range switch trigger criteria may again be met (e.g., due to the continuing change in appearance of the content and affordance). Thus, the triggered range switch may be reversed from its current state and again continue toward the value range of the second appearance type. The mechanism for reversing a range switch during the transition period enhances the operability of the device (e.g., by avoiding unnecessary toggling of affordance appearance types when content changes rapidly) and makes the user-device interface more efficient (e.g., by reducing distraction to the user and reducing user errors when operating or interacting with the device), thereby improving the battery life of the device (e.g., by allowing the user to operate the device more quickly and efficiently).
In some embodiments, when the home affordance is initially displayed (e.g., a scene cut occurs, resulting in a change in the content displayed on the screen), the device has not yet accumulated a sufficient amount of information about the underlying content to confidently select between the light affordance and the dark affordance based on the metric of the content appearance in the manner described in the various embodiments. Thus, the device optionally sets the value of the display attribute of the affordance to a predefined value, such as an intermediate value (e.g., 0.5, or 50% of the full luminance value), and then lets the dynamic algorithm described above update the affordance appearance over time (e.g., in accordance with both the short-time and long-time policies described herein). In some embodiments, when the affordance is initially displayed, the rate of progress toward switching the affordance appearance type is temporarily increased (e.g., the bias toward maintaining the currently selected affordance type is decreased) so that the home affordance quickly adapts to the appearance of the underlying content as either a light affordance or a dark affordance. In some embodiments, the case where the home affordance is initially displayed is triggered by a context switch event in the user interface, such as when a new application user interface is displayed (e.g., an application is launched via selection of an application icon on a home screen or via selection of a recently opened application in a multitasking user interface), when a cover user interface (e.g., a system-level information screen) is pulled from an edge of the display to overlay the currently displayed application user interface or home screen, or when the user interface is rotated (e.g., due to rotation of the device).
In some embodiments, the affordance is initially displayed using an animated transition that includes increasing the opacity of the affordance over time, sliding the affordance over time from an edge of the screen onto the screen, and/or increasing the size of the affordance over time.
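The fallback to an intermediate starting value can be sketched as follows (hypothetical Python; the history cutoff and the mean-inversion step are invented for illustration and are not the patent's short-time or long-time policies):

```python
def initial_affordance_value(observed_luminances, default=0.5):
    """Choose the affordance's starting display-attribute value.

    Until enough of the underlying content has been observed to choose
    between the light and dark affordance types, fall back to an
    intermediate value; thereafter, invert the observed mean (the dynamic
    update policies described in the text would take over from there).
    """
    if len(observed_luminances) < 4:     # illustrative history cutoff
        return default
    mean = sum(observed_luminances) / len(observed_luminances)
    return 1.0 - mean
```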
It should be understood that the particular order in which the operations in figs. 8A-8F have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 6000 and 7000) are also applicable in an analogous manner to method 8000 described above with respect to figs. 8A-8F. For example, the contacts, gestures, user interface objects, application views, control panels, controls, affordances, position thresholds, orientation conditions, inversion curves, filters, value ranges, navigation criteria, movement parameters, focus selectors, and/or animations described above with reference to method 8000 optionally have one or more of the characteristics of the contacts, gestures, user interface objects, application views, control panels, controls, position thresholds, orientation conditions, navigation criteria, movement parameters, focus selectors, and/or animations described herein with reference to other methods described herein (e.g., methods 6000 and 7000). For brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general purpose processor (e.g., as described above with respect to fig. 1A and 3) or an application-specific chip.
The operations described above with reference to figs. 8A-8F are optionally implemented by the components depicted in figs. 1A-1B. For example, the detection operation, the transfer operation, and the change operation are optionally implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface (or whether rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface, or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in figs. 1A-1B.
The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and the various described embodiments with various modifications as are suited to the particular use contemplated.

Claims (14)

1. A display method, comprising:
at a device having a display and a touch-sensitive surface:
displaying content and affordances on the display, wherein:
the affordance is displayed on a portion of the content;
a value of a same display attribute of the affordance is determined based on a value of a display attribute of the portion of the content on which the affordance is displayed; and
the value of the display attribute of the content is allowed to vary within a first range of values, and the value of the display attribute of the affordance is constrained to vary within a range of affordance appearance values that is less than the first range of values;
detecting a change in appearance of the content on which the affordance is displayed while the content and the affordance are displayed and while the affordance appearance value range is a second value range; and
in response to detecting the change in the appearance of the content on which the affordance is displayed, changing the appearance of the affordance, including:
in accordance with a determination that the change in appearance of the content has met a range switching criterion:
transferring the affordance appearance value range to a third value range,
wherein the third range of values is different from the second range of values and the third range of values is less than the first range of values; and
changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed, wherein the display attribute of the affordance is constrained to vary within the affordance appearance value range; and
in accordance with a determination that the change in appearance of the content has not met the range switching criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed while maintaining the affordance appearance value range as the second value range.
2. The display method according to claim 1, wherein:
the range switch criteria include a range switch trigger criteria and a range switch completion criteria,
the range switch trigger criteria require that the appearance change of the content include a first amount of change over a first period of time that causes a predefined measure of content appearance to exceed a predefined threshold, and
the range switch completion criteria require that the appearance change of the content not include a second amount of change, within a second period of time after the first period of time, that causes the range switch trigger criteria to be met again before a predefined transition period expires after the range switch trigger criteria are met.
3. The display method of claim 2, wherein the predefined threshold for the range switch trigger criteria is selected based on a current value range that is used as the affordance appearance value range, including:
when the affordance appearance value range is the second value range, using a first threshold as the predefined threshold; and
when the affordance appearance value range is the third value range, using a second threshold as the predefined threshold.
4. The display method of claim 2, wherein changing the appearance of the affordance in response to detecting the change in appearance of the content on which the affordance is displayed comprises:
in accordance with a determination that the range switch trigger criteria are met by the first amount of change within the first period of time and before the range switch completion criteria are met:
transferring the affordance appearance value range to an intermediate value range that is different from the second value range and the third value range; and
changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed, wherein the display attribute of the affordance is constrained to vary within the affordance appearance value range.
5. The display method according to claim 4, wherein the appearance change of the content includes a third amount of change in a third period of time after the first period of time, after the range switch trigger criteria is met, and before the range switch completion criteria is met.
6. The display method according to any one of claims 1 to 2, wherein the change in appearance of the content on which the affordance is displayed is caused by scrolling of the content, and the range switching criteria do not require an absence of scrolling of the content in order for the range switching criteria to be satisfied.
7. The display method of any of claims 1-2, wherein the change in appearance of the content on which the affordance is displayed is caused by movement of the content below the affordance, and the range switching criteria require that the content below the affordance move by less than a predefined amount for at least a predetermined amount of time in order to meet the range switching criteria.
8. The display method of any of claims 1-2, wherein changing the value of the same display attribute of the affordance as a function of the value of the display attribute of the content comprises:
in accordance with a determination that the value of the display attribute of the content has decreased, increasing the value of the display attribute of the affordance in accordance with a magnitude of the change in the value of the display attribute of the content; and
in accordance with a determination that the value of the display attribute of the content has increased, decreasing the value of the display attribute of the affordance in accordance with a magnitude of the change in the value of the display attribute of the content.
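The inverse relationship recited in claim 8 can be sketched as follows. This is a hypothetical illustration only, not code from the patent; the function name, the luminance-style attribute, and the clamping behavior are all assumptions:

```python
def update_affordance_value(affordance_value, content_delta, value_range):
    """Move the affordance's display attribute (e.g., a gray/luminance
    value) in the direction opposite to the change in the underlying
    content's attribute, clamped to the currently active value range."""
    lo, hi = value_range
    # Content became darker (negative delta) -> brighten the affordance;
    # content became brighter (positive delta) -> darken the affordance.
    new_value = affordance_value - content_delta
    return max(lo, min(hi, new_value))
```

Because the magnitude of the affordance change tracks the magnitude of the content change, the same function also exhibits the behavior of claim 9: a given content delta moves the affordance value by the same amount regardless of which range is active, until clamping occurs.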
9. The display method of any of claims 1-2, wherein a given magnitude of change in the value of the display attribute of the content causes the same amount of change in the value of the display attribute of the affordance whether the value of the display attribute of the affordance is changing within the second range of values or within the third range of values.
10. The display method according to any one of claims 1 to 2, wherein:
transitioning the affordance appearance value range from the second value range to the third value range includes gradually transitioning from the second value range to the third value range over a period of time; and
the method includes, while gradually transitioning the affordance appearance value range from the second value range to the third value range:
detecting an additional appearance change of the content on which the affordance is displayed; and
in response to detecting the additional appearance change of the content on which the affordance is displayed, changing the appearance of the affordance according to the affordance appearance value range, comprising:
in accordance with a determination that the appearance change of the content meets the range switching criteria, initiating a gradual transition of the affordance appearance value range back to the second value range; and
in accordance with a determination that the appearance change of the content does not meet the range switching criteria, continuing to gradually transition the affordance appearance value range from the second value range to the third value range.
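The gradual, reversible range transition described in claim 10 might be modeled as a per-frame interpolation of the active (low, high) range toward a target range; the step function and rate below are illustrative assumptions, not taken from the patent:

```python
def step_value_range(current, target, rate=0.1):
    """Advance the active value range one step toward the target range.

    Called repeatedly (e.g., once per display frame) while a range
    transition is in progress; swapping `target` back to the previous
    range reverses the transition mid-flight, as in claim 10.
    """
    lo = current[0] + (target[0] - current[0]) * rate
    hi = current[1] + (target[1] - current[1]) * rate
    return (lo, hi)
```

An interpolation of this shape never overshoots the target and can change direction at any step, which matches the claim's requirement that a transition in progress can be redirected back to the second value range.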
11. The display method according to claim 2, comprising:
detecting an additional appearance change of the content on which the affordance is displayed after changing the value of the display attribute of the affordance within the third range of values in accordance with a determination that the appearance change of the content meets the range switching criteria; and
in response to detecting the additional appearance change of the content on which the affordance is displayed, changing the appearance of the affordance includes:
in accordance with a determination that the additional appearance change of the content satisfies the range switch trigger criteria for the first time and does not satisfy the range switch completion criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed, wherein the display attribute of the affordance is constrained to change within a first intermediate range of values between the second range of values and the third range of values;
in accordance with a determination that the additional appearance change of the content meets the range switch trigger criteria a second time after the first time without meeting the range switch completion criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed, wherein the display attribute of the affordance is constrained to change within a second intermediate range of values that is intermediate between the third range of values and the first intermediate range of values; and
in accordance with a determination that the additional appearance change of the content meets the range switch trigger criteria only once before meeting the range switch completion criteria, changing the value of the same display attribute of the affordance in accordance with the value of the display attribute of the content on which the affordance is displayed, wherein the display attribute of the affordance is constrained to change within the second range of values.
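Claim 11's successive intermediate ranges can be read as each new trigger (without completion) moving the constraint range another step from the second range toward the third. The halving scheme below is purely a hypothetical way to realize that progression; the patent does not specify the step sizes:

```python
def constraint_range(trigger_count, second_range, third_range):
    """Return the value range constraining the affordance's display
    attribute after the range-switch trigger criteria have been met
    `trigger_count` times without the completion criteria being met.

    Each trigger moves the active range half of the remaining distance
    from the second range toward the third range (an assumed scheme).
    """
    def lerp(a, b, t):
        return a + (b - a) * t

    t = 1.0 - 0.5 ** trigger_count  # 0 triggers -> exactly the second range
    lo = lerp(second_range[0], third_range[0], t)
    hi = lerp(second_range[1], third_range[1], t)
    return (lo, hi)
```

Under this scheme the first trigger yields a first intermediate range between the second and third ranges, and a second trigger yields a second intermediate range between the first intermediate range and the third range, mirroring the ordering recited in the claim.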
12. An electronic device, comprising:
a display;
a touch sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-11.
13. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display and a touch-sensitive surface, cause the device to perform any of the methods of claims 1-11.
14. An electronic device, comprising:
a display;
a touch sensitive surface; and
apparatus for performing any of the methods of claims 1 to 11.
CN201880001526.8A 2017-09-09 2018-01-25 Apparatus, method and graphical user interface for displaying an affordance over a background Active CN109769396B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202311082973.XA CN117032541A (en) 2017-09-09 2018-01-25 Apparatus, method and graphical user interface for displaying an affordance over a background
CN201910756761.2A CN110456979B (en) 2017-09-09 2018-01-25 Device, method and electronic device for displaying an affordance on a background
CN202111363213.7A CN114063842A (en) 2017-09-09 2018-01-25 Apparatus, method, and graphical user interface for displaying affordances over a background

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201762556402P 2017-09-09 2017-09-09
US62/556,402 2017-09-09
DKPA201770711A DK179931B1 (en) 2017-09-09 2017-09-22 Devices, methods and graphical user interfaces for displaying an affordance on a background
DKPA201770711 2017-09-22
US15/878,276 2018-01-23
US15/878,276 US10691321B2 (en) 2017-09-09 2018-01-23 Device, method, and graphical user interface for adjusting a display property of an affordance over changing background content
PCT/US2018/015195 WO2019050562A1 (en) 2017-09-09 2018-01-25 Devices, methods, and graphical user interfaces for displaying an affordance on a background

Related Child Applications (3)

Application Number Title Priority Date Filing Date
CN202111363213.7A Division CN114063842A (en) 2017-09-09 2018-01-25 Apparatus, method, and graphical user interface for displaying affordances over a background
CN202311082973.XA Division CN117032541A (en) 2017-09-09 2018-01-25 Apparatus, method and graphical user interface for displaying an affordance over a background
CN201910756761.2A Division CN110456979B (en) 2017-09-09 2018-01-25 Device, method and electronic device for displaying an affordance on a background

Publications (2)

Publication Number Publication Date
CN109769396A CN109769396A (en) 2019-05-17
CN109769396B true CN109769396B (en) 2023-09-01

Family

ID=61163854

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201880001526.8A Active CN109769396B (en) 2017-09-09 2018-01-25 Apparatus, method and graphical user interface for displaying an affordance over a background
CN201910756761.2A Active CN110456979B (en) 2017-09-09 2018-01-25 Device, method and electronic device for displaying an affordance on a background

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910756761.2A Active CN110456979B (en) 2017-09-09 2018-01-25 Device, method and electronic device for displaying an affordance on a background

Country Status (2)

Country Link
CN (2) CN109769396B (en)
WO (1) WO2019050562A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3286915B1 (en) 2015-04-23 2021-12-08 Apple Inc. Digital viewfinder user interface for multiple cameras
US10009536B2 (en) 2016-06-12 2018-06-26 Apple Inc. Applying a simulated optical effect based on data received from multiple camera sensors
DK180859B1 (en) 2017-06-04 2022-05-23 Apple Inc USER INTERFACE CAMERA EFFECTS
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
DK201870623A1 (en) 2018-09-11 2020-04-15 Apple Inc. User interfaces for simulated depth effects
US11770601B2 (en) 2019-05-06 2023-09-26 Apple Inc. User interfaces for capturing and managing visual media
US11321857B2 (en) 2018-09-28 2022-05-03 Apple Inc. Displaying and editing images with depth information
US11128792B2 (en) 2018-09-28 2021-09-21 Apple Inc. Capturing and displaying images with multiple focal planes
JP2020098420A (en) * 2018-12-17 2020-06-25 ソニー株式会社 Image processing apparatus, image processing method and program
CN113518148A (en) * 2019-05-06 2021-10-19 苹果公司 User interface for capturing and managing visual media
US11706521B2 (en) 2019-05-06 2023-07-18 Apple Inc. User interfaces for capturing and managing visual media
CN111263002B (en) * 2020-01-19 2022-08-26 华为技术有限公司 Display method and electronic equipment
CN115185431A (en) * 2020-06-01 2022-10-14 苹果公司 User interface for managing media
US11039074B1 (en) 2020-06-01 2021-06-15 Apple Inc. User interfaces for managing media
US11539876B2 (en) 2021-04-30 2022-12-27 Apple Inc. User interfaces for altering visual media
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104487929A (en) * 2012-05-09 2015-04-01 苹果公司 Device, method, and graphical user interface for displaying additional information in response to a user contact
CN104885050A (en) * 2012-12-29 2015-09-02 苹果公司 Device, method, and graphical user interface for determining whether to scroll or select contents

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US8091038B1 (en) * 2006-11-29 2012-01-03 Adobe Systems Incorporated Adaptive graphical interface
US7970206B2 (en) * 2006-12-13 2011-06-28 Adobe Systems Incorporated Method and system for dynamic, luminance-based color contrasting in a region of interest in a graphic image
US9952756B2 (en) * 2014-01-17 2018-04-24 Intel Corporation Dynamic adjustment of a user interface
US20160246475A1 (en) * 2015-02-22 2016-08-25 Microsoft Technology Licensing, Llc Dynamic icon recoloring to improve contrast
US9880735B2 (en) * 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback


Non-Patent Citations (1)

Title
Research and Implementation of Key Technologies of an Open Cockpit Display System; Cao Meng et al.; 《航空计算技术》 (Aeronautical Computing Technique); 2011-07-15 (No. 04); pp. 78-81 *

Also Published As

Publication number Publication date
CN110456979A (en) 2019-11-15
WO2019050562A1 (en) 2019-03-14
CN109769396A (en) 2019-05-17
CN110456979B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN109769396B (en) Apparatus, method and graphical user interface for displaying an affordance over a background
US11119642B2 (en) Device, method, and graphical user interface for adjusting a display property of an affordance over changing background content
US11079929B2 (en) Devices, methods, and graphical user interfaces for navigating between user interfaces, displaying a dock, and displaying system user interface elements
US10976917B2 (en) Devices and methods for interacting with an application switching user interface
US11797150B2 (en) Devices, methods, and graphical user interfaces for navigating between user interfaces, displaying a dock, and displaying system user interface elements
US11275502B2 (en) Device, method, and graphical user interface for displaying user interfaces and user interface overlay elements
KR102503076B1 (en) Devices, methods, and graphical user interfaces for navigating between user interfaces, displaying the dock, and displaying system user interface elements
US20240045564A1 (en) Devices, Methods, and Graphical User Interfaces for Navigating Between User Interfaces, Displaying a Dock, and Displaying System User Interface Elements
AU2020102351A4 (en) Devices, methods, and graphical user interfaces for displaying an affordance on a background
EP3559795B1 (en) Devices, methods, and graphical user interfaces for displaying an affordance on a background
US11966578B2 (en) Devices and methods for integrating video with user interface navigation
US20190369862A1 (en) Devices and Methods for Integrating Video with User Interface Navigation
CN117321560A (en) System and method for interacting with a user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant