CN108829325B - Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output

Info

Publication number
CN108829325B
Authority
CN
China
Prior art keywords
contact
media item
media
input
characteristic intensity
Prior art date
Legal status
Active
Application number
CN201810369048.8A
Other languages
Chinese (zh)
Other versions
CN108829325A (en)
Inventor
N·德夫雷斯
D·C·格拉哈姆
F·A·安祖雷斯
M·阿朗索鲁伊斯
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from DKPA201670599A (DK179033B1)
Priority claimed from DKPA201670597A (DK179034B1)
Application filed by Apple Inc
Publication of CN108829325A
Application granted
Publication of CN108829325B

Classifications

    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F9/451 Execution arrangements for user interfaces

Abstract

The invention provides an apparatus, method and graphical user interface for dynamically adjusting the presentation of audio output. An electronic device displays, on a display, a user interface that includes a first interaction region and a second interaction region of an application. While displaying the user interface, the device detects an input caused by a contact on the touch-sensitive surface at a location corresponding to a user interface element in the first interaction region. In response to detecting the input, when the input satisfies an intensity-based activation criterion (e.g., the contact has a characteristic intensity above an intensity threshold), the device obscures the first interaction region of the application, other than the user interface element, without obscuring the second interaction region of the application. When the input satisfies a selection criterion (e.g., the contact has a characteristic intensity below the intensity threshold), the device performs a selection operation corresponding to the user interface element without obscuring the first interaction region of the application.

Description

Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output
RELATED APPLICATIONS
The present application is a divisional application of the inventive patent application with application number 201710364610.3, filed May 22, 2017, and entitled "Apparatus, method, and graphical user interface for dynamically adjusting presentation of audio output".
Technical Field
The present invention relates generally to electronic devices that provide audio output, and more particularly to devices, methods, and graphical user interfaces that dynamically adjust the presentation of audio output.
Background
Some electronic devices utilize an audiovisual interface as a way of providing feedback regarding a user's interaction with the device.
Disclosure of Invention
However, some electronic devices provide audiovisual feedback in a limited, inefficient, and frustrating manner. For example, some methods interrupt and stop providing audio that the user is currently listening to (e.g., a lecture) and abruptly switch to some other audio (e.g., audio associated with a short video message). These abrupt transitions distract the user (causing the user to lose focus and to have to replay some portion of the audio they were listening to), force the user to perform additional inputs to return to the audio they were listening to, force the user to disable certain audio-based effects, and create additional frustration. Conventional electronic devices also waste energy by requiring the user to perform these additional inputs and/or replay portions of the audio.
Some electronic devices utilize an audiovisual interface as a way of providing feedback regarding a user's interaction with the device. However, some electronic devices provide audiovisual feedback in a limited, inefficient, and frustrating manner. For example, some methods provide predetermined audio feedback in response to user interaction with a graphical user interface element (e.g., provide an audible tone in response to a user typing a number on a keypad for a telephone application). This predetermined audio feedback does not change based on user interaction, forcing the user to repeatedly hear the same predetermined and invariant audio feedback. Thus, many users disable certain audio-based effects and/or delete certain applications that have become too annoying.
Therefore, there is a need for electronic devices with more efficient methods and interfaces for providing audiovisual feedback. Such methods and interfaces optionally complement or replace conventional methods for providing audiovisual feedback. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user and produce a more efficient human-machine interface (e.g., by dynamically fading audio outputs together, the embodiments disclosed herein allow a user to effectively preview a new audio output without having to abruptly stop listening to the current audio output). Moreover, such methods reduce the processing power consumed to process touch inputs, conserve power (thereby increasing the time between battery charges), reduce unnecessary/extraneous/repetitive inputs, and potentially reduce memory usage.
According to some embodiments, a method is performed at an electronic device in communication with a display and an audio system. The method includes providing first sound information to the audio system to present a first audio output, the first audio output having a volume and audio attributes other than volume. The method also includes, while the audio system is presenting the first audio output, receiving an input corresponding to a request to present a second audio output. The method also includes, in response to receiving the input corresponding to the request to present the second audio output: providing information to the audio system to dynamically adjust presentation of the first audio output in accordance with a magnitude of the input, wherein dynamically adjusting presentation of the first audio output includes dynamically adjusting a non-volume audio attribute as the magnitude of the input changes; and providing second sound information to the audio system to present the second audio output concurrently with the first audio output.
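For illustration only (this sketch is not part of the original disclosure), the dynamic adjustment described in the preceding paragraph might be modeled in Swift roughly as follows, where the non-volume attribute is taken to be stereo balance and all names, ranges, and mappings are hypothetical:

import Foundation

// Hypothetical presentation state for two concurrently presented audio outputs.
struct ConcurrentPresentation {
    var firstOutputPan: Double     // non-volume attribute: -1 (left) ... 0 (center) ... 1 (right)
    var firstOutputVolume: Double  // 0 ... 1
    var secondOutputVolume: Double // 0 ... 1
}

// `magnitude` is a normalized magnitude of the input (e.g., swipe length or
// contact intensity), clamped to 0...1. As the magnitude changes, the first
// output's pan (a non-volume attribute) is adjusted while the second output
// fades in concurrently.
func presentation(forInputMagnitude magnitude: Double) -> ConcurrentPresentation {
    let m = min(max(magnitude, 0), 1)
    return ConcurrentPresentation(
        firstOutputPan: -m,               // first output drifts toward the left channel
        firstOutputVolume: 1.0 - 0.5 * m, // ducked, but never fully silenced
        secondOutputVolume: m             // second output fades in with the input
    )
}
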
According to some embodiments, a method is performed at an electronic device in communication with a display and an audio system. The method includes, while displaying on the display a user interface that includes a set of one or more affordances, detecting, at a first point in time, a first input directed to a first affordance of the set of one or more affordances. The method also includes, in response to detecting the first input directed to the first affordance, beginning to provide first sound information to the audio system to present a first audio output corresponding to the first affordance, where the first audio output has a first audio profile. The method also includes detecting, at a second point in time after the first point in time, a second input directed to a second affordance of the set of one or more affordances. The method also includes, in response to detecting the second input directed to the second affordance: in accordance with a determination that audio modification criteria are met: causing the audio system to present an altered first audio output corresponding to the first affordance, rather than continuing to present the first audio output with the first audio profile, wherein the altered first audio output has an altered audio profile that is different from the first audio profile; and providing second sound information to the audio system to present a second audio output corresponding to the second affordance, wherein the second audio output has a second audio profile. The method also includes, in accordance with a determination that the audio modification criteria are not satisfied: causing the audio system to continue to present the first audio output corresponding to the first affordance and having the first audio profile; and providing third sound information to the audio system to present a third audio output corresponding to the second affordance, wherein the third audio output has a third audio profile.
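A minimal Swift sketch of one possible reading of the audio modification criteria above (not part of the original disclosure; the 0.25-second window, profile values, and type names are invented for illustration):

import Foundation

// Hypothetical audio profile for affordance feedback.
struct AudioProfile {
    var volume: Double
    var durationSeconds: Double
}

final class AffordanceFeedback {
    // Audio modification criteria used in this sketch: the second activation
    // arrives within 0.25 s of the first, while the first output may still be playing.
    private let modificationWindow: TimeInterval = 0.25
    private var lastActivationTime: Date?

    private let firstProfile   = AudioProfile(volume: 1.0, durationSeconds: 0.50)
    private let alteredProfile = AudioProfile(volume: 0.4, durationSeconds: 0.15)
    private let secondProfile  = AudioProfile(volume: 0.8, durationSeconds: 0.30)
    private let thirdProfile   = AudioProfile(volume: 1.0, durationSeconds: 0.50)

    // Returns the profile to apply to the still-playing previous output and the
    // profile for the newly requested output.
    func profiles(forActivationAt now: Date = Date()) -> (previous: AudioProfile, new: AudioProfile) {
        defer { lastActivationTime = now }
        if let last = lastActivationTime, now.timeIntervalSince(last) < modificationWindow {
            // Criteria met: the still-playing first output is cut short (altered
            // profile) and the new output is presented with the second profile.
            return (alteredProfile, secondProfile)
        }
        // Criteria not met: the first output keeps its first profile and the new
        // output is presented with the third profile.
        return (firstProfile, thirdProfile)
    }
}
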
In accordance with some embodiments, a method is performed on an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method includes displaying, on the display, a user interface that includes a representation of a media item. The method also includes, while displaying the user interface, detecting an input caused by a contact at a location on the touch-sensitive surface that corresponds to the representation of the media item. The method also includes, in response to detecting the input caused by the contact: in accordance with a determination that the input satisfies media-prompting criteria, wherein the media-prompting criteria include a criterion that is met when the contact has a characteristic intensity above a first intensity threshold: beginning to play a respective portion of the media item; and, while the media item is playing, dynamically changing a set of one or more audio attributes of the media item as the characteristic intensity of the contact changes. The method also includes, in accordance with a determination that the input does not satisfy the media-prompting criteria, forgoing beginning to play the respective portion of the media item and forgoing dynamically changing the set of one or more audio attributes of the media item as the characteristic intensity of the contact changes.
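The intensity-driven preview logic above might look roughly like the following Swift sketch (illustrative only; the threshold value, the normalization, and the choice of volume as the tracked attribute are assumptions, not part of the original disclosure):

import Foundation

// Minimal model of the media-prompting behavior: playback starts only once the
// characteristic intensity exceeds a first threshold, and while the preview
// plays, an audio attribute (here, volume) tracks the changing intensity.
struct MediaHintController {
    let firstIntensityThreshold: Double = 0.3   // hypothetical
    private(set) var isPlayingPreview = false
    private(set) var previewVolume: Double = 0

    // Called whenever the characteristic intensity of the contact changes.
    mutating func update(characteristicIntensity intensity: Double) {
        if !isPlayingPreview {
            guard intensity > firstIntensityThreshold else { return }  // criteria not yet met
            isPlayingPreview = true                                    // begin playing the respective portion
        }
        let span = 1.0 - firstIntensityThreshold
        previewVolume = min(max((intensity - firstIntensityThreshold) / span, 0), 1)
    }
}
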
In accordance with some embodiments, a method is performed on an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method includes displaying, on the display, a user interface that includes a first interaction region of an application and a second interaction region of the application. The method also includes, while displaying the user interface, detecting a first input caused by a contact on the touch-sensitive surface at a location corresponding to a first user interface element in the first interaction region. The method also includes, in response to detecting the first input caused by the contact: in accordance with a determination that the first input satisfies intensity-based activation criteria, which require the contact to have a characteristic intensity above a first intensity threshold in order for the intensity-based activation criteria to be met, obscuring the first interaction region of the application, other than the first user interface element, without obscuring the second interaction region of the application. The method also includes, in accordance with a determination that the first input satisfies first selection criteria, which do not require the contact to have a characteristic intensity above the first intensity threshold in order for the first selection criteria to be met, performing a first selection operation corresponding to the first user interface element without obscuring the first interaction region of the application.
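As an illustration (not part of the original disclosure; the threshold and all names are hypothetical), the branch between the intensity-based activation criteria and the selection criteria could be expressed as:

import Foundation

enum InputResponse {
    case obscureFirstRegionExceptElement  // intensity-based activation criteria met
    case performSelection                 // first selection criteria met
    case none
}

// Classifies an input directed at a user interface element in the first
// interaction region. Only the intensity-based branch requires the intensity
// to exceed the first threshold; selection does not.
func respond(toCharacteristicIntensity intensity: Double,
             contactLiftedOff: Bool,
             firstIntensityThreshold: Double = 0.5) -> InputResponse {
    if intensity > firstIntensityThreshold {
        return .obscureFirstRegionExceptElement
    }
    if contactLiftedOff {
        return .performSelection
    }
    return .none
}
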
According to some embodiments, an electronic device in communication with a display and an audio system includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing, or causing the performance of, the operations of any of the methods described herein. According to some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by an electronic device in communication with a display and an audio system, cause the device to perform or cause to be performed the operations of any of the methods described herein. According to some embodiments, a graphical user interface on an electronic device comprising one or more processors and memory for executing one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described herein that are updated in response to an input, as described in any of the methods described herein. According to some embodiments, an electronic device in communication with a display and an audio system comprises means for performing, or causing to be performed, operations of any of the methods described herein. According to some embodiments, an information processing apparatus for use in an electronic device in communication with a display and an audio system comprises means for performing, or causing to be performed, operations of any of the methods described herein.
Accordingly, electronic devices in communication with displays and audio systems are provided with faster, more efficient methods and interfaces for providing audio feedback and obscuring audio, thereby improving the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace conventional methods for providing audio feedback and obscuring audio.
In accordance with some embodiments, an electronic device includes a display, a touch-sensitive surface, optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing, or causing the performance of, the operations of any of the methods described herein. According to some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by an electronic device with a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to perform or cause performance of the operations of any of the methods described herein. According to some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, a memory, and one or more processors to execute one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described herein, which are updated in response to an input, as described in any of the methods described herein. According to some embodiments, an electronic device comprises: a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface; and means for performing or causing to be performed the operations of any method described herein. In accordance with some embodiments, an information processing apparatus for use in an electronic device with a display and a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, includes means for performing, or causing to be performed, operations of any method described herein.
Accordingly, electronic devices having a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface are provided with faster, more efficient methods and interfaces for providing audio feedback and obscuring audio, thereby improving the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace conventional methods for providing audio feedback and obscuring audio.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description, taken in conjunction with the following drawings, in which like reference numerals refer to corresponding parts throughout.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments.
FIG. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
FIG. 4A illustrates an exemplary user interface of an application menu on a portable multifunction device according to some embodiments.
FIG. 4B illustrates an exemplary user interface of a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 4C through 4E illustrate examples of dynamic intensity thresholds according to some embodiments.
FIG. 5 illustrates an exemplary electronic device in communication with a display and a touch-sensitive surface, where for at least a subset of the electronic devices the display and/or the touch-sensitive surface are integrated into the electronic device, in accordance with some embodiments.
Fig. 6A-6Y illustrate exemplary user interfaces for dynamically adjusting the presentation of audio output and optionally dynamically adjusting the visual presentation of the user interface, according to some embodiments.
Fig. 7A-7G illustrate exemplary user interfaces for providing audio output based on an audio profile, according to some embodiments.
Fig. 8A-8B are flow diagrams illustrating methods of dynamically adjusting presentation of audio output, according to some embodiments.
Fig. 8C is a flow diagram illustrating a method of dynamically adjusting the presentation of audio output, according to some embodiments.
Fig. 8D through 8F are flow diagrams illustrating methods of dynamically adjusting the presentation of audio output, according to some embodiments.
Fig. 8G-8H are flow diagrams illustrating methods of dynamically adjusting a visual presentation of a user interface, according to some embodiments.
Fig. 9A-9C are flow diagrams illustrating methods of providing audio output based on an audio profile, according to some embodiments.
Detailed Description
The disclosed embodiments provide methods and devices for dynamically adjusting the presentation of audio output. More specifically, in some embodiments, the devices provided herein preview audio content by adjusting an audio attribute (e.g., a volume or non-volume attribute) according to a magnitude of a user input (e.g., an intensity of a contact on a touch-sensitive surface and/or a length of a swipe gesture). In some embodiments, while the device is providing a first audio output (e.g., playing a first song), the device "fades in" a second audio output (e.g., a second song) according to the magnitude of the user input. In some implementations, fading in the second audio output includes dynamically adjusting a non-volume property of the first audio output (e.g., stereo balance, or the cutoff frequency of a low-pass filter applied to the first audio output) based on the magnitude of the input.
Consider an example in which a song is being played by a music application on a user's device. At the same time, the user may open a messaging application that is displaying a video message (imagine that the video has not yet been played and the user must press an image of the video to play it). In some embodiments, the user can play a "prompt" (preview) of the video message with a press-and-hold gesture on the video message. The prompt ducks the song (e.g., by decreasing the volume and decreasing the cutoff frequency of the low-pass filter as the intensity of the press-and-hold gesture increases) and fades in the video message (by increasing its volume as the intensity of the press-and-hold gesture increases). In some embodiments, above a particular contact intensity, the device plays the video message at full volume and plays the song at a low volume while filtering out high-frequency components of the song, so that the song is heard only as a gentle, muffled rumble in the background.
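For illustration only, a minimal Swift sketch of the ducking described in this example (the specific volume curve, cutoff range, and type names are assumptions and are not part of the original disclosure):

import Foundation

// How the background song and the previewed video message are rendered at a
// given, normalized press intensity (0 = prompt begins, 1 = full preview).
struct DuckedMix {
    var songVolume: Double
    var songLowPassCutoffHz: Double
    var videoVolume: Double
}

func duckedMix(forNormalizedIntensity t: Double) -> DuckedMix {
    let m = min(max(t, 0), 1)
    return DuckedMix(
        songVolume: 1.0 - 0.8 * m,                        // song is ducked but keeps playing
        songLowPassCutoffHz: 20_000 - (20_000 - 500) * m, // high frequencies are filtered out
        videoVolume: m                                    // video message fades in
    )
}

// Example: halfway between the prompt threshold and a full-intensity press.
let mix = duckedMix(forNormalizedIntensity: 0.5)
print(mix.songVolume, mix.songLowPassCutoffHz, mix.videoVolume)  // 0.6 10250.0 0.5
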
According to some embodiments, methods and apparatus for modifying audio feedback are also provided. For example, when activating a respective affordance causes a device to generate audio feedback, activating the respective affordance twice in rapid succession causes the device to alter the audio feedback.
Fig. 1A through 1B, 2, and 3 provide a description of exemplary devices, below.
Fig. 4A-4B and 6A-6O illustrate exemplary user interfaces for "cross-fading" audio output (e.g., when background audio is already playing) according to the magnitude of the user input. Fig. 8A-8B and 8C illustrate two methods of "cross-fading" the audio output (e.g., when the background audio is already playing) depending on the magnitude of the user input. The user interfaces in fig. 6A to 6O are used to illustrate the processes in fig. 8A to 8C.
Fig. 4A-4B and 6P-6Y illustrate exemplary user interfaces for dynamically adjusting attributes of an audio output (e.g., prompts at the audio output) according to a magnitude of a user input. Fig. 8D-8F illustrate a method of dynamically adjusting a property of an audio output (e.g., a cue at the audio output) according to a magnitude of a user input. The user interfaces in fig. 6P to 6Y are used to illustrate the processes in fig. 8D to 8F.
Fig. 4A-4B and 6P-6Y also illustrate exemplary user interfaces for providing visual feedback (e.g., visual blur). Fig. 8G-8H are flow diagrams illustrating a method of providing visual feedback (e.g., visual blur), according to some embodiments. The user interfaces in fig. 6P to 6Y are used to illustrate the processes in fig. 8G to 8H.
Exemplary device
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of various described embodiments. It will be apparent, however, to one skilled in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements in some cases, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact can be termed a second contact, and, similarly, a second contact can be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact unless the context clearly indicates otherwise.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is optionally to be interpreted to mean "when … …" ("where" or "upon") or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined … …" or "if [ stated condition or event ] is detected" is optionally to be construed to mean "upon determination … …" or "in response to determination … …" or "upon detection of [ stated condition or event ] or" in response to detection of [ stated condition or event ] ", depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. (Cupertino, California). Other portable electronic devices are optionally used, such as laptops or tablets with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads). It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports a variety of applications, such as one or more of the following: a note application, a drawing application, a presentation application, a word processing application, a website creation application, a disc editing application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or varied from application to application and/or within respective applications. In this way, a common physical architecture of the device (such as a touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and clear to the user.
Attention is now directed to embodiments of portable devices having touch sensitive displays. FIG. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes referred to as a "touch screen" for convenience, and is sometimes simply referred to as a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, an input/output (I/O) subsystem 106, other input or control devices 116, and an external port 124. The device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 (e.g., a touch-sensitive surface, such as touch-sensitive display system 112 of device 100) for detecting the intensity of contacts on device 100. Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touch pad 355 of device 300). These components optionally communicate via one or more communication buses or signal lines 103.
As used in this specification and claims, the term "tactile output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., the housing), or displacement of a component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or a component of the device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation, such as a "down click" or "up click," even when a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements does not move. As another example, movement of the touch-sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in the smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the user's individualized sensory perceptions, many sensory perceptions of touch are common to most users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., a "down click," an "up click," "roughness"), unless otherwise stated, the generated tactile output corresponds to a physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be understood that device 100 is merely one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 1A are implemented in hardware, software, firmware, or any combination thereof, including one or more signal processing circuits and/or application specific integrated circuits.
The memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as one or more CPUs 120 and peripheral interface 118, is optionally controlled by a memory controller 122.
Peripheral interface 118 may be used to couple the input and output peripherals of the device to one or more CPUs 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data.
In some embodiments, peripherals interface 118, one or more CPUs 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also called electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks and other devices via wireless communication, such as the internet, also known as the World Wide Web (WWW), intranets and/or wireless networks such as a cellular telephone network, a wireless Local Area Network (LAN), and/or a Metropolitan Area Network (MAN). The wireless communication optionally uses any of a number of communication standards, protocols, and techniques, including, but not limited to, global system for mobile communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), evolution, data-only (EV-DO), HSPA +, Dual-cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), wideband code division multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet protocol (VoIP), Wi-MAX, protocols for email (e.g., Internet Message Access Protocol (IMAP), and/or Post Office Protocol (POP)) Instant messaging (e.g., extensible messaging and presence protocol (XMPP), session initiation protocol with extensions for instant messaging and presence (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol including communication protocols not yet developed by the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to the speaker 111. The speaker 111 converts the electrical signals into human-audible sound waves. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuit 110 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral, such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as a touch-sensitive display system 112 and other input or control devices 116, to a peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/transmit electrical signals from/to other input or control devices 116. Other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and the like. In some alternative embodiments, one or more input controllers 160 are optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, a stylus, and/or a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of the speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
Touch-sensitive display system 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch-sensitive display system 112 and/or transmits electrical signals to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object. As used herein, the term "affordance" refers to a user-interactive graphical user interface object (e.g., a graphical user interface object configured to respond to input directed to the graphical user interface object). Examples of user interactive graphical user interface objects include, but are not limited to, buttons, sliders, icons, selectable menu items, switches, hyperlinks, or other user interface controls.
Touch-sensitive display system 112 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on tactile sensation and/or tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch-sensitive display system 112. In an exemplary embodiment, the point of contact between touch-sensitive display system 112 and the user corresponds to a user's finger or a stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In one exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. (Cupertino, California).
Touch sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touchscreen video resolution exceeds 400dpi (e.g., 500dpi, 800dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, in addition to a touch screen, device 100 optionally includes a touch pad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike a touch screen, does not display visual output. The touchpad is optionally a touch-sensitive surface separate from touch-sensitive display system 112, or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components associated with power generation, management, and distribution in portable devices.
The device 100 optionally further includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The one or more optical sensors 164 optionally include Charge Coupled Devices (CCDs) or Complementary Metal Oxide Semiconductor (CMOS) phototransistors. The one or more optical sensors 164 receive light projected through the one or more lenses from the environment and convert the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), one or more optical sensors 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100 opposite touch-sensitive display system 112 on the front of the device, enabling the touch screen to be used as a viewfinder for still and/or video image capture. In some embodiments, another optical sensor is located on the front of the device to capture images of the user (e.g., for self-photography, for video conferencing while the user is viewing other video conference participants on a touch screen, etc.).
Device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The one or more contact intensity sensors 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). One or more contact intensity sensors 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some implementations, at least one contact intensity sensor is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100 opposite touch screen display system 112 located on the front of device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled with the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled to the input controller 160 in the I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., the user is making a phone call).
Device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a tactile output generator coupled to haptic feedback controller 161 in I/O subsystem 106. The one or more tactile output generators 167 optionally include one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating components (e.g., components that convert electrical signals into tactile outputs on the device). The one or more tactile output generators 167 receive tactile feedback generation instructions from haptic feedback module 133 and generate tactile outputs on device 100 that can be felt by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled with peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled with input controller 160 in I/O subsystem 106. In some embodiments, information is displayed in a portrait view or a landscape view on the touch screen display based on analysis of data received from one or more accelerometers. Device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to the one or more accelerometers 168 for obtaining information about the position and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a haptic feedback module (or set of instructions) 133, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, an application program (or set of instructions) 136, and audio-specific modules (including an audio preview module 163-1, an audio modification module 163-2, and an audio alteration module 163-3). Further, in some embodiments, memory 102 stores device/global internal state 157, as shown in fig. 1A and 3. Device/global internal state 157 includes one or more of: an active application state indicating which applications (if any) are currently active; display state indicating what applications, views, or other information occupy various regions of touch-sensitive display system 112; sensor states including information obtained from various sensors of the device and other input or control devices 116; and position and/or location information regarding the device position and/or pose.
Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OSX, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124 and also includes various software components for processing data received by the RF circuitry 108 and/or the external ports 124. The external port 124 (e.g., Universal Serial Bus (USB), firewire, etc.) is adapted to couple directly to other devices or indirectly via a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPhone® and iPod® devices from Apple Inc. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone® and iPod® devices from Apple Inc.
Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or a physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to the detection of contact (e.g., by a finger or by a stylus), such as determining whether contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., single-finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multi-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
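For illustration (not part of the original disclosure; the sample type and field names are hypothetical), determining speed and velocity from a series of contact data could be sketched as:

import CoreGraphics
import Foundation

// One sample of contact data tracked across the touch-sensitive surface.
struct ContactSample {
    var position: CGPoint
    var timestamp: TimeInterval
}

// Velocity (magnitude and direction, in points per second) between two
// consecutive samples of the same contact.
func velocity(from a: ContactSample, to b: ContactSample) -> CGVector {
    let dt = b.timestamp - a.timestamp
    guard dt > 0 else { return CGVector(dx: 0, dy: 0) }
    return CGVector(dx: (b.position.x - a.position.x) / dt,
                    dy: (b.position.y - a.position.y) / dt)
}

// Speed is the magnitude of the velocity vector.
func speed(of v: CGVector) -> CGFloat {
    (v.dx * v.dx + v.dy * v.dy).squareRoot()
}
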
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at an icon location). In another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event. Similarly, taps, swipes, drags, and other gestures are optionally detected for the stylus by detecting a particular contact pattern of the stylus.
In some embodiments, detecting a finger tap gesture depends on the length of time between detecting a finger-down event and a finger-up event, but is independent of the intensity of the finger contact between the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected based on a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), regardless of whether the intensity of the finger contact during the tap satisfies a given intensity threshold (greater than a nominal contact detection intensity threshold), such as a light press intensity threshold or a deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require that the characteristic intensity of the contact satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically still needs to satisfy the nominal contact detection intensity threshold, below which no contact is detected, in order for a finger-down event to be detected. A similar analysis applies to detecting a tap gesture made with a stylus or other contact. Where the device is capable of detecting a finger or stylus contact hovering above the touch-sensitive surface, the nominal contact detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
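The duration-based tap detection described above can be sketched as follows. This is a hedged, minimal Swift example: the event type, the nominal contact detection threshold value, and the 0.3-second cutoff are assumptions chosen for illustration, not values specified by this disclosure.

```swift
// Hedged sketch: a tap is recognized from the time between finger-down and
// finger-up alone; contact intensity only has to clear the nominal
// contact-detection threshold, not a light- or deep-press threshold.

struct FingerEvent {
    let isDown: Bool       // true = finger-down, false = finger-up (lift-off)
    let timestamp: Double  // seconds
    let intensity: Double  // arbitrary units reported by the intensity sensors
}

let nominalContactDetectionThreshold = 0.05  // below this, no contact is detected at all
let tapMaximumDuration = 0.3                 // e.g. 0.1-0.5 s, per the description above

func isTap(down: FingerEvent, up: FingerEvent) -> Bool {
    guard down.isDown, !up.isDown else { return false }
    // The contact must exist at all ...
    guard down.intensity >= nominalContactDetectionThreshold else { return false }
    // ... but recognition depends only on duration, not on whether the
    // intensity ever reached a light-press or deep-press threshold.
    return (up.timestamp - down.timestamp) < tapMaximumDuration
}
```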
The same concept applies in a similar manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a depinch gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of the intensities of the contacts included in the gesture or do not require that one or more contacts performing the gesture reach an intensity threshold in order for the gesture to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts toward each other; a depinch gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of a contact on the touch-sensitive surface with less than a threshold amount of movement. Thus, the statement that particular gesture recognition criteria do not require that the intensity of one or more contacts satisfy a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria can be satisfied if the one or more contacts in the gesture do not reach the respective intensity threshold, and can also be satisfied if one or more of the contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that the finger-down and finger-up events are detected within a predefined time period, regardless of whether the contact is above or below the respective intensity threshold during that time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where the detection of a gesture is affected by the intensity of the contacts performing the gesture (e.g., the device detects a long press sooner when the intensity of the contact is above an intensity threshold, or delays detection of a tap input when the intensity of the contact is higher), the detection of those gestures does not require the contacts to reach a particular intensity threshold, so long as the criteria for recognizing the gesture can be met without the contacts reaching that particular intensity threshold (e.g., even if the amount of time it takes to recognize the gesture changes).
In some cases, contact intensity thresholds, duration thresholds, and movement thresholds are combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element can provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria do not require that the intensity of one or more contacts satisfy a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other, intensity-dependent gesture recognition criteria that identify other gestures which do have a criterion that is met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some cases, first gesture recognition criteria for a first gesture (which do not require the intensity of one or more contacts to satisfy a respective intensity threshold in order to be met) are in competition with second gesture recognition criteria for a second gesture (which depend on the one or more contacts reaching the respective intensity threshold). In such competition, the gesture is optionally not recognized as satisfying the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are satisfied first. For example, if a contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such cases, the first gesture recognition criteria for the first gesture still do not require that the intensity of the one or more contacts satisfy the respective intensity threshold in order to be met, because if the contact remained below the respective intensity threshold until the end of the gesture (e.g., a swipe gesture with a contact whose intensity never increases above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. Thus, particular gesture recognition criteria that do not require the intensity of one or more contacts to satisfy a respective intensity threshold in order to be met will (A) in some cases ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some cases still depend on the intensity of the contact with respect to the intensity threshold, in the sense that if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognizes the input as corresponding to the intensity-dependent gesture before the particular gesture recognition criteria (e.g., for a long press gesture) recognize a gesture corresponding to the input, the particular gesture recognition criteria (e.g., for the long press gesture) will fail.
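The competition between an intensity-dependent recognizer and an intensity-independent recognizer described above can be illustrated with a small sketch. In this hedged Swift example, a deep press recognizer and a swipe recognizer examine the same stream of contact samples, and whichever set of criteria is satisfied first claims the gesture; the threshold values and type names are illustrative assumptions.

```swift
// Hedged sketch of the competition described above: a deep-press recognizer
// (intensity-dependent) races a swipe recognizer (movement-dependent).

enum RecognizedGesture { case deepPress, swipe, none }

struct Sample {
    let intensity: Double  // characteristic intensity of the contact
    let movement: Double   // accumulated movement of the contact, in points
}

let deepPressIntensityThreshold = 0.8
let swipeMovementThreshold = 10.0

func recognize(_ samples: [Sample]) -> RecognizedGesture {
    for s in samples {
        // If the intensity threshold is reached before the movement threshold,
        // the deep press wins and the swipe criteria are never consulted again.
        if s.intensity >= deepPressIntensityThreshold { return .deepPress }
        // Conversely, enough movement before the intensity threshold yields a swipe,
        // even if the contact later exceeds the intensity threshold.
        if s.movement >= swipeMovementThreshold { return .swipe }
    }
    return .none
}
```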
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual properties) of displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including, but not limited to, text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 132 stores data to be used to represent graphics. Each graphic is optionally assigned a corresponding code. The graphics module 132 receives one or more codes specifying graphics to be displayed, coordinate data and other graphics attribute data if necessary, from an application program or the like, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions for use by one or more haptic output generators 167 to produce haptic outputs at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides soft keys for entering text in various applications, such as contacts 137, email 140, IM 141, browser 147, and any other application that requires text input.
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for location-based dialing, to the camera 143 as photo/video metadata, and to applications that provide location-based services, such as weather desktop applets, local yellow pages desktop applets, and map/navigation desktop applets).
The applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
a contacts module 137 (sometimes referred to as an address book or contact list);
a phone module 138;
a video conferencing module 139;
an email client module 140;
an Instant Messaging (IM) module 141;
a news module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a browser module 147;
a calendar module 148;
desktop applet module 149, optionally including one or more of: a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm desktop applet 149-4, a dictionary desktop applet 149-5 and other desktop applets acquired by the user, and a user created desktop applet 149-6;
A desktop applet creator module 150 for making a user-created desktop applet 149-6;
a search module 151;
a video and music player module 152, optionally consisting of a video player module and a music player module;
a notepad module 153;
a map module 154; and/or
Online video module 155.
Examples of other applications 136 that are optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, rendering applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions for managing contact lists or contact lists (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating one or more telephone numbers, one or more email addresses, one or more physical addresses, or other information with a name; associating the image with a name; classifying and ordering names; providing a telephone number and/or email address to initiate and/or facilitate communications over telephone 138, video conference 139, email 140, or IM 141; and so on.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, phone module 138 includes executable instructions for entering a sequence of characters corresponding to a phone number, accessing one or more phone numbers in address book 137, modifying an entered phone number, dialing a corresponding phone number, conducting a conversation, and disconnecting or hanging up when the conversation is complete. As noted above, the wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephony module 138, video conference module 139 includes executable instructions for initiating, conducting, and terminating video conferences between the user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send an email with a still image or a video image captured by the camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant message module 141 includes executable instructions for entering a sequence of characters corresponding to an instant message, modifying previously entered characters, sending a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephone-based instant messages or using XMPP, SIMPLE, Apple Push Notification Services (APNs) or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments supported in MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with touch-sensitive display system 112, display controller 156, contact/motion module 130, and graphics module 132, news module 142 includes executable instructions for displaying user-specific news articles (e.g., articles collected from a variety of publication sources based on user-specific preferences) and allowing a user to interact with the user-specific news articles (or interact with portions of content contained within the user-specific news articles).
In conjunction with touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for capturing still images or video (including video streams) and storing them in memory 102, modifying features of the still images or video, and/or deleting the still images or video from memory 102.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the internet (including searching for, linking to, receiving, and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet module 149 is a mini-application (e.g., weather desktop applet 149-1, stock market desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) or a mini-application created by a user (e.g., user-created desktop applet 149-6) that is optionally downloaded and used by the user. In some embodiments, the desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, the desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applet).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, desktop applet creator module 150 includes executable instructions for creating a desktop applet (e.g., turning a user-specified portion of a web page into a desktop applet).
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, videos, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speakers 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch-sensitive display system 112 or on an external display connected wirelessly or via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of apple inc).
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions for creating and managing notepads, backlogs, and the like according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions for receiving, displaying, modifying, and storing maps and data associated with maps (e.g., driving directions; data about stores and other points of interest at or near a particular location; and other location-based data) according to user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuit 110, speaker 111, RF circuit 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes executable instructions that allow a user to access, browse, receive (e.g., through streaming media and/or download), playback (e.g., on touch screen 112 or on an external display connected wirelessly or via external port 124), send an email with a link to a particular online video, and otherwise manage online video in one or more file formats, such as h.264. In some embodiments, the link to a particular online video is sent using instant messaging module 141 instead of email client module 140.
As depicted in fig. 1A, portable multifunction device 100 also includes an audio output providing module 163 for providing sound information to the audio system so that the audio system can present audio output (e.g., as shown in fig. 5, the audio system can be included in portable multifunction device 100 or separate from portable multifunction device 100). The audio output provision module 163 optionally includes the following modules (or sets of instructions), or a subset or superset thereof:
an audio profile 402 for storing information about audio characteristics (e.g., audio envelope characteristics, pitch characteristics, left-right (L-R) balance characteristics, reverberation curves, frequency filtering characteristics) corresponding to or generated in response to, for example, user interaction with the portable multifunction device 100;
an audio preview module 163-1 comprising executable instructions for providing information to play a preview of audio content (e.g., a particular song) and optionally adjusting the presentation of different content (e.g., different songs) (e.g., blending the two songs together) in response to a request to present a first audio output;
an audio modification module 163-2 comprising executable instructions for altering a first audio output (e.g., an audio output resulting from a first user input directed to an affordance in a user interface) in response to detecting a second user input directed to an affordance (e.g., the same affordance or a different affordance) in the user interface. In some embodiments, audio modification module 163-2 modifies the corresponding audio profile 402; and
An audio modification criteria module 163-3 comprising executable instructions for determining whether the audio modification module 163-2 should modify the first audio output based on audio modification criteria.
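One possible shape for the audio profile 402 and a modification it might undergo can be sketched as follows. This Swift example is a hedged illustration only: the field names, value ranges, and the muffling behavior are assumptions for this document, not the structure actually used by the audio output providing module 163.

```swift
// Hedged sketch: one possible representation of the audio characteristics
// grouped in an audio profile, plus an example modification.

struct AudioProfile {
    var envelope: [Double]        // amplitude envelope samples
    var pitchShift: Double        // semitones relative to the source material
    var leftRightBalance: Double  // -1.0 (full left) ... 1.0 (full right)
    var reverberation: Double     // wet/dry mix, 0.0 ... 1.0
    var lowPassCutoffHz: Double   // frequency-filtering characteristic
}

// An audio modification module might adjust a profile in response to a second
// user input, e.g. progressively muffling the first audio output.
func muffle(_ profile: inout AudioProfile, by amount: Double) {
    profile.lowPassCutoffHz = max(200.0, profile.lowPassCutoffHz * (1 - amount))
    profile.reverberation = min(1.0, profile.reverberation + 0.2 * amount)
}
```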
Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device on which the operation of a predefined set of functions is performed exclusively through a touch screen and/or a touchpad. The number of physical input control devices (such as push buttons, dials, etc.) on device 100 is optionally reduced by using a touch screen and/or touchpad as the primary input control device for operation of device 100.
The predefined set of functions that are performed exclusively through the touchscreen and/or touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 from any user interface displayed on device 100 to a main menu, home menu, or root menu. In such embodiments, the touchpad is used to implement a "menu button". In some other embodiments, the menu button is a physical push button or other physical input control device rather than a touchpad.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments. In some embodiments, memory 102 (in fig. 1A) or 370 (fig. 3) includes event classifier 170 (e.g., in operating system 126) and corresponding application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
Event sorter 170 receives the event information and determines application 136-1 and application view 191 of application 136-1 to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, application 136-1 includes an application internal state 192 that indicates one or more current application views that are displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event classifier 170 to determine which application(s) are currently active, and application internal state 192 is used by event classifier 170 to determine the application view 191 to which to deliver event information.
In some embodiments, the application internal state 192 includes additional information, such as one or more of: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information being displayed by the application 136-1 or information that is ready for display by the application 136-1, a state queue for enabling a user to return to a previous state or view of the application 136-1, and a repeat/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system 112 as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or sensors such as proximity sensor 166, one or more accelerometers 168, and/or microphone 113 (via audio circuitry 110). Information received by peripheral interface 118 from I/O subsystem 106 includes information from touch-sensitive display system 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, peripheral interface 118 transmits the event information. In other embodiments, peripheral interface 118 transmits event information only when there is a significant event (e.g., receiving input above a predetermined noise threshold and/or exceeding a predetermined duration).
In some embodiments, event classifier 170 further includes hit view determination module 172 and/or active event recognizer determination module 173.
When touch-sensitive display system 112 displays more than one view, hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a programmatic level within a programmatic or view hierarchy of applications. For example, the lowest level view in which a touch is detected is optionally referred to as a hit view, and the set of events considered as correct inputs is optionally determined based at least in part on the hit view of the initial touch that initiated the touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the contact-based gesture. When the application has multiple views organized in a hierarchy, the hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once a hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
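The hit view determination described above amounts to a search for the deepest view in the hierarchy whose bounds contain the location of the initiating sub-event. The following Swift sketch is a hedged illustration under simplifying assumptions (a minimal View type, a single shared coordinate space), not the framework's actual hit testing API.

```swift
// Hedged sketch: find the hit view as the lowest view in the hierarchy whose
// bounds contain the location of the initiating sub-event. All frames are
// assumed to share one coordinate space for simplicity.

final class View {
    let frame: (x: Double, y: Double, width: Double, height: Double)
    let subviews: [View]
    init(frame: (x: Double, y: Double, width: Double, height: Double), subviews: [View] = []) {
        self.frame = frame
        self.subviews = subviews
    }
    func contains(x: Double, y: Double) -> Bool {
        return x >= frame.x && x < frame.x + frame.width &&
               y >= frame.y && y < frame.y + frame.height
    }
}

/// Returns the deepest view containing the point, or nil if none does.
func hitView(in root: View, x: Double, y: Double) -> View? {
    guard root.contains(x: x, y: y) else { return nil }
    // Prefer a subview that also contains the point; otherwise this view is the hit view.
    for sub in root.subviews.reversed() {   // front-most subviews first
        if let deeper = hitView(in: sub, x: x, y: y) { return deeper }
    }
    return root
}
```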
The active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of the sub-event are actively participating views, and thus determines that all actively participating views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely confined to the area associated with a particular view, the higher views in the hierarchy will remain actively participating views.
The event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments that include active event recognizer determination module 173, event dispatcher module 174 delivers event information to event recognizers determined by active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue, which is retrieved by the respective event receiver module 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, application 136-1 includes event classifier 170. In yet another embodiment, the event classifier 170 is a stand-alone module or is part of another module stored in the memory 102, such as the contact/motion module 130.
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, where each application view includes instructions for handling touch events occurring within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit (not shown) or a higher level object from which the application 136-1 inherits methods and other properties. In some embodiments, the respective event handlers 190 include one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177 or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more corresponding event handlers 190. Additionally, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
The corresponding event recognizer 180 receives event information (e.g., event data 179) from the event classifier 170 and recognizes events from the event information. The event recognizer 180 includes an event receiver 182 and an event comparator 184. In some embodiments, the event identifier 180 further comprises at least a subset of: metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
The event receiver 182 receives event information from the event classifier 170. The event information includes information about a sub-event such as a touch or touch movement. According to the sub-event, the event information further includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information optionally also includes the velocity and direction of the sub-event. In some embodiments, the event comprises rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event 187 include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double tap on a displayed object. For example, the double tap includes a first touch (touch start) on the displayed object for a predetermined length of time, a first lift-off (touch end) for a predetermined length of time, a second touch (touch start) on the displayed object for a predetermined length of time, and a second lift-off (touch end) for a predetermined length of time. In another example, the definition of event 2 (187-2) is a drag on a displayed object. For example, the drag includes a touch (or contact) on the displayed object for a predetermined length of time, movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
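The double-tap definition described above can be expressed as a check over a sequence of timed sub-events. The following Swift sketch is a hedged illustration: the sub-event representation and the 0.35-second window are assumptions, and a real event comparator 184 would also compare locations and additional sub-event types.

```swift
// Hedged sketch: matching the double-tap definition against a sequence of sub-events.

enum SubEvent {
    case touchBegan(time: Double)  // touch start on the displayed object
    case touchEnded(time: Double)  // lift-off (touch end)
}

let doubleTapMaxInterval = 0.35    // assumed "predetermined length of time"

/// Returns true if the sequence is touch start, touch end, touch start, touch end,
/// with each stage following the previous one within the allowed interval.
func matchesDoubleTap(_ events: [SubEvent]) -> Bool {
    guard events.count == 4 else { return false }
    var times: [Double] = []
    for (index, event) in events.enumerated() {
        switch event {
        case .touchBegan(let t) where index % 2 == 0: times.append(t)
        case .touchEnded(let t) where index % 2 == 1: times.append(t)
        default: return false      // wrong kind of sub-event at this position
        }
    }
    for i in 1..<times.count {
        let gap = times[i] - times[i - 1]
        if gap < 0 || gap >= doubleTapMaxInterval { return false }
    }
    return true
}
```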
In some embodiments, event definition 187 includes definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit-test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a corresponding event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event 187 further includes a delay action that delays the delivery of the event information until it has been determined whether the sequence of sub-events does or does not correspond to the event type of the event identifier.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively participating event recognizers. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are identified, the respective event identifier 180 activates an event handler 190 associated with the event. In some embodiments, the respective event identifier 180 delivers event information associated with the event to the event handler 190. Activating the event handler 190 is different from sending (and deferring) sub-events to the corresponding hit view. In some embodiments, the event recognizer 180 throws a marker associated with the recognized event, and the event handler 190 associated with the marker retrieves the marker and performs a predefined process.
In some embodiments, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the sequence of sub-events or to actively participating views. Event handlers associated with the sequence of sub-events or with actively participating views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates a phone number used in contacts module 137 or stores a video file used in video player module 145. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user interface object or updates the location of a user interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
In some embodiments, one or more event handlers 190 include or have access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the above discussion of event processing with respect to user touches on a touch sensitive display also applies to other forms of user input utilizing an input device to operate multifunction device 100, not all of which are initiated on a touch screen. For example, optionally with mouse movement and mouse button presses, optionally in combination with single or multiple keyboard presses or holds; contact movements on the touchpad, such as taps, drags, scrolls, and the like; inputting by a stylus; movement of the device; verbal instructions; a detected eye movement; a biometric input; and/or any combination thereof as inputs corresponding to sub-events defining the event to be identified.
FIG. 2 illustrates a portable multifunction device 100 with a touch screen (e.g., touch-sensitive display system 112 of FIG. 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within the User Interface (UI) 200. In this embodiment, as well as other embodiments described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figures) or with one or more styluses 203 (not drawn to scale in the figures). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 100. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to the selection is a tap.
Device 100 optionally also includes one or more physical buttons, such as a "home" button or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen display 112.
In some embodiments, device 100 includes a touch screen display, menu buttons 204, a push button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. The push button 206 is optionally used to power on/off the device by pressing the button and holding the button in the pressed state for a predefined time interval; locking the device by pressing a button and releasing the button before a predefined time interval has elapsed; and/or unlocking the device or initiating an unlocking process. In some embodiments, device 100 also accepts voice input through microphone 113 for activating or deactivating certain functions. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on touch-sensitive display system 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
FIG. 3 is a block diagram of an example of a multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop, desktop, tablet, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 optionally includes circuitry (sometimes called a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output (I/O) interface 330 with a display 340, typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 (e.g., similar to one or more tactile output generators 167 described above with reference to fig. 1A) for generating tactile outputs on device 300, sensor 359 (e.g., an optical sensor, an acceleration sensor, a proximity sensor, a touch-sensitive sensor, and/or a contact intensity sensor similar to one or more contact intensity sensors 165 described above with reference to fig. 1A). Memory 370 comprises high-speed random access memory such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and optionally non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from one or more CPUs 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1A), or a subset thereof. Further, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above identified elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of a user interface ("UI") optionally implemented on portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on portable multifunction device 100 according to some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
one or more signal strength indicators 402 for one or more wireless communications, such as cellular signals and Wi-Fi signals;
Time 404;
a Bluetooth indicator 405;
a battery status indicator 406;
a tray 408 with icons for commonly used applications, such as:
an icon 416 of the phone module 138 labeled "phone", the icon 416 optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of the email client module 140 labeled "mail", the icon 418 optionally including an indicator 410 of the number of unread emails;
icon 420 of the browser module 147, labeled "browser"; and
icon 422 labeled "iPod" for video and music player module 152 (also known as iPod (trademark of Apple inc.) module 152); and
icons for other applications, such as:
icon 424 of IM module 141 labeled "message";
icon 426 of calendar module 148 labeled "calendar";
icon 428 of image management module 144 labeled "photo";
icon 430 of camera module 143 labeled "camera";
icon 432 for online video module 155 labeled "online video";
an icon 434 of the stock market desktop applet 149-2 labeled "stock market";
Icon 436 of map module 154 labeled "map";
icon 438 labeled "weather" for weather desktop applet 149-1;
icon 440 of alarm clock desktop applet 149-4 labeled "clock";
icon 442 labeled "news" for news module 142;
icon 444 of notepad module 153 labeled "notepad"; and
an icon 446 of an application or module is set, the icon 446 providing access to the settings of the device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely examples. For example, in some embodiments, icon 422 of video and music player module 152 is labeled "music" or "music player". Other tabs are optionally used for various application icons. In some embodiments, the label of the respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 in fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 in fig. 3) separate from the display 450. Device 300 also optionally includes one or more intensity sensors (e.g., one or more of sensors 359) for detecting the intensity of contacts on touch-sensitive surface 451, and/or one or more tactile output generators 359 for generating tactile outputs for a user of device 300. Although many of the embodiments that follow will be given with reference to input on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to a primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations corresponding to respective locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). As such, when the touch-sensitive surface (e.g., 451 in fig. 4B) is separated from the display (e.g., 450 in fig. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462 and movements thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
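The correspondence between locations on a separate touch-sensitive surface and locations on the display can be illustrated with a simple scaling along each primary axis. The following Swift sketch is a hedged illustration; the surface and display dimensions are arbitrary assumptions.

```swift
// Hedged sketch: mapping a contact location on a separate touch-sensitive
// surface to the corresponding location on the display, axis by axis.

struct Size { let width: Double; let height: Double }

let surface = Size(width: 120, height: 80)    // e.g. a trackpad, in its own units
let display = Size(width: 1440, height: 960)  // the separate display, in points

/// Maps a contact location on the touch-sensitive surface to display coordinates.
func mapToDisplay(x: Double, y: Double) -> (x: Double, y: Double) {
    return (x / surface.width * display.width,
            y / surface.height * display.height)
}

// A contact at the center of the surface manipulates the center of the display.
let center = mapToDisplay(x: 60, y: 40)        // (720.0, 480.0)
```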
Additionally, while the following embodiments are presented primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced by a mouse click (e.g., instead of a contact), followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or that mouse and finger contacts are optionally used simultaneously.
As used herein, the term "focus selector" refers to an input element that indicates the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or the touch screen in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a contact detected on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of the user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user intends to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user intends to activate the respective button (as opposed to other user interface elements shown on the display of the device).
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or sum) to determine an estimated force of the contact. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area and/or changes thereto detected on the touch-sensitive surface, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or changes thereto and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or changes thereto are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). In some implementations, the substitute measurement of contact force or pressure is converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as an attribute of the user input, allowing the user to access additional device functionality that the user would otherwise not have readily accessible on a smaller sized device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls such as knobs or buttons).
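As one illustration of combining force measurements from multiple sensors into an estimated contact force, the following Swift sketch computes a weighted average in which sensors nearer the contact contribute more. The reading type, the distance-based weighting function, and the units are assumptions for illustration only.

```swift
// Hedged sketch: combining readings from several force sensors beneath the
// touch-sensitive surface into one estimated contact force, weighting each
// sensor by its proximity to the contact.

struct ForceSensorReading {
    let force: Double              // raw force measured by this sensor
    let distanceToContact: Double  // distance from the sensor to the contact, in mm
}

/// Weighted average: nearer sensors contribute more to the estimate.
func estimatedContactForce(_ readings: [ForceSensorReading]) -> Double {
    let weighted = readings.map { reading -> (Double, Double) in
        let weight = 1.0 / (1.0 + reading.distanceToContact)
        return (reading.force * weight, weight)
    }
    let totalWeight = weighted.reduce(0) { $0 + $1.1 }
    guard totalWeight > 0 else { return 0 }
    return weighted.reduce(0) { $0 + $1.0 } / totalWeight
}
```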
In some embodiments, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether the user has performed an operation (e.g., to determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 100). For example, the mouse "click" threshold of a trackpad or touchscreen display may be set to any one of a wide range of predefined thresholds without changing the trackpad or touchscreen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by simultaneously adjusting multiple intensity thresholds using a system-level click "intensity" parameter).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to or after detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at half maximum of the intensities of the contact, a value at 90 percent maximum of the intensities of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact whose characteristic intensity does not exceed the first threshold results in a first operation, a contact whose characteristic intensity exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact whose characteristic intensity exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than to determine whether to perform a first operation or a second operation.
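The characteristic-intensity computation described in the preceding paragraph can be sketched as a small aggregation helper. This is a minimal illustration rather than the claimed implementation; the sample-window length, the particular aggregation rules offered, and all names here are assumptions chosen for the example.

```swift
import Foundation

/// One intensity sample reported by the touch hardware.
struct IntensitySample {
    let timestamp: TimeInterval   // seconds
    let intensity: Double         // normalized force/pressure surrogate
}

enum Aggregation {
    case maximum, mean, top10Percent
}

/// Characteristic intensity over a predefined window (e.g., the last 0.1 s
/// before the event of interest), using one of the aggregation rules above.
func characteristicIntensity(samples: [IntensitySample],
                             endingAt eventTime: TimeInterval,
                             window: TimeInterval = 0.1,
                             rule: Aggregation = .mean) -> Double {
    let recent = samples
        .filter { $0.timestamp > eventTime - window && $0.timestamp <= eventTime }
        .map { $0.intensity }
    guard !recent.isEmpty else { return 0 }
    switch rule {
    case .maximum:
        return recent.max()!
    case .mean:
        return recent.reduce(0, +) / Double(recent.count)
    case .top10Percent:
        // Average of the highest 10% of samples (at least one sample).
        let sorted = recent.sorted(by: >)
        let count = max(1, Int(Double(sorted.count) * 0.1))
        return sorted.prefix(count).reduce(0, +) / Double(count)
    }
}
```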
In some implementations, a portion of the gesture is identified for purposes of determining the characteristic intensity. For example, the touch-sensitive surface may receive a continuous swipe contact transitioning from a start location and reaching an end location (e.g., a drag gesture), at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some implementations, a smoothing algorithm may be applied to the intensities of the swipe gesture prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining the characteristic intensity.
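A hedged sketch of the smoothing step follows: an unweighted sliding-average filter and a median filter applied to swipe-intensity samples before the characteristic intensity is computed. The window sizes are arbitrary assumptions for illustration.

```swift
/// Unweighted sliding-average smoothing of intensity samples (window of n values).
func slidingAverage(_ values: [Double], window n: Int = 5) -> [Double] {
    guard n > 1, values.count >= n else { return values }
    return values.indices.map { i in
        let lo = max(0, i - n + 1)
        let slice = values[lo...i]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

/// Median-filter smoothing, which removes narrow spikes or dips.
func medianFilter(_ values: [Double], window n: Int = 3) -> [Double] {
    guard n > 1, values.count >= n else { return values }
    let half = n / 2
    return values.indices.map { i in
        let lo = max(0, i - half), hi = min(values.count - 1, i + half)
        let sorted = values[lo...hi].sorted()
        return sorted[sorted.count / 2]
    }
}
```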
The user interface figures described herein (e.g., FIGS. 6A-6Y and 7A-7G) optionally include various intensity diagrams that show the current intensity of a contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., that is at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITH that is lower than ITL)). The intensity diagram is typically not part of the displayed user interface, but is provided to assist in explaining the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a characteristic intensity below the light press intensity threshold is detected (e.g., and above a nominal contact detection intensity threshold IT0 below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise noted, these intensity thresholds are consistent between different sets of user interface figures.
In some embodiments, the response of the device to an input detected by the device depends on criteria based on the intensity of the contact during the input. For example, for some "tap" inputs, the intensity of the contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to an input detected by the device depends on criteria including both the intensity of contact during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of a contact that exceeds the second intensity threshold and is greater than the first intensity threshold of a light press during the input triggers a second response only if a delay time has elapsed between the first intensity threshold being met and the second intensity threshold being met. The delay time is typically less than 200ms in duration (e.g., 40ms, 100ms, or 120ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps avoid accidental deep press inputs. As another example, for some "deep press" inputs, a period of reduced sensitivity may occur after the first intensity threshold is met. During this period of reduced sensitivity, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to the detection of the deep press input does not depend on time-based criteria.
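The time-based deep-press criteria described above (a required delay between meeting the first and second thresholds, plus a temporary period of reduced sensitivity during which the second threshold is raised) might be modeled roughly as follows. The threshold values, delay, and boost amounts are illustrative assumptions, not values taken from this description.

```swift
import Foundation

struct DeepPressCriteria {
    var lightThreshold: Double = 0.3       // first (light press) threshold
    var deepThreshold: Double = 0.7        // second (deep press) threshold
    var requiredDelay: TimeInterval = 0.1  // e.g., on the order of 40-120 ms
    var sensitivityBoost: Double = 0.15    // temporary raise after the light press
    var reducedSensitivityPeriod: TimeInterval = 0.2
}

/// Returns true if a "deep press" response should trigger at `now`,
/// given when the first (light) threshold was first satisfied.
func shouldTriggerDeepPress(intensity: Double,
                            now: TimeInterval,
                            lightThresholdMetAt: TimeInterval?,
                            criteria: DeepPressCriteria = DeepPressCriteria()) -> Bool {
    guard let t0 = lightThresholdMetAt else { return false }
    // Time-based criterion: a delay must elapse between meeting the two thresholds.
    guard now - t0 >= criteria.requiredDelay else { return false }
    // During a period of reduced sensitivity the second threshold is raised.
    let effectiveDeep = (now - t0 < criteria.reducedSensitivityPeriod)
        ? criteria.deepThreshold + criteria.sensitivityBoost
        : criteria.deepThreshold
    return intensity >= effectiveDeep
}
```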
In some embodiments, one or more of the input intensity thresholds and/or corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application execution, rate of intensity application, number of concurrent inputs, user history, environmental factors (e.g., environmental noise), focus selector position, and so forth. Exemplary factors are described in U.S. patent application serial nos. 14/399,606 and 14/624,296, which are incorporated herein by reference in their entirety.
For example, FIG. 4C illustrates a dynamic intensity threshold 480 that varies over time based in part on the intensity of the touch input 476 over time. The dynamic intensity threshold 480 is the sum of two components: a first component 474 that decays over time after a predefined delay time p1 since the touch input 476 was initially detected; and a second component 478 that tracks the intensity of the touch input 476 over time. The initial high intensity threshold of the first component 474 reduces accidental triggering of a "deep press" response while still allowing for an immediate "deep press" response if the touch input 476 provides sufficient intensity. The second component 478 reduces inadvertent triggering of a "deep press" response by gradual intensity fluctuations in the touch input. In some embodiments, a "deep press" response is triggered when the touch input 476 meets a dynamic intensity threshold 480 (e.g., at point 481 in fig. 4C).
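The two-component dynamic threshold of FIG. 4C can be approximated as a decaying term plus a term that trails the touch intensity, as in the sketch below. The decay curve, constants, and names are assumptions for illustration only; a deep press would trigger when the touch intensity meets or exceeds the returned value.

```swift
import Foundation

/// Sketch of a dynamic "deep press" threshold built from two components:
/// one that decays after a delay p1, and one that trails the touch intensity.
struct DynamicThreshold {
    var baseOffset: Double = 0.5        // initial height of the decaying component
    var decayDelay: TimeInterval = 0.1  // p1: no decay before this much time
    var decayRate: Double = 2.0         // exponential decay constant after p1
    var trailingFactor: Double = 0.6    // how strongly the threshold tracks intensity
    var floor: Double = 0.25            // never drop below a static deep threshold

    func value(at t: TimeInterval, trailingIntensity: Double) -> Double {
        // First component: high at first, decays once the delay has elapsed,
        // which reduces accidental triggering while allowing an immediate
        // deep press if the input is forceful enough.
        let decaying: Double = t <= decayDelay
            ? baseOffset
            : baseOffset * exp(-decayRate * (t - decayDelay))
        // Second component: lags the recent intensity of the touch input,
        // so gradual intensity fluctuations do not trigger a deep press.
        let trailing = trailingFactor * trailingIntensity
        return max(floor, decaying + trailing)
    }
}
```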
FIG. 4D illustrates another dynamic intensity threshold 486 (e.g., intensity threshold ID). FIG. 4D also shows two other intensity thresholds: a first intensity threshold IH and a second intensity threshold IL. In FIG. 4D, although the touch input 484 meets the first intensity threshold IH and the second intensity threshold IL before time p2, no response is provided until delay time p2 has elapsed at time 482. Also in FIG. 4D, the dynamic intensity threshold 486 decays over time, where the decay begins at time 488 after a predefined delay time p1 has elapsed from time 482 (when the response associated with the second intensity threshold IL was triggered). This type of dynamic intensity threshold reduces accidental triggering of a response associated with the dynamic intensity threshold ID immediately after, or concurrently with, triggering a response associated with a lower intensity threshold, such as the first intensity threshold IH or the second intensity threshold IL.
FIG. 4E shows yet another dynamic intensity threshold 492 (e.g., intensity threshold ID). In FIG. 4E, a response associated with the intensity threshold IL is triggered after a delay time p2 has elapsed from when the touch input 490 is initially detected. Concurrently, the dynamic intensity threshold 492 decays after a predefined delay time p1 has elapsed from when the touch input 490 is initially detected. Thus, a decrease in the intensity of the touch input 490 after triggering the response associated with the intensity threshold IL, followed by an increase in the intensity of the touch input 490 without releasing the touch input 490, can trigger a response associated with the intensity threshold ID (e.g., at time 494), even when the intensity of the touch input 490 is below another intensity threshold (e.g., the intensity threshold IL).
An increase in the characteristic intensity of the contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase in the characteristic intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase in the characteristic intensity of the contact from an intensity below the contact detection intensity threshold IT0 to an intensity between the contact detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch surface. A decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold IT0 to an intensity below the contact detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, a shaded circle or oval is used to represent the intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or oval without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.
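As a rough illustration of how these threshold crossings could be classified, the sketch below maps a change in characteristic intensity to one of the events named above; the numeric threshold values are assumptions and not taken from the figures.

```swift
enum PressEvent {
    case contactDetected, lightPress, deepPress, liftoff, none
}

/// Interprets a change in characteristic intensity against the thresholds
/// named in the description (IT0, ITL, ITD). Threshold values are assumed.
func classify(previous: Double, current: Double,
              it0: Double = 0.05, itL: Double = 0.3, itD: Double = 0.7) -> PressEvent {
    switch (previous, current) {
    case let (p, c) where p < it0 && c >= it0 && c < itL:
        return .contactDetected   // contact detected on the touch surface
    case let (p, c) where p < itL && c >= itL && c < itD:
        return .lightPress        // "light press" input
    case let (p, c) where p < itD && c >= itD:
        return .deepPress         // "deep press" input
    case let (p, c) where p >= it0 && c < it0:
        return .liftoff           // liftoff of the contact detected
    default:
        return .none
    }
}
```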
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting that the intensity of the respective contact increases above the press input intensity threshold (e.g., performing the respective operation on a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press input threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., an increase in intensity of the contact or a decrease in intensity of the contact, depending on the circumstances).
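A minimal sketch of the hysteresis behavior described above follows, assuming a hysteresis threshold at 75% of the press-input threshold; the struct and its names are illustrative, not this document's implementation.

```swift
/// Press detection with hysteresis: the press begins when intensity rises to or
/// above the press-input threshold, and ends only when intensity falls to or below
/// a lower hysteresis threshold, which suppresses accidental "jitter".
struct HysteresisDetector {
    let pressThreshold: Double
    let hysteresisRatio = 0.75       // assumed: 75% of the press-input threshold
    private(set) var isPressed = false

    /// Returns "down" on the down-stroke and "up" on the up-stroke, nil otherwise.
    mutating func update(intensity: Double) -> String? {
        let releaseThreshold = pressThreshold * hysteresisRatio
        if !isPressed && intensity >= pressThreshold {
            isPressed = true
            return "down"   // a respective operation may be performed here
        }
        if isPressed && intensity <= releaseThreshold {
            isPressed = false
            return "up"     // or here, on the up-stroke
        }
        return nil
    }
}
```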
For ease of explanation, optionally, a description of an operation performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is triggered in response to detecting: the intensity of the contact increases above the press input intensity threshold, the intensity of the contact increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the intensity of the contact decreases below the press input intensity threshold, or the intensity of the contact decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples in which operations are described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operations are optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold that corresponds to and is less than the press input intensity threshold. As described above, in some embodiments, the triggering of these responses is also dependent on the time-based criteria being met (e.g., a delay time has elapsed between the first intensity threshold being met and the second intensity threshold being met).
FIG. 5 illustrates an exemplary electronic device in communication with a display 450 and a touch-sensitive surface 451. For at least a subset of the electronic devices, the display 450 and/or the touch-sensitive surface 451 are integrated into the electronic device, according to some embodiments. Although the examples described in more detail below are described with reference to touch-sensitive surface 451 and display 450 being in communication with an electronic device (e.g., portable multifunction device 100 in fig. 1A-1B or device 300 in fig. 3), it should be understood that, according to some embodiments, the touch-sensitive surface and/or display is integrated with the electronic device, while in other embodiments, one or more of the touch-sensitive surface and display are separate from the electronic device. Additionally, in some embodiments, the electronic device has an integrated display and/or an integrated touch-sensitive surface and communicates with one or more additional displays and/or touch-sensitive surfaces that are separate from the electronic device.
In some embodiments, all of the operations described below with reference to FIGS. 6A-6Y and 7A-7G are performed on a single electronic device (e.g., computing device A described below with reference to FIG. 5) having user interface navigation logic 483. However, it should be understood that a number of different electronic devices are typically linked together to perform the operations described below with reference to FIGS. 6A-6Y and 7A-7G (e.g., an electronic device with user interface navigation logic 483 communicates with a separate electronic device with display 450 and/or a separate electronic device with touch-sensitive surface 451). In any of these embodiments, the electronic device described below with reference to FIGS. 6A-6Y and 7A-7G is an electronic device (or devices) that includes user interface navigation logic 483. Additionally, it should be understood that in various embodiments, user interface navigation logic 483 may be divided between a plurality of different modules or electronic devices; however, for purposes of the description herein, user interface navigation logic section 483 will be primarily referred to as residing in a single electronic device to avoid unnecessarily obscuring other aspects of the embodiments.
In some embodiments, user interface navigation logic 483 includes one or more modules (e.g., one or more event handlers 190, including one or more object updaters 177 and one or more GUI updaters 178, as described in more detail above with reference to FIG. 1B) that receive interpreted inputs and, in response to the interpreted inputs, generate instructions for updating the graphical user interface in accordance with the interpreted inputs, which instructions are then used to update the graphical user interface on the display. In some embodiments, an interpreted input is an input that has been detected (e.g., by contact/motion module 130 in FIGS. 1A-1B and 3), recognized (e.g., by event recognizer 180 in FIG. 1B), and/or prioritized (e.g., by event classifier 170 in FIG. 1B). In some implementations, the interpreted inputs are generated by modules at the electronic device (e.g., the electronic device receives raw contact input data so as to identify gestures from the raw contact input data). In some embodiments, some or all of the interpreted inputs are received by the electronic device as interpreted inputs (e.g., an electronic device that includes touch-sensitive surface 451 processes raw contact input data to identify gestures from the raw contact input data and sends information indicative of the gestures to the electronic device that includes user interface navigation logic 483).
In some implementations, both display 450 and touch-sensitive surface 451 are integrated with an electronic device (e.g., computing device a in fig. 5) that includes user interface navigation logic component 483. For example, the electronic device may be a desktop computer or a laptop computer with an integrated display (e.g., 340 in fig. 3) and touchpad (e.g., 355 in fig. 3). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, a PDA, a tablet computer, etc.) with a touch screen (e.g., 112 in fig. 2).
In some implementations, touch-sensitive surface 451 is integrated with an electronic device, while display 450 is not integrated with an electronic device (e.g., computing device B in fig. 5) that includes user interface navigation logic 483. For example, the electronic device may be a device 300 (e.g., a desktop computer or a laptop computer) with an integrated touchpad (e.g., 355 in fig. 3) connected (either through a wired connection or a wireless connection) to a separate display (e.g., a computer, monitor, television, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, PDA, tablet computer, etc.) having a touch screen (e.g., 112 in fig. 2) connected (by a wired or wireless connection) to a separate display (e.g., a computer monitor, television, etc.).
In some implementations, display 450 is integrated with an electronic device, while touch-sensitive surface 451 is not integrated with an electronic device (e.g., computing device C in fig. 5) that includes user interface navigation logic 483. For example, the electronic device may be a device 300 (e.g., desktop computer, laptop computer, television with integrated set-top box) having an integrated display (e.g., 340 in fig. 3) connected (by a wired connection or a wireless connection) to a separate touch-sensitive surface (e.g., remote touchpad, portable multifunction device, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, a PDA, a tablet computer, etc.) having a touch screen (e.g., 112 in fig. 2) connected (by a wired connection or a wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, another portable multifunction device having a touch screen that serves as a remote touchpad, etc.).
In some implementations, both display 450 and touch-sensitive surface 451 are not integrated with an electronic device (e.g., computing device D in fig. 5) that includes user interface navigation logic component 483. For example, the electronic device may be a standalone electronic device 300 (e.g., a desktop computer, laptop computer, console, set-top box, etc.) connected (by a wired connection or a wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, a portable multifunction device, etc.) and a separate display (e.g., a computer monitor, television, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, a PDA, a tablet computer, etc.) having a touch screen (e.g., 112 in fig. 2) connected (by a wired connection or a wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, another portable multifunction device having a touch screen that serves as a remote touchpad, etc.).
In some embodiments, a computing device has an integrated audio system. In some embodiments, the computing device communicates with an audio system that is separate from the computing device. In some embodiments, an audio system (e.g., an audio system integrated in a television unit) is integrated with a separate display 450. In some embodiments, the audio system (e.g., stereo system) is a separate system from the computing device and display 450.
User interface and associated process
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented using an electronic device, such as a computing device (e.g., one of computing devices a-D in fig. 5), in communication with and/or including a display and a touch-sensitive surface. In some embodiments, the computing device includes one or more sensors to detect intensity of contacts with the touch-sensitive surface. In some embodiments, the computing device includes a display. In some embodiments, the computing device includes an audio system. In some embodiments, the computing device includes neither a display nor an audio system. In some embodiments, the display includes an audio system (e.g., the display and audio system are components of a television). In some embodiments, certain components of the audio system and the display are separate (e.g., the display is a component of a television, and the audio system includes a soundbar separate from the television). In some embodiments, the computing device communicates with a separate remote control through which the computing device receives user input (e.g., the remote control includes a touch-sensitive surface or touch screen through which the user interacts with the computing device). In some embodiments, the remote control includes a motion sensor (e.g., an accelerometer and/or gyroscope) for detecting remote control motion (e.g., a user picking up the remote control).
Although some of the examples below will be given with reference to input on a touch-sensitive surface 451 separate from the display 450, in some embodiments, as shown in fig. 4A, the device detects input on a touch screen display (where the touch-sensitive surface is combined with the display). For ease of illustration, some embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus touch point, a representative point corresponding to a finger or stylus touch point (e.g., a centroid of or a point associated with a respective touch point), or centroids of two or more touch points detected on touch-sensitive display system 112. Optionally, however, similar operations are performed on a device having a display 450 and a separate touch-sensitive surface 451 and a focus selector in response to detecting a contact on the touch-sensitive surface 451 while displaying the user interface shown in the figures on the display 450.
Fig. 6A-6Y illustrate exemplary user interfaces for dynamically adjusting the presentation of audio output, according to some embodiments. More specifically, FIGS. 6A-6Y illustrate user interfaces that adjust the volume and/or non-volume characteristics of audio content based on the magnitude of a user input. For example, while playing a song on the device, the user may perform a gesture for requesting the provision of other audio content. For example, a song may be played in a music application, and the user may request another audio content in the same music application or a different application (e.g., the other audio content may be an audio portion of a video sent by another user in an instant messaging application). As explained in more detail below, the blurring of the audio content includes dynamically adjusting a non-volume characteristic (e.g., stereo balance) of the first audio content (e.g., a song played on the device prior to user input).
FIGS. 6A-6H illustrate an example in which a device (referred to as device 100 for simplicity), while presenting a first song or album in a music application, blurs in a second song or album in the music application in accordance with the length of a swipe gesture.
FIG. 6A shows a music application user interface 602 displayed on the touch screen 112 of the device 100 (for visual clarity, the remainder of the device 100 other than the touch screen 112 is not shown). The music application user interface 602 includes a title 604 indicating that the device 100 is currently playing Beethoven's Moonlight Sonata. That is, the device 100 is providing sound information 606-1 to the audio system so that the audio system can present Beethoven's Moonlight Sonata (or a track/song from the Moonlight Sonata). As a shorthand, the phrase "device 100 presents audio content" and the like are used to mean that device 100 provides sound information to an audio system so that the audio system can present audio output corresponding to the audio content. As explained with reference to FIG. 5, the audio system may be integrated into the device 100 or separate from the device 100.
As schematically shown in the audio graph 608, Beethoven's Moonlight Sonata is represented by the sound information 606-1. The sound information 606-1 includes a volume (represented by the position of the sound information 606-1 on the vertical axis of the audio graph 608) and an audio characteristic other than volume, which in this example is the left-right stereo balance ("L-R balance," represented by the position of the sound information 606-1 on the horizontal axis of the audio graph 608). In some implementations, the non-volume audio characteristic is a filtering characteristic (e.g., a cutoff frequency and/or an attenuation factor of a low-pass filter).
The music application user interface 602 also includes audio content representations 610 (e.g., audio content representation 610-1 representing Beethoven's Moonlight Sonata; audio content representation 610-2 representing Mozart's Requiem; and audio content representation 610-3 representing Brahms's Alto Rhapsody). The audio content representations 610 are graphical objects (also referred to as "graphical user interface objects") that each occupy a respective area of the user interface 602 on the touch screen 112. In various cases, an audio content representation 610 represents: a song, an album, a ringtone, a video content object (e.g., where the video content appears in a text instant messaging window), an audio file object that appears in a text instant messaging window, or any other type of media content that includes an audio component. In this example, the audio content representations 610 are representations of albums and include displayed album art.
At the beginning of the example shown in fig. 6A-6H, the L-R balance is equalized and the device 100 does not provide any other sound information except the sound information 606-1.
As shown in FIG. 6B, while the audio system is presenting Beethoven's Moonlight Sonata, the device 100 receives an input 612 (e.g., a swipe gesture) corresponding to a request to present Mozart's Requiem. In this example, because the input 612 is received over the audio content representation 610-2 corresponding to Mozart's Requiem, the input 612 is a request to present Mozart's Requiem.
In some implementations, in response to an initial portion of the input 612 (e.g., a slight movement of the swipe gesture), the device 100 indicates (e.g., audibly and/or visually) that it will begin to blur audio. In some embodiments, as used herein, the term "blurred audio" refers to audio whose audio characteristics are changed so as to change the prominence of the blurred audio as it is played, so that a user can better distinguish other audio that is played simultaneously with the blurred audio. When audio is "blurred," a "blur fade-in" of the audio corresponds to increasing the prominence of the blurred audio (e.g., by increasing the cutoff frequency of a low-pass filter and/or shifting the audio toward the center channel), while a "blur fade-out" corresponds to decreasing the prominence of the blurred audio (e.g., by decreasing the cutoff frequency of a low-pass filter and/or shifting the audio away from the center channel). In some embodiments, the audio characteristics of the blur fade-in include volume. In some implementations, the audio characteristics of the blur fade-in include one or more non-volume audio characteristics (e.g., a cutoff frequency of a low-pass filter, or left/right balance). For example, the device 100 provides an audible cue that the audio will begin to blur by dynamically adjusting the L-R balance of the sound information 606-1 in accordance with the magnitude of the initial portion of the input 612 (e.g., a slight movement of the swipe gesture). In this example, the L-R balance of the sound information 606-1 is shifted slightly to the left. As another example of an audible cue, the volume of the sound information 606-1 is decreased. In some embodiments, device 100 also provides a visual cue by increasing the visual prominence of audio content representation 610-2 (illustrated schematically in FIG. 6B by a bold border surrounding audio content representation 610-2). In some embodiments, increasing the visual prominence includes visually obscuring the audio content representations other than audio content representation 610-2 (and optionally also visually obscuring the rest of the user interface). In some embodiments, as described below, these cues are provided before Mozart's Requiem aurally "blurs in."
In some embodiments, a visual effect (e.g., a blur radius) changes in conjunction with the dynamically adjusted non-volume audio characteristic (e.g., as the swipe lengthens, the user interface 602 (other than the audio content representation 610-2) blurs in lockstep with the shifting of the L-R balance of the sound information 606-1).
As shown in FIGS. 6C-6D, the device 100 provides sound information 606-2 to the audio system to present Mozart's Requiem simultaneously with Beethoven's Moonlight Sonata. In some implementations, the "blurring in" of the second audio content is performed in response to the magnitude of the input 612 exceeding a predetermined threshold (e.g., the swipe movement exceeds a small predetermined amount). In some embodiments, the device 100 presents Mozart's Requiem statically; that is, once the Requiem begins playing, it proceeds with a fixed volume and a fixed L-R balance (e.g., an equalized L-R balance). Alternatively, as shown, the presentation of the Requiem is also dynamically adjusted in accordance with the magnitude of the input 612. For example, as the user moves the input 612 further to the left, the L-R balance of the Requiem shifts further to the left (as shown by the leftward shift of the sound information 606-2 in the audio graph 608 in FIGS. 6C-6D). The presentation of Beethoven's Moonlight Sonata continues to be dynamically adjusted in accordance with the swipe gesture. For example, as the user moves the input 612 further to the left, the L-R balance of the Moonlight Sonata shifts further to the left (as shown by the leftward shift of the sound information 606-1 in the audio graph 608 in FIGS. 6C-6D). This gives the user the impression that Beethoven's Moonlight Sonata is shifting from center stage off to the left, and that Mozart's Requiem is shifting in from the right onto center stage, taking the Moonlight Sonata's place.
In some embodiments, device 100 also dynamically adjusts the volume of Beethoven's Moonlight Sonata in accordance with the magnitude of the swipe gesture (e.g., device 100 decreases the volume as the input 612 moves further to the left, as represented by the downward shift of the sound information 606-1 in the audio graph 608 in FIGS. 6C-6D). In some implementations, the device 100 also dynamically adjusts the volume of Mozart's Requiem in accordance with the magnitude of the swipe gesture (e.g., as the input 612 moves further to the left, the device 100 increases the volume of the Requiem, as represented by the upward shift of the sound information 606-2 in the audio graph 608 in FIGS. 6C-6D). In some implementations, the volumes of Beethoven's Moonlight Sonata and Mozart's Requiem are generally proportional to the fractions of the representations 610-1 and 610-2, respectively, that are displayed within a predefined area of the display (e.g., the entire display or a central area of the display). Thus, as the user pulls the audio content representation 610-2 toward the center of the display, Mozart's Requiem becomes more audibly prominent and more audibly centered.
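One way to picture this swipe-driven blur is as a function from swipe progress to the volume and L-R balance of the two items, as in the hedged sketch below; the specific curves and constants are assumptions, not values from the figures.

```swift
struct StereoMix {
    var volume: Double    // 0...1
    var balance: Double   // -1 = full left, 0 = centered, +1 = full right
}

/// Sketch of the "audio blur" in FIGS. 6A-6H: as the swipe progresses (0...1),
/// the current item drifts left and quiets while the incoming item moves from
/// the right toward center and grows louder. The exact curves are assumptions.
func blendedMix(swipeProgress p: Double) -> (current: StereoMix, incoming: StereoMix) {
    let t = min(max(p, 0), 1)
    let current = StereoMix(volume: 1.0 - 0.8 * t, balance: -t)       // shifts left, fades
    let incoming = StereoMix(volume: 0.2 + 0.8 * t, balance: 1.0 - t) // enters from the right
    return (current, incoming)
}

// Example: halfway through the swipe both items are audible, with the current
// item panned left and the incoming item approaching center stage.
let mid = blendedMix(swipeProgress: 0.5)
```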
Fig. 6E shows device 100 detecting the end of user input 612 (e.g., user input 612 has been released). Thus, user input 612 is not shown in FIG. 6E.
In some embodiments, in response to detecting the end of the user input 612, the device 100 determines whether to continue presenting the first audio content (e.g., and stop presenting the second audio content), or vice versa, based on whether the magnitude of the input 612 exceeds (or has exceeded) a predetermined threshold.
FIG. 6F shows that in this example, because the input 612 has dragged the audio content representation 610-2 so that more of it is displayed than of the audio content representation 610-1, the release of the input 612 causes the audio content representation 610-2 to move to a position in the middle of the screen, where the device 100 displays only Mozart's Requiem. Thus, in response to detecting the end of the input 612, the device 100 stops presenting the first audio content (the Moonlight Sonata) and continues to present the second audio content (Mozart's Requiem). In some implementations, the device 100 completes the dynamic adjustment of Mozart's Requiem (e.g., after the input 612 is released, the L-R balance and volume are gradually changed over a period of 0.5 seconds so that the Requiem is presented with an equalized balance at a preset volume). This is schematically illustrated by the upward and leftward shift of the sound information 606-2 in the audio graph 608 in FIGS. 6E-6F. In some implementations, in response to detecting the end of the input 612, the visual effect (e.g., blurring) is also reversed.
FIGS. 6G-6H show what happens in this example if the input 612 does not drag the audio content representation 610-2 far enough for more of it to be displayed than of the audio content representation 610-1. Thus, FIG. 6G shows the state of the user interface 602 immediately after release of an input that is shorter than, but otherwise similar to, the input 612. In response, as shown in FIG. 6H, the device 100 stops presenting (e.g., causes the audio system to stop presenting) Mozart's Requiem, and resumes presenting only Beethoven's Moonlight Sonata. The Moonlight Sonata is then presented without being dynamically adjusted (e.g., the changes in L-R balance and volume are reversed within 0.5 seconds so that the Moonlight Sonata is presented with an equalized balance at a preset volume). This is schematically illustrated by the upward and leftward shift of the sound information 606-1 in the audio graph 608 in FIGS. 6G-6H. In some embodiments, in response to detecting the end of the input, the visual effect (e.g., blurring) is also reversed.
In some embodiments, device 100 visually and audibly "pops" the second audio output into place when the magnitude of input 612 meets a second predetermined threshold that is greater than the first predetermined threshold (e.g., there is a first threshold where the second audio output moves into place if the input is released and a second threshold where the second audio output pops into place even before the input is released).
In some embodiments, as an alternative to the examples shown in fig. 6E-6H, in response to detecting the end of the input 612, the device 100 continues to present the adjusted first audio output and continues to present the second audio output.
FIGS. 6I-6O illustrate an example in which a device (referred to as device 100 for simplicity), while presenting a song or album in a music application, blurs in the audio portion of a video sent to the user in an instant messaging application (e.g., an application different from the music application) in accordance with the intensity of a press input. Various aspects of the example shown in FIGS. 6I-6O are similar to the example shown in FIGS. 6A-6H; those details are not repeated here. The differences between the example shown in FIGS. 6I-6O and the example shown in FIGS. 6A-6H are that, in FIGS. 6I-6O, the input is a press input whose magnitude is the intensity of the press input, and that, in FIGS. 6I-6O, device 100 blurs audio from two different applications (e.g., a music application and an instant messaging application).
FIGS. 6I-6O illustrate blurring audio by dynamically adjusting the low-pass filter cutoff frequency instead of the L-R balance. To this end, FIGS. 6I-6O include audio map 613. The vertical axis of the audio map 613 represents the volume, and the horizontal axis represents the low-pass filter cutoff frequency.
Fig. 6I shows a user interface 614 for an instant messaging application displayed on touch screen 112. User interface 614 displays messages 616, some of which messages 616 are received by a user of device 100 (e.g., messages 616-1 and 616-2), and some of which messages 616 are sent by a user of device 100 (e.g., message 616-3). The user interface 614 includes an avatar 618 indicating a participant in the session (e.g., "Alex" with avatar 618-1, and "Tina" with avatar 618-2). For example, Tina sends a message 616-1 to Alex that is a representation of the video.
In FIG. 6I, the audio map 613 includes a representation of sound information 606-3, meaning that the device is presenting audio content. In this example, the sound information 606-3 corresponds to Brahms's Alto Rhapsody, so the device 100 is initially presenting the Alto Rhapsody via a music application that is separate from (or different from) the instant messaging application. The Alto Rhapsody is provided at a certain volume and with a low-pass filter cutoff frequency (hereinafter referred to as the "cutoff frequency") set to a high value. In some embodiments, the initial cutoff frequency is higher than the maximum frequency audible to humans (e.g., about 20 kHz), so that the low-pass filter has no audible effect on the audio output when no audio blurring is being performed. In some implementations, no low-pass filter is applied initially (i.e., when no audio blurring is being performed); instead, the low-pass filter is turned on when audio blurring begins. For simplicity, FIG. 6I shows the sound information 606-3 presented with the cutoff frequency initially set to a high value.
As shown in FIGS. 6J-6K, while device 100 is presenting the Alto Rhapsody, device 100 receives an input 620 corresponding to a request to present video message 616-1 (including the audio of video message 616-1) (e.g., the input is on an area of touch screen 112 corresponding to video message 616-1).
In this example, the input 620 is a press input (e.g., a press-and-hold input) on the touch screen 112. The touch screen 112 has one or more sensors for detecting the intensity of contacts. The intensity of the input 620 is represented on an intensity graph 622. In FIG. 6J, the intensity of the input 620 is above the cue threshold (e.g., ITH). Thus, device 100 begins to audibly blur the audio portion of video message 616-1 into Brahms's Alto Rhapsody. That is, while the intensity of the input 620 is above the cue threshold (e.g., ITH) but not above the peek threshold (e.g., ITL), as the intensity of the input 620 increases, device 100 decreases the volume of the Alto Rhapsody and decreases the cutoff frequency. These effects are schematically illustrated by the downward and leftward shift of the sound information 606-3 in the audio map 613. The effect of decreasing the cutoff frequency of the low-pass filter is to filter out more and more of the high-frequency components of the Alto Rhapsody, leaving only its bass.
At the same time, as the intensity of the contact 620 increases (as represented by the upward arrow of the sound information 606-4 corresponding to the video message 616-1), the volume of the video message 616-1 increases (e.g., proportionally). The sound information 606-4 is not actually low-pass filtered (e.g., a low-pass filter filters it with a cutoff frequency above the human audible range, or it is not low-pass filtered at all).
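A hedged sketch of this intensity-driven blur follows: between the cue and peek thresholds, the background item's volume and low-pass cutoff fall while the video message's volume rises with intensity. The threshold values, the 20 kHz-to-200 Hz sweep, and the mapping curves are assumptions chosen for the example.

```swift
import Foundation

/// Sketch of the blur in FIGS. 6I-6K: between the cue threshold (ITH) and the
/// peek threshold (ITL), the background track quiets and loses treble while the
/// video message's audio grows louder. All constants are assumptions.
func blurForPress(intensity: Double,
                  itH: Double = 0.3, itL: Double = 0.6) -> (backgroundVolume: Double,
                                                            cutoffHz: Double,
                                                            messageVolume: Double) {
    // Normalize the intensity to 0...1 across the cue range.
    let t = min(max((intensity - itH) / (itL - itH), 0), 1)
    let backgroundVolume = 1.0 - 0.7 * t
    // Cutoff sweeps from effectively "no filtering" (20 kHz) down toward bass only.
    let cutoffHz = 20_000.0 * pow(200.0 / 20_000.0, t)   // 20 kHz -> 200 Hz, log sweep
    let messageVolume = t
    return (backgroundVolume, cutoffHz, messageVolume)
}
```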
In some embodiments, as long as the intensity of the input 620 has been above the cue threshold (e.g., ITH) but has not yet exceeded the peek threshold (e.g., ITL), the prominence of the video message 616-1 changes dynamically with the intensity of the input 620 (e.g., the visual prominence and the prominence of the corresponding audio increase with increasing intensity and decrease with decreasing intensity). For example, as shown in FIGS. 6J-6K, increasing the intensity of the input 620 increases the size of the video message 616-1, centers the video message 616-1, and obscures the messages in the user interface 614 other than the video message 616-1 with a blur radius proportional to the intensity of the input 620 (the increased blur radius in these figures is schematically represented by the reduced transparency of the pattern that covers all of the user interface 614 other than the video message 616-1).
Comparing FIGS. 6J and 6K, it can be seen that the intensity has increased while remaining between ITH and ITL. In some embodiments, the user may repeatedly scrub the volumes of the Alto Rhapsody and of the video message 616-1 (and the cutoff frequency applied to the Alto Rhapsody) up and down by increasing and decreasing the intensity of the input 620. For example, if FIGS. 6J and 6K were reversed, the volume and cutoff frequency of the sound information 606-3 would increase and the volume of the sound information 606-4 would decrease.
In this example, when device 100 presents an audio output of video message 616-1, device 100 also presents a video output of video message 616-1. Thus, FIGS. 6J-6O show the video progression (e.g., video message 616-1 is a simple video of a bird flying around a frame).
As shown in FIGS. 6L-6M, when the intensity of the input 620 reaches the "peek" threshold, device 100 presents the Alto Rhapsody with a preset cutoff frequency and a first preset volume (e.g., a low volume), and presents the video message 616-1 with a second preset volume (e.g., a volume higher than the first preset volume), regardless of the intensity of the input 620 (e.g., as long as the intensity of the input 620 remains below the "pop-up" threshold, which in this example is ITD). For example, device 100 detects that the intensity of the input 620 has increased above ITL (FIG. 6L) and that the intensity of the input 620 subsequently decreases below ITL (FIG. 6M), but maintains the sound information 606-3 and 606-4 at their "peek" positions in the audio map 613 (e.g., even if the intensity of the contact decreases below ITL, the visual prominence of the video and the auditory prominence of the corresponding audio are maintained at least at the level shown in FIG. 6K). The effect is that the audio from the video message 616-1 is presented primarily, with only the quiet bass background of the Alto Rhapsody remaining audible.
Further, after the intensity of the input 620 reaches the "peek" threshold, the visual changes are locked in. In FIG. 6L, the video message 616-1 is centered, with the remainder of the user interface 614 heavily blurred. When the intensity of the input 620 decreases below ITL, this blurring is maintained in FIG. 6M, along with the centered position of the video message 616-1.
However, in some embodiments, release of the input 620 at any of the points in FIGS. 6J-6M causes the device 100 to stop presenting the video message 616-1 and resume fully presenting the Alto Rhapsody (e.g., at the second preset volume and without filtering), and optionally to return to the state of the user interface shown in FIG. 6I.
FIGS. 6N-6O show that, in some embodiments, when the intensity of the input 620 rises above the pop-up threshold (e.g., ITD), device 100 "pops" the video message 616-1 into place. Once the intensity of the input 620 has been above the deep press threshold ITD, the volume of the video message 616-1 remains at the second preset volume and remains unfiltered (FIG. 6O), even if the intensity of the input 620 subsequently falls below ITD or the input 620 ends entirely. In some embodiments, device 100 visually pops the video message 616-1 into place (e.g., the video message 616-1 expands to occupy the entire screen or a large portion of the screen). Further, the device 100 stops presenting the first audio output (e.g., the Alto Rhapsody).
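The cue, peek, and pop-up stages of FIGS. 6I-6O can be summarized as a small state machine. The sketch below encodes the "peek holds even if intensity drops" and "pop-up is irreversible" behaviors described above, with assumed threshold values and names.

```swift
enum PreviewStage { case idle, cue, peek, popped }

/// Sketch of the cue -> peek -> pop-up progression in FIGS. 6I-6O. Once "peek" is
/// reached, the stage is held even if intensity drops; once "pop-up" is reached,
/// the preview is committed regardless of later intensity or liftoff.
struct PreviewStateMachine {
    let itH = 0.3, itL = 0.6, itD = 0.9   // assumed threshold values
    private(set) var stage: PreviewStage = .idle

    mutating func update(intensity: Double, contactEnded: Bool) {
        switch stage {
        case .popped:
            return                          // popped into place: nothing reverts it
        case _ where contactEnded:
            stage = .idle                   // releasing before pop-up cancels the preview
        case .idle where intensity >= itH:
            stage = .cue
        case .cue where intensity >= itL:
            stage = .peek
        case .cue where intensity < itH:
            stage = .idle
        case .peek where intensity >= itD:
            stage = .popped
        default:
            break                           // .peek holds even if intensity dips below ITL
        }
    }
}
```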
6P-6Y show the following examples: according to some embodiments, as the characteristic intensity of the contact changes, the device (e.g., referred to as device 100 for simplicity) previews audio from the media item by dynamically changing a set of one or more audio characteristics of the media item. For example, when device 100 receives a contact on a representation of a song, device 100 will play the song with a volume proportional to the intensity of the contact, at least for a range of contact intensities. In some embodiments, when the contact reaches the first intensity threshold, the device 100 will "peek" at the preview media item by maintaining the volume at a preset level even if the intensity of the contact subsequently drops. In some embodiments, when the contact reaches the second intensity threshold, the device 100 will perform another operation (e.g., in addition to previewing the media item), such as performing a selection operation.
FIGS. 6P-6Y also illustrate examples in which, in response to an input, device 100 obscures (e.g., blurs) either the entire user interface (except for the selected user interface object) or just a portion of the user interface. In some embodiments, obscuring the user interface (e.g., the entire user interface or only a portion of the user interface) is performed when the contact satisfies intensity-based activation criteria, for example when the intensity rises above the cue threshold ITH, while a tap gesture results in a different operation being performed (e.g., a selection operation with respect to a user interface object).
FIG. 6P illustrates a user interface 640 for an instant messaging application. The user interface 640 includes an interaction region 642-1. The interaction region 642-1 is a conversation region that includes a plurality of messages 644 (of which message 644-1, message 644-2, and message 644-3 are representative) between conversation participants (e.g., Alex and Tina). The user may interact with the interaction region 642-1 by scrolling through the messages 644 (e.g., the location in the conversation region is shown by the scroll bar 646), or by interacting with the respective messages 644 (e.g., tapping on the message 644-3 to play a video, or pressing (e.g., and holding) on the message 644-3 to "peek" at the video, as described with reference to FIGS. 6I-6O). The interaction region 642-1 also includes a plurality of affordances (e.g., icon 648-1, icon 648-2, icon 648-3, and icon 648-4, each of which enables a particular device function) and a shelf 650 (e.g., a staging area for content that the user has entered but has not yet sent to the other conversation participants).
FIG. 6P shows an input 652 (e.g., a tap gesture) on icon 648-3. As shown in FIG. 6Q, the tap gesture 652 causes the user interface 640 to display an interaction region 642-2 (e.g., a separate interaction region different from the interaction region 642-1). In some embodiments, the interaction region 642-1 is a user interface for a host application (e.g., an instant messaging application), and the interaction region 642-2 is configured to display content from different mini-applications operating within the host application. The user may swipe between the mini-applications to change the mini-application displayed in the interaction region 642-2. The interaction region 642-2 also includes an affordance 659 that brings up a list of mini-applications available to be displayed in the interaction region 642-2. The indicator 654 indicates which mini-application is currently being displayed in the interaction region 642-2 (e.g., a swipe to the left results in a different mini-application being displayed in the region 642-2, and the filled dot in the indicator 654 moves to the right).
In this example, the mini-application is a media selection mini-application 656 for selecting media to be shared in the conversation between the conversation participants (e.g., a scrollable list showing the last 30 songs played on device 100). The media selection mini-application includes representations of media items 658 (e.g., representations of media items 658-1 through 658-4). In some embodiments, the media items are songs, albums, videos, and the like. Representation 658-3 includes displayed information about the media item (e.g., artist, song title) and album artwork. In some embodiments, a tap gesture (not shown) on a respective representation 658 selects the respective representation 658 (e.g., as an audio message staged for sending to the conversation) by placing it in the shelf 650. In some embodiments, a tap gesture (not shown) on a respective representation 658 selects the respective representation 658 by playing the media item (e.g., playing it locally on the device 100).
FIG. 6R shows an input 660 that includes a contact on representation 658-2 (e.g., a press-and-hold gesture on representation 658-2). As shown in intensity graph 668, by having a contact intensity above the cue threshold ITH, the press-and-hold gesture satisfies media-cue criteria (e.g., intensity-based activation criteria). Accordingly, the device 100 begins playing a portion of the media item corresponding to representation 658-2 (e.g., the beginning of the media item, or a representative portion selected to contain an identifiable portion of the media item). Playback of the media item is represented in the audio diagram 662 by an audio output 664. The audio diagram 662 shows the volume of the media item versus time.
As shown in FIGS. 6R-6S, while the media item is playing, the device 100 dynamically changes the volume of the media item as the characteristic intensity of the contact changes (e.g., the volume of the media item increases as the characteristic intensity of the contact increases and decreases as the characteristic intensity of the contact decreases). For example, the intensity of the contact increases from FIG. 6R to FIG. 6S (e.g., from just above ITH to just below ITL), with a corresponding increase in volume (the arrows in the intensity graph 668 in FIGS. 6R-6V show the change in intensity from the previous figure; similarly, the solid lines in the audio diagram 662 in FIGS. 6R-6U show the changes corresponding to the arrows in the intensity graph 668, while the dashed lines represent the state in the previous figure). If the intensity changes were reversed (e.g., if FIG. 6S occurred before FIG. 6R), the volume of the media item would decrease.
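A minimal sketch of this preview mapping is given below: the preview volume (and, as described in the following paragraphs, the blur radius applied to the rest of the interaction region) tracks the characteristic intensity between ITH and ITL, and locks once the peek threshold has been reached. All constants and names are assumptions for illustration.

```swift
/// Sketch of the media preview in FIGS. 6R-6U: between ITH and ITL the preview
/// volume and the blur radius track the characteristic intensity; once the peek
/// threshold has been met, both lock to their preset "peek" values.
func previewPresentation(intensity: Double, hasReachedPeek: Bool,
                         itH: Double = 0.3, itL: Double = 0.6,
                         peekVolume: Double = 1.0,
                         maxBlurRadius: Double = 20.0) -> (volume: Double, blurRadius: Double) {
    if hasReachedPeek {
        return (peekVolume, maxBlurRadius)   // locked once the peek threshold was met
    }
    let t = min(max((intensity - itH) / (itL - itH), 0), 1)
    return (t * peekVolume, t * maxBlurRadius)
}
```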
In some implementations, when different audio outputs have been generated before the media item begins playing, the device 100 blurs the two audio outputs together as described with reference to fig. 6A-6O, method 800, method 840, and fig. 8A-8C.
FIGS. 6R-6S also show the device 100 dynamically changing the appearance of the interaction region 642-2 as the volume of the media item changes. For example, as the characteristic intensity of the contact changes, the device 100 dynamically obscures (e.g., blurs) the interaction region 642-2 (e.g., in lockstep with the change in audio, such that as the characteristic intensity of the contact increases, the blur radius also increases). Thus, the increase in the intensity of the contact from FIG. 6R to FIG. 6S is accompanied by a corresponding increase in the degree of blurring (the increased blurring in these figures is schematically represented by the decreased transparency of the pattern covering the other content in the interaction region 642-2 from FIG. 6R to FIG. 6S). In this example, device 100 dynamically obscures the interaction region 642-2 without obscuring the interaction region 642-1 (e.g., to indicate that the cue relates to the mini-application portion of the user interface and not to the entire user interface).
Further, the un-obscured representation 658-2 increases in size and moves toward the center of the interaction region 642-2 as the intensity of the contact increases. In this example, as long as the intensity of the contact does not reach the "peek" intensity threshold (e.g., ITL), the user can repeatedly scrub the visual changes and the auditory changes by varying the intensity of the contact within the "cue" range of intensities (e.g., between ITH and ITL).
FIGS. 6T-6U illustrate the result of the contact reaching the peek threshold (e.g., the result of device 100 detecting an increase in the characteristic intensity of the contact above ITL). In this example, once the intensity of the contact reaches the peek threshold, the volume of the audio output 664 remains fixed (e.g., locked to a preset level, such as whatever volume the user has set the device 100 to). Thus, an increase in the intensity of the contact between ITL and ITD does not result in an increase in volume. Similarly, as shown in FIG. 6U, a subsequently detected decrease in intensity below ITL has no effect on the volume (e.g., as long as the contact is continuously maintained; in some embodiments, liftoff of the contact gradually or immediately stops presentation of the media item and reverses the visual changes to the interaction region 642-2).
In some embodiments, once the contact reaches ITL, the device 100 displays an indication 661 that an increased characteristic intensity of the contact will cause the device to perform a selection operation with respect to the media item (e.g., the phrase "3D TOUCH TO SHARE"). A subsequent decrease in the intensity of the contact below ITL does not modify the obscuring of the rest of the interaction region 642-2, the size of the representation 658-2, or the "3D TOUCH TO SHARE" indication.
FIG. 6V illustrates device 100 detecting an increase in the characteristic intensity of the contact above a "pop-up" threshold (e.g., ITD). Thus, the device 100 stops playing the media item (and thus the audio graph 662 is not present in FIGS. 6V-6X) and performs a selection operation with respect to the media item (e.g., the device 100 places the media item in shelf 650).
In some embodiments, as shown, input 660 becomes inactive immediately upon detection of an intensity of the contact above ITD. In other words, although input 660 is shown over the original representation 658-2 in FIG. 6V, input 660 has no further effect on the original representation 658-2. The user may reselect the original representation 658-2 by releasing input 660 and providing a new user input.
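The progression described for FIGS. 6R-6V (prompt, then peek with a locked volume, then pop with selection) can be summarized as a small state machine. The Swift sketch below is one possible illustration of that progression; the type names and threshold values are assumptions.

```swift
import Foundation

// A minimal sketch of the prompt / peek / pop progression described for
// FIGS. 6R-6V. The threshold values and names are assumptions.
enum PressStage {
    case prompt // above ITH: audio volume and blur track the intensity
    case peek   // above ITL: volume locked, "3D TOUCH TO SHARE" shown
    case pop    // above ITD: playback stops and the item is selected
}

struct MediaPreviewState {
    let promptThreshold = 0.25   // stands in for ITH
    let peekThreshold   = 0.50   // stands in for ITL
    let popThreshold    = 0.80   // stands in for ITD

    private(set) var stage: PressStage = .prompt
    private(set) var volume: Double = 0.0

    // Called whenever the characteristic intensity of the contact changes.
    mutating func update(intensity: Double) {
        switch stage {
        case .prompt:
            if intensity >= popThreshold {
                stage = .pop               // stop playback, perform selection
            } else if intensity >= peekThreshold {
                stage = .peek              // lock the volume at its preset level
                volume = 1.0
            } else {
                // In the prompt range the volume tracks the intensity in both directions.
                let progress = (intensity - promptThreshold) / (peekThreshold - promptThreshold)
                volume = min(max(progress, 0), 1)
            }
        case .peek:
            // Decreases below ITL no longer change the volume; only a further
            // increase above ITD advances to the pop stage.
            if intensity >= popThreshold { stage = .pop }
        case .pop:
            break                          // the input has no further effect
        }
    }
}
```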
In FIG. 6V, the media item is now represented by audio message 644-4, which corresponds to representation 658-2 as shown in FIGS. 6Q-6U. In some embodiments, the audio message 644-4 has the same appearance as the representation 658-2 shown in FIG. 6R. In addition, the device 100 stops obscuring the interaction region 642-2 (e.g., reverses the blurring).
Placing the content in the shelf 650 also causes the device 100 to display a "send" button 670. Fig. 6W illustrates a user input 672 (e.g., a tap gesture) selecting the send button 670, which sends the audio message 644-4 to a conversation participant and results in the addition of the audio message 644-4 to the conversation area, as shown in fig. 6X.
FIG. 6Y shows input 674, which includes a contact on audio message 644-4 (e.g., a press-and-hold gesture on audio message 644-4). Because the characteristic intensity of the contact in input 674 is above ITH, the device 100 obscures the interaction region 642-1 and the interaction region 642-2 except for the audio message 644-4. In some embodiments, in response to the press-and-hold input above ITH, the device 100 also previews the audio message 644-4 in a manner similar to that previously described (e.g., including dynamically changing the audio). In contrast, a tap gesture (e.g., with an intensity below ITH) on the audio message 644-4 does not obscure the second interaction region of the application and does not obscure the first interaction region of the application.
Additional details regarding fig. 6A-6Y (and the user interfaces shown therein) are provided below with reference to fig. 8A-8H.
Fig. 7A-7G illustrate exemplary user interfaces for providing audio output based on an audio profile, according to some embodiments. More specifically, fig. 7A-7G show the following examples: activation of the affordance (e.g., a button on touch screen 112) causes the device to output a sound having an audio profile (e.g., an audio profile that governs the pitch, reverberation, and/or decay of the audio output over time). When the second affordance is activated, the device determines whether audio alteration criteria are met, and if so, modifies the audio profile corresponding to the first affordance. For example, the device causes the sound of the first affordance to decay more quickly when the second affordance is activated shortly after the first affordance is activated.
The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 9A-9C.
For convenience, fig. 7A-7G are described with reference to device 100. As a shorthand, the phrase "device 100 presents audio corresponding to an affordance" or the like is used to mean that the device 100 provides (or initiates the provision of) sound information to an audio system such that the audio system can present audio output corresponding to the affordance. As explained with reference to fig. 5, the audio system may be integrated into the device 100 or separate from the device 100.
More specifically, FIGS. 7A-7G show an example in which the audio alteration criteria are satisfied when the second affordance is activated within a threshold time Tthres of activating the first affordance.
FIG. 7A illustrates a user interface 702 for a telephony application (e.g., telephony module 138). The user interface displays a keypad 704, through which a user may enter characters, for example, into a dialed-number field 706. To this end, keypad 704 includes a set of affordances 708 corresponding to numbers (e.g., affordance 708-1 corresponding to "5" and affordance 708-2 corresponding to "2").
As shown in FIG. 7B, at time T1 device 100 detects a first input 710 activating the affordance 708-1, which places a "5" in the dialed-number field 706. FIG. 7B also illustrates an audio graph 712 that provides a representation of audio output (e.g., audio profiles) over time. The time T1 at which the first input 710 is detected is marked on the horizontal axis of the audio graph 712. The vertical axis of the audio graph 712 indicates the volume of the audio output shown in the graph. The first audio profile 714-1 is a representation of the first audio output, which corresponds to the affordance 708-1 and is presented in response to activation of the affordance 708-1. The first audio profile 714-1 of the first audio output is shown in its entirety in the audio graph 712 of FIG. 7D. The first audio output rises sharply after time T1 and then eventually decays to zero volume. In addition, the audio graph 712 indicates that the first audio output is produced at a C-sharp pitch.
FIG. 7C shows the device 100 detecting, at a second time T2 (e.g., a time after T1), a second input 716 directed to affordance 708-2 (e.g., detecting activation of the keypad's "2" button, which places a "2" in the dialed-number field 706). In this example, when device 100 detects activation of affordance 708-2, device 100 produces a second audio output represented by a second audio profile 714-2. In some embodiments, the second audio profile 714-2 is the same as the first audio profile 714-1 (e.g., a default audio profile for keypad sounds).
Because the second time T2 at which the second input 716 is detected is after the threshold time Tthres, device 100 continues to present the first audio output according to the first audio profile 714-1. FIGS. 7E-7G are similar to FIGS. 7B-7D, except that the second time T2 at which the second input 716 is detected is before the threshold time Tthres. In this case, the audio alteration criteria are met, so the device 100 does not continue to present the first audio output with the first audio profile 714-1, but rather presents a modified first audio output with a modified audio profile 714-3 (the modified first audio output follows the solid line labeled 714-3 in FIGS. 7F-7G; the dashed line labeled 714-1 shows what the unmodified audio profile would have been if the audio alteration criteria had not been met).
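One way to picture the audio alteration criteria of FIGS. 7B-7G is as a comparison between the time of the second activation and a threshold time measured from the first activation. The following Swift sketch illustrates that decision under assumed values for Tthres and for the modified decay; the names and constants are not taken from the embodiments above.

```swift
import Foundation

// A sketch of the timing-based audio alteration criteria from FIGS. 7B-7G.
// The threshold and decay values are assumptions.
struct AudioProfile {
    var pitch: String
    var decayDuration: TimeInterval
}

let alterationThreshold: TimeInterval = 0.3   // stands in for Tthres

/// Returns the profile to use for the first audio output once a second
/// affordance has been activated at `secondActivation`.
func resolvedFirstProfile(_ profile: AudioProfile,
                          firstActivation: Date,
                          secondActivation: Date) -> AudioProfile {
    let elapsed = secondActivation.timeIntervalSince(firstActivation)
    guard elapsed < alterationThreshold else {
        // Audio alteration criteria not met: keep presenting the original profile.
        return profile
    }
    // Criteria met: for example, make the first output decay more quickly so
    // that it does not mask the second output.
    var modified = profile
    modified.decayDuration = profile.decayDuration * 0.25
    return modified
}
```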
In some embodiments, when device 100 presents the first audio output, device 100 presents a visual effect corresponding to the first audio output. For example, as shown in FIGS. 7C-7D, the visual effect includes one or more graphics 718 (e.g., rings, ripples) extending outward (e.g., away) from the affordance 708-1. Similarly, when the device 100 presents the second audio output, the device 100 presents a graphic 720 (e.g., ring, ripple) that extends outward away from the affordance 708-2.
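The expanding ring or ripple graphics 718 and 720 could be produced with a simple layer animation. The Swift sketch below shows one such approach; the sizes, colors, and durations are assumptions, and the function name is hypothetical.

```swift
import UIKit

// A sketch of the expanding ring/ripple graphics 718 and 720. Sizes, colors,
// and timing are assumptions; the function name is hypothetical.
func emitRipple(from affordance: UIView, in container: UIView) {
    // Position the ring at the affordance's center, expressed in the container's coordinates.
    let center = affordance.superview?.convert(affordance.center, to: container) ?? .zero

    let ring = CAShapeLayer()
    ring.path = UIBezierPath(ovalIn: CGRect(x: -20, y: -20, width: 40, height: 40)).cgPath
    ring.position = center
    ring.fillColor = UIColor.clear.cgColor
    ring.strokeColor = UIColor.blue.cgColor
    ring.lineWidth = 2
    container.layer.addSublayer(ring)

    // Scale the ring outward while fading it, then leave it invisible.
    let scale = CABasicAnimation(keyPath: "transform.scale")
    scale.fromValue = 1.0
    scale.toValue = 3.0
    let fade = CABasicAnimation(keyPath: "opacity")
    fade.fromValue = 1.0
    fade.toValue = 0.0
    let group = CAAnimationGroup()
    group.animations = [scale, fade]
    group.duration = 0.5
    ring.opacity = 0   // final state once the animation is removed
    ring.add(group, forKey: "ripple")
}
```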
Additional details regarding fig. 7A-7G (and the user interfaces shown therein) are provided below with respect to fig. 9A-9C.
Fig. 8A-8B are flow diagrams depicting a method 800 of dynamically adjusting presentation of audio output, according to some embodiments. Fig. 6A-6O are used to illustrate the method and/or process of fig. 8A-8B. Although some of the examples below will be given with reference to input on a touch sensitive display (where the touch sensitive surface and the display are combined), in some embodiments, the device detects input on a touch sensitive surface 451 separate from the display 450, as shown in fig. 4B.
In some embodiments, method 800 is performed by an electronic device (e.g., portable multifunction device 100 of fig. 1A) and/or one or more components of an electronic device (e.g., I/O subsystem 106, operating system 126, etc.). In some embodiments, method 800 is managed by instructions stored in a non-transitory computer-readable storage medium and executed by one or more processors of a device, such as one or more processors 122 of device 100 (fig. 1A). For ease of illustration, the method 800 performed by the device 100 is described below. In some embodiments, referring to FIG. 1A, the operations of method 800 are performed at least in part by or used by audio preview module 163-1, audio modification module 163-2, and a touch-sensitive display (e.g., touch screen 112). Some operations in method 800 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 800 (and associated interface) reduces the number, scope, and/or nature of inputs from a user and results in a more efficient human-machine interface, thereby providing an easy-to-use and intuitive way for a user to interact with a user interface. For battery-powered electronic devices, method 800 enables efficient, seamless, and fast interaction by providing an easily understandable and information-rich audio output that conserves power and increases the time between battery charges (e.g., by reducing the need for extensive and inefficient user interaction that draws battery power).
In some embodiments, the method 800 begins when a device provides (802) first sound information to an audio system in communication with the device, for presentation of a first audio output that includes a volume and an audio property other than volume (a "non-volume audio property"). For example, in FIGS. 6A-6H, the first audio output is Beethoven's Moonlight Sonata. In various instances, the first audio output is a song, album, ringtone, the audio portion of video content, or audio from an audio file object that appears in a text instant message window.
In some implementations, the non-volume audio property is (804) a reverberation time of the first audio output, a low-pass filter cutoff (e.g., cutoff frequency) of the first audio output, or a stereo balance (also referred to as left-right balance, "L-R" balance) of the first audio output.
When the audio system presents the first audio output, the device receives (806) an input corresponding to a request for presentation of the second audio output. In some embodiments, the input corresponding to (or otherwise associated with) the second audio output is received (808) while a focus selector is positioned over a graphical object on a display in communication with the electronic device.
For example, the input is a press input on a touch-sensitive surface (e.g., a touch-sensitive display integrated with the electronic device, or a touch-sensitive surface on a remote control in signal communication with the electronic device). A press input is received on a graphical user interface object (e.g., a video content object in a text message window, fig. 6I-6O) associated with the second audio output.
As another example, the input corresponds to a swipe gesture on the touch-sensitive surface that begins on a graphical user interface object (e.g., artwork of album art, fig. 6A-6H) associated with the second audio output.
In some embodiments, when the graphical user interface object represents a song, album, or video, the second audio output is a preview of the song, album, or video (e.g., a portion of the song, album, or video that has been preselected to represent or identify the song, album, or video). In some embodiments, a portion of a song, album, or video has a preset length (e.g., 30 seconds).
For example, the input corresponds to a swipe gesture on the touch-sensitive surface that begins over (or on) a graphical user interface object associated with the second audio output (e.g., the graphical user interface object includes artwork for album art, FIGS. 6A-6H). Another example is one in which the press input is received while the focus selector is positioned over a graphical user interface object associated with the second audio output (e.g., the graphical user interface object comprises a video content object in a text message window, FIGS. 6I-6O). Other examples of graphical user interface objects include a text representation of a ringtone, an audio file object that appears in a text instant message window, and so forth.
In response to receiving an input corresponding to a request to present a second audio output, the device provides (810) information to the audio system to dynamically adjust presentation of the first audio output according to a magnitude of the input. In some implementations, the magnitude of the input is (812) a characteristic intensity of the contact in the input, a length of time of the contact in the input, or a distance traveled by the contact in the input (e.g., a length of a swipe gesture).
As the magnitude of the input changes, the device dynamically adjusts (814) the non-volume audio characteristic. Adjusting the non-volume audio characteristic as the magnitude of the input changes allows the user to effectively preview or listen to the second audio output without interrupting the first audio output, and gives the user additional control over the audio output through a single user input. This enhances the operability of the device and makes the user device interface more efficient (e.g., by reducing the number of user inputs that the user must make when operating/interacting with the device), which in addition reduces power usage and improves the battery life of the device by enabling the user to use the device faster and more efficiently.
In some embodiments, the device shifts the stereo balance of the first audio output while shifting the stereo balance of the second audio output (816) (e.g., as described with reference to fig. 6A-6O). For example, the input corresponds to a swipe gesture on the touch-sensitive surface that moves in a first direction (e.g., swipe from left to right) away from an initial location of the graphical user interface object associated with the second audio output (e.g., the swipe gesture drags artwork of album art toward a center of the display, fig. 6A-6H). In this example, shifting the stereo balance includes shifting the presentation of the first audio output and the second audio output such that they track movement of the swipe input in the first direction (e.g., the stereo balance of the first audio output shifts to the right while the second audio output fades in starting from the left of the audio system). In other embodiments, the stereo balance of the first audio output and the second audio output is shifted based on a change in intensity of the input contact or based on a contact time of the input.
As another example, in some embodiments, the non-volume audio property is a low-pass filter cutoff of the first audio output. The device shifts the low-pass filter cutoff downward according to an increase in the magnitude of the input. Thus, as the magnitude of the input increases, the first audio content is reduced to a bass background (e.g., a low rumble), such that the second audio content can be produced concurrently with the first audio content in a manner that makes the second audio content clearly audible (e.g., because the prominence of the background audio has been reduced by the application of the low-pass filter).
In some embodiments, the device adjusts the volume of the first audio output as a function of the magnitude of the input (818) (e.g., both the volume and the non-volume audio property vary as a function of the magnitude of the input). In some implementations, the volume of the first audio output decreases as the magnitude of the input increases. Adjusting the volume of the first audio output in this manner also increases the audible prominence of the second audio output without interrupting the first audio output. This makes the blurring of the two audio outputs more effective, thus enhancing the operability of the device and making the user-device interface more effective.
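Operations 810-818 can be read as computing, for each value of the input magnitude, an adjustment for each of the two audio outputs. The following Swift sketch shows one possible way to derive volume, low-pass cutoff, and stereo balance from a normalized magnitude; the parameter ranges and names are assumptions.

```swift
import Foundation

// A sketch of cross-blurring two audio outputs as the input magnitude changes.
// The parameter ranges and names are assumptions.
struct AudioAdjustment {
    var volume: Double          // 0 ... 1
    var lowPassCutoffHz: Double
    var stereoBalance: Double   // -1 (left) ... +1 (right)
}

/// `magnitude` is the normalized input magnitude in [0, 1], derived, for example,
/// from contact intensity, contact duration, or swipe distance.
func blendedOutputs(magnitude: Double) -> (first: AudioAdjustment, second: AudioAdjustment) {
    let m = min(max(magnitude, 0), 1)

    // The first (background) output recedes: quieter, darker (lower low-pass
    // cutoff), and shifted toward the right channel.
    let first = AudioAdjustment(volume: 1.0 - 0.7 * m,
                                lowPassCutoffHz: 20_000 - 18_000 * m,
                                stereoBalance: 0.8 * m)

    // The second (previewed) output fades in, starting from the left.
    let second = AudioAdjustment(volume: m,
                                 lowPassCutoffHz: 20_000,
                                 stereoBalance: -0.8 * (1.0 - m))
    return (first, second)
}
```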
In some embodiments, the device dynamically adjusts the non-volume audio property as the magnitude of the input changes, prior to presentation of the second audio output, until the magnitude of the input meets a first predetermined threshold (820) (e.g., the characteristic intensity of the contact in the input remains below the preview threshold but above the cue threshold, whereby the device provides an audible and/or visual cue that the device is about to preview the second audio output, as described with reference to FIG. 6B). Once the magnitude of the input meets the first predetermined threshold (e.g., exceeds a slight swipe movement, FIG. 6C), the second audio output begins playing, and in some embodiments, the first audio output is further adjusted as the magnitude of the input changes. Dynamically adjusting the non-volume audio property as the magnitude of the input changes, before presentation of the second audio output, provides a cue to the user that audio blurring is imminent, which may be useful, for example, to alert a user who accidentally discovers the device input functionality described in method 800.
In response to receiving an input corresponding to a request to present a second audio output, the device provides second audio information to the audio system to present the second audio output concurrently with the first audio output (822). In some embodiments, the device provides information to dynamically adjust the presentation of the second audio output according to the magnitude of the input (824). In some implementations, presentation of the second audio output is performed while dynamically adjusting presentation of the first audio output according to the magnitude of the input. In this way, the first audio output and the second audio output are simultaneously dynamically blurred (e.g., the first audio output gradually fades out and the second audio output gradually fades in). Thus, dynamically adjusting the presentation of the second audio output according to the magnitude of the input provides a seamless transition from the prominence of the first audio output to the prominence of the second audio output, which makes the blurring of the audio more pleasing and less alarming to a user who accidentally finds the device input function described in method 800. This in turn increases the likelihood that the user will remain on his device to implement such functionality, and thus conserves battery power by increasing the efficiency of the user's interaction with the device.
In some embodiments, in response to receiving an input corresponding to a request to present a second audio output, the device provides data to the display to display a visual effect that changes in conjunction with dynamically adjusting the non-volume audio attribute (826). For example, the device provides data to the display to visually obscure anything in the user interface other than the graphical user interface object corresponding to the second audio output, wherein the blur radius is proportional to the magnitude of the input. The display of visual effects that vary in conjunction with dynamically adjusting non-volume audio properties provides intuitive visual cues corresponding to the varying audio that facilitate user interaction with their device.
In some embodiments, the device detects that the magnitude of the input meets a second predetermined threshold that is greater than the first predetermined threshold and, in response, causes the audio system to cease presenting the first audio output (and continue presenting the second audio output) (828). In some embodiments, the second predetermined threshold (the "pop-up" threshold) corresponds to a higher intensity of contact for the input than the first predetermined threshold (e.g., the pop-up threshold is ITD in FIGS. 6I-6O), a greater distance traveled by the input, or a greater amount of time the input remains in contact with the touch-sensitive surface. Thus, having the audio system cease presenting the first audio output provides a way for the user to switch to "full presentation" of the second audio output using the same input through which the user previewed the second audio output. This also reduces the amount of user interaction required to implement a particular function.
In some embodiments, the device detects the end of the input (830) (e.g., as shown in fig. 6E-6H). In some alternative embodiments, instead of detecting the end of the input, the device detects that a change in the magnitude of the input has ceased (e.g., the intensity of the contact returns to its original intensity or the input returns to its original location). In some alternative embodiments, instead of detecting the end of the input, the device detects an increase in the magnitude of the input that exceeds a predetermined threshold (e.g., a pop-up threshold). In some alternative embodiments, instead of detecting the end of the input, the device detects a predetermined change in the magnitude of the input. For example, the magnitude of the input falls below a predetermined threshold (e.g., the magnitude of the input falls below a pop-up threshold while the input remains in contact with the touch-sensitive surface). In some embodiments, instead of detecting the end of the input, the device detects that the change in the input has ceased for a predetermined period of time.
In any case, in response to an appropriate condition, the device causes the audio system to perform one of the following operations:
Cease presenting the second audio output and present the first audio output without dynamic adjustment of the first audio output (832) (e.g., undoing the dynamic adjustment of the first audio output). In some embodiments, the playing of the first audio output is adjusted, rather than paused, while the second audio output is played, and the first audio output continues to play, with the adjustment removed, after the second audio output stops playing. This provides an intuitive way for the user to return to listening to only the first audio output, thereby reducing the number of user inputs. For example, in FIG. 6H, the device stops presenting the second audio output and presents the first audio output without dynamic adjustment.
Cease presenting the first audio output and continue presenting the second audio output (834). This provides an intuitive way for the user to switch to listening to only the second audio output, thereby reducing the number of user inputs. For example, in FIG. 6H, the device stops presenting the first audio output and presents the second audio output without dynamic adjustment.
Continue presenting the adjusted first audio output and continue presenting the second audio output (836). This provides the user with an intuitive way to continue blurring the audio; or
In accordance with a determination that the magnitude satisfies a predetermined threshold (e.g., exceeded the pop-up threshold during the input), cease presenting the first audio output and continue presenting the second audio output; and in accordance with a determination that the magnitude does not satisfy the threshold (e.g., the input remained below the pop-up threshold during the input), cease presenting the second audio output and present the first audio output without dynamic adjustment (e.g., reversing the dynamic adjustment of the first audio output) (838). In this way, the system automatically determines whether the user actually wants to continue listening to the first audio output or wants to switch to the second audio output, thereby providing the user with an intuitive way to achieve either result and reducing the number of user inputs. For example, in FIGS. 6E-6H, the device determines whether to stop presenting the second audio output and present the first audio output without dynamic adjustment in accordance with a determination of whether the magnitude of the user input satisfies the threshold (a sketch of this determination follows these operations).
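Operations 830-838 resolve what happens to the two audio outputs when the input ends, based on whether the magnitude crossed the pop-up threshold during the input. The Swift sketch below illustrates that determination under an assumed threshold value and hypothetical names.

```swift
import Foundation

// A sketch of resolving the audio state when the input ends (operations 830-838).
// The threshold value is an assumption.
let popUpThreshold: Double = 0.8

enum AudioResolution {
    case keepSecondOutput              // the first audio output stops
    case restoreFirstOutputUnadjusted  // the second audio output stops
}

/// `peakMagnitude` is the largest input magnitude observed during the input.
func resolveOnInputEnd(peakMagnitude: Double) -> AudioResolution {
    return peakMagnitude >= popUpThreshold ? .keepSecondOutput
                                           : .restoreFirstOutputUnadjusted
}
```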
It should be appreciated that the particular order in which the operations have been described in FIGS. 8A-8B is merely an example and is not intended to suggest that the described order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In some embodiments, one or more operations of method 800 are combined with, supplemented with, or replaced by one or more operations of other methods described herein (e.g., methods 840, 854, 875, and/or 900).
Fig. 8C is a flow diagram illustrating a method 840 of dynamically adjusting presentation of audio output, according to some embodiments. In some embodiments, one or more operations of method 840 are combined with, supplemented with, or replaced by one or more operations of other methods described herein (e.g., method 800). Moreover, many of the operations described with reference to method 840 share the same benefits (e.g., intuitive audio blurring, a reduced number of user inputs that the user must make) as the operations described above with reference to method 800. Thus, method 840 also enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide correct input and reducing user error when operating/interacting with the device), which in addition reduces power usage and improves the battery life of the device by enabling the user to use the device faster and more efficiently.
In some embodiments, method 840 is performed at an electronic device with memory and one or more processors, and the electronic device is in communication with a display and an audio system.
In some embodiments, the method 840 begins when the device provides data to a display to present a user interface including a media object representing at least one media content item (842).
The device provides first sound information to the audio system to present a first audio output that does not correspond to the media object (844).
While providing the first sound information to the audio system, the device detects an input directed to a first portion of a media object (846).
In response to detecting the input pointing to the first portion of the media object, the device: (i) initiating provision of second audio information to the audio system to present a second audio output corresponding to the media object; and (ii) continuing to provide the first sound information to the audio system to present a first audio output that does not correspond to the media object (848).
While providing the first and second sound information to the audio system, the device detects a second portion of the input that is directed at the media object, wherein detecting the second portion of the input includes detecting a change in a parameter of the input (850) (e.g., detecting a change in intensity of a contact with the touch-sensitive surface or detecting a change in position or movement of the focus selector when the focus selector is over the media object).
In response to detecting a change in the parameter of the input, the device:
(i) providing (852) data to the display to dynamically alter the presented user interface (e.g., by visually blurring the user interface background and/or enlarging the media object) in accordance with changes in the input parameters;
(ii) Providing information to the audio system to dynamically alter a first audio output that does not correspond to the media object in accordance with changes in the parameters of the input (e.g., by decreasing volume, increasing reverberation time, decreasing low-pass filter cut-off frequency, and/or moving the L-R balance of the audio to the right or left); and
(iii) providing information to the audio system to dynamically alter the second audio output that corresponds to the media object in accordance with changes in the parameter of the input (852) (e.g., by decreasing volume, increasing reverberation time, decreasing low-pass filter cut-off frequency, and/or moving the L-R balance of the audio to the right or left).
It should be understood that the particular order in which the operations have been described in FIG. 8C is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In some embodiments, one or more operations of method 840 are combined with, supplemented with, or replaced by one or more operations of other methods described herein (e.g., methods 800, 854, 875, and/or 900).
Figures 8D-8F are flow diagrams illustrating a method 854 of dynamically adjusting presentation of audio output, according to some embodiments. Fig. 6P-6Y are used to illustrate the methods and/or processes of fig. 8D-8F. Although some of the subsequent embodiments will be given with reference to input on a touch-sensitive display (where the touch-sensitive surface and the display are combined), in some embodiments the device detects input on a touch-sensitive surface 451 separate from the display 450, as shown in fig. 4B.
In some embodiments, method 854 is performed by an electronic device (e.g., portable multifunction device 100, fig. 1A) and/or one or more components of the electronic device (e.g., I/O subsystem 106, operating system 126, etc.). In some embodiments, the method 854 is governed by instructions stored in a non-transitory computer readable storage medium and executed by one or more processors of the device, such as one or more processors 122 of device 100 (fig. 1A). For ease of explanation, the following describes the method 854 performed by the device 100. In some embodiments, referring to FIG. 1A, the operations of method 854 are performed at least in part by or used at least in part by audio preview module 163-1, audio modification module 163-2, and a touch-sensitive display (e.g., touch screen 112). Some operations in method 854 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 854 (and associated interface) reduces the number, scope, and/or nature of inputs from a user and produces a more efficient human-machine interface, thereby providing an easy-to-use and intuitive way for a user to interact with a user interface. For battery-operated electronic devices, the method 854 allows for efficient, seamless, and fast interaction by providing an easily understandable and informative audio output that conserves power and increases the time interval between battery charges (e.g., by reducing the need for intensive and inefficient user interaction that depletes battery power).
In some embodiments, the method 854 begins when the device displays a user interface (855) on a display that includes representations of media items. For example, fig. 6Q illustrates a user interface 640 that includes representations of media items 658.
While displaying the user interface, the device detects (856) an input caused by a contact (e.g., a finger contact or a stylus contact) at a location on the touch-sensitive surface that corresponds to a representation of a media item (e.g., a contact is detected on a representation of a media item on the touch-sensitive display). For example, FIG. 6R illustrates the beginning of a user input 660, which is a press and hold gesture detected on representation 658-2.
In response to detecting the input caused by the contact: in accordance with a determination that the input satisfies media-prompting criteria, wherein the media-prompting criteria include a criterion that is met when the contact has a characteristic intensity above a first intensity threshold (e.g., ITH, FIG. 6R): the device begins playing the corresponding portion of the media item (e.g., a beginning portion, or a representative portion selected to contain an identifiable portion of the media item) (e.g., playing its audio and/or video) (857). For example, in FIG. 6R, device 100 starts playing Brahms' Alto Rhapsody. Also, as the media item is played, the device dynamically changes a set of one or more audio attributes as the characteristic intensity of the contact changes (e.g., the audio attribute of the media item changes through multiple values as the characteristic intensity of the contact changes through multiple values). In some embodiments, the volume of the media item increases as the characteristic intensity of the contact increases and decreases as the characteristic intensity of the contact decreases. For example, as the intensity of the input 660 increases from FIG. 6R to FIG. 6S, the volume of Brahms' Alto Rhapsody increases.
In some implementations, the media item played is determined based on the location of the contact in the input (e.g., in accordance with a determination that contact is detected at a location corresponding to a first media item, the above steps are performed for the first media item, and in accordance with a determination that contact is detected at a location corresponding to a second media item, the above steps are performed for the second media item, but not for the first media item). For example, if the input 660 in FIG. 6R is detected on representation 658-1 instead of representation 658-2, device 100 will begin playing the media item corresponding to representation 658-1.
In accordance with a determination that the input does not satisfy the media-cue criteria, the device forgoes beginning to play the respective portion of the media item and forgoes dynamically changing the set of one or more audio attributes of the media item as the characteristic intensity of the contact changes. In some embodiments, the device performs alternative operations with respect to the media item. For example, in response to a tap gesture on the representation of the media item, the device performs a selection operation with respect to the media item.
In some embodiments, the audio attributes of the media item include volume, cut-off frequency of a low-pass filter, and/or equalizer settings (858) (e.g., fig. 6P-6Y show one example in which the device dynamically changes the volume of the media item as the characteristic intensity of the contact changes).
In some embodiments, as the set of one or more audio properties of the media item dynamically changes, the device dynamically changes the visual appearance of the user interface (859). In some implementations, dynamically changing the visual appearance of the user interface includes increasing a size of the representation of the media item as the characteristic intensity of the contact increases (860) (e.g., increasing the size of the representation of the media item as the characteristic intensity of the contact increases and decreasing the size of the representation of the media item as the characteristic intensity of the contact decreases). For example, as the intensity of the input 660 increases from FIG. 6R to FIG. 6S, the volume of the media item and the size of the representation 658-2 both increase. In some embodiments, this is the same representation that was initially displayed in the user interface (as shown in FIGS. 6R-6T, representation 658-2 "grows" from its original size and position, moving toward the center portion of the second region of the user interface while also increasing in size). In other embodiments, a second representation of the media item, different from the initially displayed representation, is used: the initial representation remains in place in the second region of the user interface, while the second, different representation of the media item increases in size and moves toward a center portion of the second region of the user interface as the characteristic intensity of the contact increases (e.g., the different representation is displayed on top of the initial representation).
In some embodiments, dynamically changing the visual appearance of the user interface includes dynamically changing an amount of blurring of a portion of the user interface proximate to the representation of the media item as the characteristic intensity of the contact changes (861). For example, as the intensity of the input 660 increases from FIG. 6R to FIG. 6S, the interaction region 642-2 is blurred with a blur radius proportional to the intensity of the contact 660 (in these figures, the increasing blur radius is schematically represented by the decreasing transparency of the pattern covering the interaction region 642-2).
In some embodiments, after the respective portion of the media item begins to be played: the device detects an increase in the characteristic intensity of the contact followed by a decrease in the characteristic intensity of the contact (862) (e.g., FIG. 6T shows an increase in the intensity of contact 660; FIG. 6U shows a decrease in the intensity of contact 660). A number of conditional operations are described below that depend on a determination of whether the characteristic intensity of the contact satisfies media preview criteria. These operations are performed in response to detecting the increase in the characteristic intensity of the contact followed by the decrease in the characteristic intensity of the contact. In some embodiments, the conditional operations described below are performed when an increase in the characteristic intensity of the contact is detected, followed by a decrease in the characteristic intensity of the contact.
To this end, while the media item is playing, in accordance with a determination that the characteristic intensity of the contact satisfies media preview criteria, the media preview criteria including a criterion that is met when, before the decrease in the characteristic intensity of the contact is detected, the characteristic intensity of the contact increases above a second intensity threshold (e.g., ITL, FIGS. 6T-6U) that is higher than the first intensity threshold (e.g., ITH): the device changes the audio property of the media item in a first manner as the characteristic intensity of the contact increases (e.g., increases the audio volume of the media item) and maintains the audio property in a first state as the characteristic intensity of the contact decreases (e.g., continues playing back the media item at the set audio volume). In some implementations, as the audio properties of the media item change, the audio properties of the background audio also change (e.g., the background audio fades in as the intensity decreases and fades out as the intensity increases). Having the audio attribute remain in the first state (e.g., continuing to play back the media item at the set audio volume) as the characteristic intensity of the contact decreases allows the user to enter a "peek" mode in which the audio attribute of the media item is fixed, which is convenient for a user wishing to listen to a portion of the media item without continuously modifying the audio attribute. Since the audio property is fixed only after the contact intensity increases above the second intensity threshold, the user has the flexibility to select the mode of operation he or she wants. This reduces the number of user inputs required to implement the desired device functionality, thus enhancing the operability of the device and making the user-device interface more efficient (e.g., by helping the user provide correct inputs and reducing user errors when operating/interacting with the device), which in addition reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
For example, because the intensity of contact 660 is higher than ITL in FIG. 6T, the volume of the media item is "locked" to the preview volume (e.g., full volume), such that the intensity of contact 660 subsequently dropping below ITL, as shown in FIG. 6U, causes no change in volume. In accordance with a determination that the characteristic intensity of the contact does not satisfy the media preview criteria (i.e., the characteristic intensity of the contact does not increase above the second intensity threshold), while the media item is playing (and while the contact remains on the touch-sensitive surface), the device changes the audio property of the media item in the first manner as the characteristic intensity of the contact increases (e.g., increases the audio volume of the media item) and changes the audio property in a second manner as the characteristic intensity of the contact decreases (e.g., decreases the audio volume). For example, if the sequence shown in FIGS. 6R-6S were reversed (i.e., going from FIG. 6S to FIG. 6R), the volume of the media item would decrease as the intensity decreases. As shown in FIGS. 6R-6S, the user may ramp the volume of the media item up and down by increasing and decreasing the intensity of contact 660, so long as the intensity of contact 660 does not exceed ITL. In some embodiments, the respective portion of the media item plays continuously as the characteristic intensity of the contact changes, with the set of one or more audio parameters changing as the media item plays.
In some embodiments, in response to detecting that the input meets the media preview criteria, the device displays (863) an indication on the display that an increased characteristic intensity of the contact will cause the device to perform a selection operation with respect to the media item (e.g., "press harder to select" or "3D TOUCH TO SHARE").
In some embodiments, after beginning to play the respective portion of the media item, the device detects an increase in the characteristic intensity of the contact while the contact remains on the touch-sensitive surface (864). In response to detecting the increase in the characteristic intensity of the contact: in accordance with a determination that the characteristic intensity of the contact satisfies media selection criteria, including a criterion that is satisfied when the characteristic intensity of the contact is greater than a selection intensity threshold that is higher than the first intensity threshold (and also higher than the second intensity threshold, operation 862), the device ceases playing the respective portion of the media item and performs a selection operation with respect to the media item (e.g., placing the media item in a message composition area or sharing the media item with another user in the instant messaging session). For example, in FIGS. 6V-6W, the intensity of contact 660 exceeds ITD, so representation 658-2 is selected and the corresponding audio message 644-4 is placed in shelf 650. In accordance with a determination that the characteristic intensity of the contact does not satisfy the media selection criteria, the device continues to play the respective portion of the media item without performing a selection operation.
In some implementations, while the media item is playing, the device detects the end of the input (865) (e.g., detects liftoff of the contact that caused the media item to begin playing). In response to detecting the end of the input, the device stops playing the media item. In some implementations, stopping playing the media item in response to detecting the end of the input includes reversing the visual blurring of content adjacent to the representation of the media item and changing the set of one or more audio attributes of the media item to gradually fade out the played media item (866). For example, if the input 660 had terminated in any of FIGS. 6R-6U, the device 100 would stop playing the media item and reverse the blurring of the interaction region 642-2.
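Operations 865-866 describe stopping playback on liftoff while fading out the audio and reversing the visual blur. The Swift sketch below shows one way such behavior could be approximated with standard UIKit and AVFoundation calls; the durations, and the use of a UIVisualEffectView for the blur, are assumptions.

```swift
import UIKit
import AVFoundation

/// A sketch of operation 866: on liftoff, fade out the previewed media item and
/// reverse the blur over the surrounding content. Durations are assumptions.
func endPreview(player: AVAudioPlayer, blurView: UIVisualEffectView) {
    // Gradually fade out the audio of the previewed media item, then stop it.
    player.setVolume(0, fadeDuration: 0.3)
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.3) {
        player.stop()
    }

    // Reverse the visual blur applied near the media item's representation.
    UIView.animate(withDuration: 0.3) {
        blurView.effect = nil
    }
}
```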
In some embodiments, after stopping playing the media item in response to detecting the end of the input, the device detects a selection input caused by a second contact on the touch-sensitive surface at a location corresponding to a representation of the media item (e.g., a tap gesture), wherein the second contact does not have a characteristic intensity that reaches a first intensity threshold (867). In response to detecting the selection input, the device performs a selection operation with respect to the media item (e.g., placing the media item in a message composition area such as shelf 650 (fig. 6P) or sharing the media item with another user in the instant messaging session).
Operations 868-870 describe audio blurring (i.e., blurring a media item with background media that is already playing on the device). Methods 800 and 840 describe audio blurring in detail. Thus, operations 868-870 may share any of the features described in those methods.
In some implementations, the input (e.g., the input detected in operation 856) is detected while background media is playing on the device (868). In accordance with a determination that the input satisfies the media-prompting criteria: as the media item is played, the device dynamically changes a set of one or more audio attributes of the background media item as the characteristic intensity of the contact changes (869). Thus, in some embodiments, the device "fades in" the media item over the background media, which provides all of the attendant benefits described with reference to methods 800 and 840.
In accordance with a determination that the input does not satisfy the media cue criteria, the device continues to play the background media without changing the set of one or more audio attributes of the background media. In some embodiments, the device detects the end of the input when the media item is played (870). In response to detecting the end of the input, the device resumes playing the background media item, wherein the set of one or more audio attributes return to their values prior to detecting the input.
It should be appreciated that the particular order in which the operations have been described in fig. 8D-8F is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In some embodiments, one or more operations of method 854 are combined with, supplemented by, or replaced by one or more operations of other methods described herein (e.g., methods 800, 840, 875, and/or 900).
Fig. 8G-8H are flow diagrams illustrating a method 875 of obscuring portions of a graphical user interface, according to some embodiments. FIGS. 6P-6Y are used to illustrate the methods and/or processes of FIGS. 8G-8H. Although some of the examples that follow will be given with reference to input on a touch-sensitive display (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface 451 that is separate from the display 450, as shown in fig. 4B.
In some embodiments, method 875 is performed by an electronic device (e.g., portable multifunction device 100, fig. 1A) and/or one or more components of the electronic device (e.g., I/O subsystem 106, operating system 126, etc.). In some embodiments, method 875 is governed by instructions stored in a non-transitory computer readable storage medium and executed by one or more processors of a device, such as one or more processors 122 of device 100 (fig. 1A). For ease of explanation, the method 875 performed by the apparatus 100 is described below.
As described below, method 875 (and associated interface) reduces the number, scope, and/or nature of inputs from a user and results in a more efficient human-machine interface, providing an easy-to-use and intuitive way for a user to interact with a user interface. For battery-operated electronic devices, method 875 allows for efficient, seamless, and fast interaction by providing an easily understandable and informative audio output that conserves power and increases the time interval between battery charges (e.g., by reducing the need for intensive and inefficient user interaction that depletes battery power).
In some embodiments, the method 875 begins when the device displays a user interface (876) on a display that includes a first interaction region of an application (e.g., interaction region 642-2, FIG. 6Q) and a second interaction region of the application (e.g., interaction region 642-1, FIG. 6Q). In some embodiments, an interaction region is a user interface region with which a user can interact, such as by providing any of a variety of input gestures (swipe, tap, etc.) that, when detected, cause the electronic device to modify the user interface in accordance with the gestures. In other words, an interaction region is not, for example, a status bar displayed at the top portion of the display through which the user cannot perform any interaction.
In some embodiments, the second interactive area is a user interface (877) for a host application (e.g., an instant messaging application). The first interaction region is configured to display content from a different mini-application configured to operate within the host application, and the mini-application displayed in the first interaction region is selected based on user input at the device (e.g., a user may perform swipe gestures in the first interaction region that, when detected, cause the device to display a user interface for the different mini-application in the first interaction region, as described with reference to indicator 654 (fig. 6Q)). For example, the instant messaging application may display a mini-application that interacts with device media (e.g., music, video), device camera, word art, and so forth.
In some embodiments, the second interaction region is a conversation region (878) that includes a plurality of messages in a conversation between conversation participants (e.g., the second interaction region is a conversation region, such as interaction region 642-1 (FIG. 6Q), within an instant messaging application, such as a conversation transcript with messages between a user of the electronic device and at least one other user). The first interaction region is a media selection region for selecting media for sharing in the conversation between the conversation participants (e.g., a music sharing region, such as interaction region 642-2 (FIG. 6Q), within the instant messaging application that includes a scrollable list displaying representations of the 30 most recently played songs on the device).
While displaying the user interface, the device detects a first input (879) on the display caused by a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface at a location corresponding to the first user interface element in the first interaction region. For example, FIG. 6R illustrates the beginning of a user input 660, which is a press and hold gesture detected on representation 658-2.
In response to detecting the first input caused by the contact: in accordance with a determination that the first input satisfies intensity-based activation criteria, wherein the intensity-based activation criteria require that the contact have a characteristic intensity above a first intensity threshold (e.g., ITH, FIG. 6R) in order to satisfy the intensity-based activation criteria: the device obscures the first interaction region of the application, other than the first user interface element, without obscuring the second interaction region of the application (e.g., obscuring interaction region 642-2 without obscuring representation 658-2, FIGS. 6R-6V) (880). Obscuring the first interaction region of the application, other than the first user interface element, without obscuring the second interaction region of the application indicates to the user that he or she is enabling intensity-based device functionality within, for example, a mini-application. This reduces the chance that the user will cause the device to perform unnecessary operations, which enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide correct input and reducing user error when operating/interacting with the device), which in addition reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In accordance with a determination that the first input satisfies first selection criteria, wherein the first selection criteria do not require that the contact have a characteristic intensity above the first intensity threshold in order to satisfy the selection criteria (e.g., the first input is a tap gesture), the device performs a first selection operation corresponding to the first user interface element without obscuring the first interaction region of the application. In some implementations, the first selection operation, and the element excluded from the obscuring of the interaction region, are determined based on the location of the contact (e.g., in accordance with a determination that the contact is detected at a location corresponding to a first media item, the above steps are performed for the first media item, and in accordance with a determination that the contact is detected at a location corresponding to a second media item, the above steps are performed for the second media item, but not for the first media item). For example, if input 660 in FIG. 6R were detected on representation 658-1 rather than representation 658-2, device 100 would obscure everything in interaction region 642-2 other than representation 658-1, instead of everything in interaction region 642-2 other than representation 658-2.
In some implementations, the first selection criterion is satisfied when the first input is a tap gesture (881).
In some embodiments, obscuring the first interaction region of the application includes dynamically obscuring the first interaction region of the application as the characteristic intensity of the contact changes (882) (e.g., as the characteristic intensity of the contact changes by a plurality of values, a blur radius of a blur applied to the first interaction region changes by a plurality of values). In some embodiments, the blur radius of the blur of the first interaction region increases with increasing characteristic intensity of the contact and decreases with decreasing characteristic intensity of the contact. For example, as the intensity of the input 660 increases from FIG. 6R to FIG. 6S, the interaction region 642-2 blurs with a blur radius proportional to the intensity of the contact 660 (in these figures, the increasing blur radius is schematically represented by the decreasing transparency of the pattern covering the interaction region 642-2).
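Operation 882 ties the amount of blurring of the first interaction region to the characteristic intensity of the contact while the pressed element stays sharp. The Swift sketch below illustrates one common way to drive such an interactive blur with a paused UIViewPropertyAnimator; the class structure and names are hypothetical, and it assumes the pressed element is a direct subview of the region.

```swift
import UIKit

// A sketch of operation 882: drive the blur of the mini-application region from
// the contact's characteristic intensity, keeping the pressed element sharp.
// Names are hypothetical; assumes `pressedElement` is a direct subview of `region`.
final class RegionBlurController {
    private let blurView = UIVisualEffectView(effect: nil)
    private var animator: UIViewPropertyAnimator?

    /// Installs the blur view over `region`, below `pressedElement`, so that the
    /// pressed element stays unblurred.
    func install(over region: UIView, below pressedElement: UIView) {
        blurView.frame = region.bounds
        region.insertSubview(blurView, belowSubview: pressedElement)
        let animator = UIViewPropertyAnimator(duration: 1, curve: .linear) {
            self.blurView.effect = UIBlurEffect(style: .light)
        }
        animator.pausesOnCompletion = true
        animator.pauseAnimation()   // keep the animator paused so it can be scrubbed
        self.animator = animator
    }

    /// `normalizedIntensity` is the characteristic intensity mapped to [0, 1];
    /// increasing it increases the blur, and decreasing it reverses the blur.
    func update(normalizedIntensity: CGFloat) {
        animator?.fractionComplete = min(max(normalizedIntensity, 0), 1)
    }

    /// Removes the blur, e.g., when the contact lifts off or the item is selected.
    func remove() {
        animator?.stopAnimation(true)
        blurView.removeFromSuperview()
    }
}
```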
In some embodiments, while displaying the user interface, the device detects a second input caused by a second contact on the touch-sensitive surface at a location on the display that corresponds to a second user interface element in the second interaction region (883). In response to detecting a second input caused by a second contact: in accordance with a determination that the second input satisfies second intensity-based activation criteria that require the second contact to have a characteristic intensity above a second intensity threshold in order to satisfy the second intensity-based activation criteria, the device hides the first interaction region of the application and the second interaction region of the application except for the second user interface element (e.g., the device fades out content of the first interaction region and the second interaction region while previewing the media item corresponding to the second user interface element that is not obscured). In some embodiments, the second intensity threshold is the same as the first intensity threshold. In some embodiments, the second intensity threshold is different from the first intensity threshold. The first interaction region of the application and the second interaction region of the application are obscured in addition to the second user interface element, indicating to the user that he or she is enabling intensity-based device functionality relative to the second user interface element, for example, within the host application. This reduces the chance that the user will cause the device to perform unnecessary operations, which enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide correct input and reduce user error when operating/interacting with the device), which in addition reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
For example, as shown in FIG. 6Y, contact 674 has a characteristic intensity above ITH, and thus both interaction regions 642-1 and 642-2 are obscured. Thus, while a press input above ITH on a representation of a media item in the mini-application blurs only the corresponding interaction region, a similar press input on a representation of a media item in an interaction region corresponding to the host application (e.g., a conversation region of an instant messaging application) results in blurring of the entire user interface, including both interaction regions 642.
In accordance with a determination that the second input satisfies second selection criteria, wherein the second selection criteria do not require that the second contact have a characteristic intensity above a second intensity threshold in order to satisfy the second selection criteria (e.g., the second input is a tap gesture), the device performs a second selection operation corresponding to the second user interface element without obscuring the second interaction region of the application and without obscuring the first interaction region of the application (e.g., the device plays back a media item corresponding to the second user interface element). For example, a tap gesture on any of representations 658 in FIG. 6Q places the corresponding media item in shelf 650 (or, according to some embodiments, sends the media item directly to the conversation).
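A hedged sketch of the tap-versus-press dispatch described above, including the region-dependent obscuring, is shown below. All type, function, and parameter names here are hypothetical placeholders, not interfaces from this disclosure.

```swift
// Hypothetical sketch: decide how to respond to an input on a media item
// representation, based on whether the intensity-based activation criteria
// are met and on which interaction region contains the pressed item.
enum Response {
    case select(item: String)                               // tap: selection, nothing obscured
    case obscureAndPreview(item: String, obscuredRegions: [String])
}

func respond(toItem item: String, inRegion region: String,
             characteristicIntensity: Double, intensityThreshold: Double,
             miniAppRegion: String, hostRegion: String) -> Response {
    guard characteristicIntensity > intensityThreshold else {
        // Selection criteria that do not require intensity (e.g., a tap gesture).
        return .select(item: item)
    }
    // A press in the mini-application obscures only that region; a press in the
    // host application's region (e.g., a conversation region) obscures both.
    let regions: [String]
    if region == miniAppRegion {
        regions = [miniAppRegion]
    } else {
        regions = [miniAppRegion, hostRegion]
    }
    return .obscureAndPreview(item: item, obscuredRegions: regions)
}
```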
In some embodiments, after the first user interface element is added to the second interaction region based on the input in the first interaction region, the second user interface element corresponds to the first user interface element (884) (e.g., the second user interface element corresponds to a media item selected from the first interaction region and added to a conversation displayed in the second interaction region). For example, in FIGS. 6Q-6Y, both the audio message 644-4 and the representation 658-2 correspond to the same media item (e.g., Brahms's Alto Rhapsody).
In some embodiments, while the first interaction region of the application, except for the first user interface element, is obscured (without the second interaction region of the application being obscured), the device detects an increase in the characteristic intensity of the contact. In response to detecting the increase in the characteristic intensity of the contact: in accordance with a determination that the characteristic intensity of the contact satisfies a third selection criterion, which includes a criterion that is met when the characteristic intensity of the contact is greater than a selection intensity threshold (e.g., ITD, FIG. 6V) that is higher than the first intensity threshold, the device performs a third selection operation on the first user interface element and stops obscuring the first interaction region of the application. Thus, because contact 660 exceeds ITD in FIG. 6V, representation 658-2 pops up into shelf 650 (where the representation is relabeled as audio message 644-4). Performing a selection operation on the first user interface element and ceasing to obscure the first interaction region of the application in accordance with a determination that the characteristic intensity of the contact meets the third selection criterion allows the user to select the first user interface element once the user has initiated the intensity-based operation described above, thereby eliminating the need for additional user input to select the first user interface element and making the human-machine interface more efficient.
In accordance with a determination that the characteristic intensity of the contact does not satisfy the third selection criterion, the device continues to obscure the first interaction region of the application, except for the first user interface element, without obscuring the second interaction region of the application.
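The escalation from the hint threshold to the higher selection threshold can be pictured as a small state ladder; the sketch below uses assumed state and threshold names purely for illustration and is not the disclosure's own terminology.

```swift
// Illustrative state ladder: exceeding the hint threshold obscures the
// surrounding region; exceeding the higher selection threshold performs the
// selection operation and stops obscuring. Names are assumptions.
enum PressState { case idle, hinting, committed }

func advance(_ state: PressState, intensity: Double,
             hintThreshold: Double, selectionThreshold: Double) -> PressState {
    switch state {
    case .idle:
        return intensity > hintThreshold ? .hinting : .idle
    case .hinting:
        // e.g., the pressed item "pops" into the shelf and the region un-blurs
        return intensity > selectionThreshold ? .committed : .hinting
    case .committed:
        return .committed
    }
}
```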
It should be appreciated that the particular order in which operations have been described in fig. 8G-8H is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In some embodiments, one or more operations of method 875 are combined with, supplemented with, or replaced by one or more operations of other methods described herein (e.g., methods 840, 854, and/or 900).
Fig. 9A-9C are flow diagrams illustrating a method 900 of dynamically adjusting an audio output presentation, according to some embodiments. Fig. 7A-7G are used to illustrate the method and/or process of fig. 9A-9C. Although some of the examples that follow will be given with reference to input on a touch-sensitive display (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface 451 that is separate from the display 450, as shown in fig. 4B.
In some embodiments, method 900 is performed by an electronic device (e.g., portable multifunction device 100, fig. 1A) and/or one or more components of the electronic device (e.g., I/O subsystem 106, operating system 126, etc.). In some embodiments, method 900 is governed by instructions stored in a non-transitory computer readable storage medium and executed by one or more processors of a device, such as one or more processors 122 of device 100 (fig. 1A). For ease of explanation, the following describes method 900 as performed by device 100. In some embodiments, referring to fig. 1A, the operations of method 900 are performed at least in part by or used at least in part by audio profile 402, audio alteration module 163-3, and a touch-sensitive display (e.g., touch-sensitive display system 112). Some operations in method 900 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 900 (and associated interface) reduces the number, scope, and/or nature of inputs from a user and produces a more efficient human-machine interface, thereby providing an easy-to-use and intuitive way for a user to interact with a user interface. For battery-operated electronic devices, the method 900 allows for efficient, seamless, and fast interaction by providing an easily understandable and informative audio output that conserves power and increases the time interval between battery charges (e.g., by reducing the need for intensive and inefficient user interaction that depletes battery power).
In some embodiments, method 900 (902) is performed on an electronic device in communication with a display and an audio system while displaying, on the display of the electronic device, a user interface that includes a set of one or more affordances (e.g., the displayed user interface includes a virtual numeric keyboard and the set of one or more affordances includes numeric keys displayed on the virtual numeric keyboard, figs. 7A-7G). The method 900 begins with the device detecting a first input directed to a first affordance in the set of one or more affordances at a first point in time (e.g., detecting a tap on a first key in a virtual keyboard, such as a tap on the "5" key in fig. 7B) (904).
In response to detecting the first input directed to the first affordance, the device begins providing first sound information to the audio system to present a first audio output corresponding to the first affordance (906). The first audio output has a first audio profile (e.g., audio profile 714-1, FIGS. 7B-7G).
In some embodiments, the respective audio profile includes information that governs one or more attributes of the corresponding audio output over time. For example, the respective audio profile dictates, over time, the pitch, tone, timbre, reverberation, and/or volume (e.g., decay) of the corresponding audio output. In some embodiments, the respective audio profile further includes information that governs one or more static properties of the corresponding audio output. For example, the corresponding audio profile includes information for producing an audio output having a fixed pitch of C-sharp with a non-static volume that increases over time and then decays.
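One way to picture an audio profile is as a small value type combining static attributes with a time-varying volume envelope. The field names, the envelope shape, and the numeric values below are assumptions added for illustration only.

```swift
// Illustrative model of an audio profile: static attributes plus a
// time-varying volume envelope. Field names and values are assumptions.
struct AudioProfile {
    let pitch: Double                        // fixed fundamental frequency, in Hz
    let timbre: String                       // static attribute
    let volumeEnvelope: (Double) -> Double   // volume as a function of seconds elapsed
}

// Example: a fixed C-sharp pitch with a volume that rises and then decays.
let keyClick = AudioProfile(
    pitch: 554.37,   // approximately C#5
    timbre: "sine",
    volumeEnvelope: { t in t < 0.05 ? t / 0.05 : max(0, 1.0 - (t - 0.05) * 4) }
)
print(keyClick.volumeEnvelope(0.05))   // 1.0 at the peak, decaying afterwards
```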
In some embodiments, while the audio system is presenting the first audio output, the device causes (908) the display to present a visual effect corresponding to the first audio output (e.g., one or more graphics extending outward or away from the first affordance, such as rings in a ripple effect, where the rings extend outward or away from a location on the display corresponding to the first input) (e.g., graphics 718, 720, figs. 7C-7D). In some embodiments, the graphics include an animation of one or more graphical objects moving away from the first affordance. Presenting a visual effect corresponding to the first audio output provides the user with an intuitive visual cue corresponding to the audio output, which facilitates the user's interaction with the device.
The device detects a second input directed to a second affordance in the set of one or more affordances at a second point in time after the first point in time (e.g., detects a tap on a second key in the virtual keyboard, such as a tap on the "2" key in fig. 7C) (910). In some embodiments, the second affordance and the first affordance are the same affordance (e.g., the same key on a displayed keyboard within a displayed user interface) (912). In some embodiments, the first affordance is different from the second affordance (e.g., a different key on a displayed keyboard within a displayed user interface) (914).
In response to detecting a second input directed to a second affordance and in accordance with a determination that audio alteration criteria are met, the device (916):
(i) causing the audio system to present the altered first audio output corresponding to the first affordance, without continuing to present the first audio output with the first audio profile, wherein the altered first audio output has an altered audio profile that is different from the first audio profile. In some embodiments, causing the audio system to present the altered first audio output includes providing information, such as instructions and/or sound data, to the audio system to enable the audio system to present the altered first audio output; and
(ii) providing second audio information to the audio system to render a second audio output corresponding to the second affordance, wherein the second audio output has a second audio profile. In some embodiments, the second audio profile is the same as the first audio profile (e.g., each audio profile is a default audio profile for audio output generated in response to activation of an affordance). In some embodiments, the second audio profile is different from the first audio profile. Presenting the altered first audio output corresponding to the first affordance, rather than continuing to present the first audio output with the first audio profile, makes the audio output less distracting and more desirable to the user. This in turn makes it more likely that the user will utilize the audio output functionality.
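A minimal sketch of this branch is shown below, assuming hypothetical request and profile identifiers; none of these names come from the disclosure, and the real behavior would be driven by the audio system rather than simple strings.

```swift
// Hypothetical sketch: when the alteration criteria are met, re-present the
// first output with an altered profile instead of its original one, and in
// either case present the second output with its own profile.
struct AudioOutputRequest {
    let affordance: String
    let profileName: String
}

func respondToSecondInput(firstAffordance: String, secondAffordance: String,
                          alterationCriteriaMet: Bool) -> [AudioOutputRequest] {
    var requests: [AudioOutputRequest] = []
    if alterationCriteriaMet {
        // (i) stop using the first profile; present the altered first output
        requests.append(AudioOutputRequest(affordance: firstAffordance,
                                           profileName: "altered"))
    }
    // (ii) present the second output with its own (e.g., default) profile
    requests.append(AudioOutputRequest(affordance: secondAffordance,
                                       profileName: "default"))
    return requests
}
```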
In some embodiments, in response to detecting the second input directed to the second affordance, the device determines whether the first audio output presented via the audio system at the second point in time satisfies audio alteration criteria (918).
In some embodiments, the audio alteration criteria include a criterion that is satisfied when an amount of elapsed time between the first point in time and the second point in time is less than a predetermined amount of time (e.g., Tthreshold, figs. 7B-7G) (920).
In some embodiments, the audio alteration criteria include a criterion that is met when an elapsed time since the first audio output was initiated is less than a predetermined amount of time (922).
In some embodiments, the audio alteration criteria include a criterion that is met when the magnitude of the first audio output has fallen below a predetermined magnitude at the second point in time (924). In some implementations, determining whether the first audio output satisfies the audio alteration criteria includes determining, at a predetermined time (e.g., at or near the second point in time), whether an elapsed time since the first audio output was initiated is less than a predetermined time threshold and/or determining whether a magnitude of the first audio output has fallen below a predetermined magnitude (e.g., an initial magnitude or half of a maximum magnitude of the first audio output).
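Taken together, these criteria could be checked roughly as sketched below; the specific threshold values are assumptions, and the exact combination of criteria used may differ between embodiments.

```swift
// Illustrative check of the audio alteration criteria described above:
// the second input arrives within a predetermined time of the first output
// starting, and/or the first output's magnitude has fallen below a
// predetermined level. Threshold values are illustrative assumptions only.
func audioAlterationCriteriaMet(elapsedSinceFirstOutput: Double,
                                currentMagnitude: Double,
                                timeThreshold: Double = 0.5,
                                magnitudeThreshold: Double = 0.5) -> Bool {
    return elapsedSinceFirstOutput < timeThreshold
        || currentMagnitude < magnitudeThreshold
}

// e.g., a rapid second key press 0.2 s after the first satisfies the criteria.
print(audioAlterationCriteriaMet(elapsedSinceFirstOutput: 0.2, currentMagnitude: 0.9))  // true
```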
In response to detecting a second input directed to a second affordance and in accordance with a determination that audio alteration criteria are not met, the device (926):
(i) causing the audio system to continue to present a first audio output corresponding to the first affordance and having a first audio profile; and
(ii) providing the second audio information to the audio system to render a third audio output corresponding to the second affordance, wherein the third audio output has a third audio profile. In some embodiments, the third audio profile is the same as the second audio profile. In some embodiments, the second audio profile is different from the third audio profile.
In some implementations, the altered audio profile has a reduced volume compared to the volume the first audio output would have had if the audio system had continued to render the first audio output using the first audio profile (e.g., the altered audio profile increases the attenuation of the first audio output). In some implementations, the altered audio profile alters the pitch of the first audio output. In some embodiments, the altered audio profile has a non-zero volume for at least a period of time after the device detects the second input.
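A hedged sketch of deriving an altered profile from the original one along these lines (faster decay, reduced but non-zero volume, optional pitch shift) is given below; the decay model and the numeric factors are assumptions for illustration.

```swift
// Illustrative derivation of an altered audio profile: the altered profile
// decays faster and at a reduced volume (but stays non-zero for a while),
// optionally with a slight pitch shift. Factors are illustrative assumptions.
struct SimpleProfile {
    var pitch: Double           // Hz
    var volume: Double          // 0...1 at the moment of alteration
    var decayPerSecond: Double  // fraction of volume lost per second
}

func alteredProfile(from original: SimpleProfile) -> SimpleProfile {
    var altered = original
    altered.decayPerSecond = original.decayPerSecond * 3   // fade out faster, not instantly
    altered.volume = original.volume * 0.6                 // reduced but non-zero volume
    altered.pitch = original.pitch * 0.94                  // optional downward pitch shift
    return altered
}
```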
It should be understood that the particular order in which the operations have been described in fig. 9A-9C is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In some embodiments, one or more operations of method 900 are combined with, supplemented with, or replaced by one or more operations of other methods described herein (e.g., methods 800, 840, 854, and/or 875).
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments described, with various modifications as are suited to the particular use contemplated.

Claims (55)

1. A method (854) for media item management, comprising:
at an electronic device with a display, a touch-sensitive surface, and one or more sensors configured to detect intensity of contacts with the touch-sensitive surface:
displaying (855) a user interface comprising representations of media items on the display;
it is characterized in that
While the user interface is displayed and while background media is playing on the device, detecting (856) a contact-caused input on the touch-sensitive surface at a location corresponding to the representation of the media item; and
In response to detecting the input (857) caused by the contact:
in accordance with a determination that the input satisfies media-prompting criteria, wherein the media-prompting criteria include a criterion that is satisfied when the contact has a characteristic intensity above a first intensity threshold:
beginning to play the respective portion of the media item; and
dynamically changing a set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact and a set of one or more audio attributes of the background media in accordance with a change in the characteristic intensity of the contact while the media item is playing and the background media is also playing; and
in accordance with a determination that the input does not satisfy the media-prompting criteria, forgoing beginning to play the respective portion of the media item, forgoing dynamically changing the set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact, and continuing to play the background media without changing the set of one or more audio attributes of the background media; and
detecting an end of the input while the media item is playing; and
In response to detecting the end of the input, resuming playing the background media, the set of one or more audio attributes returning to their values before the input is detected.
2. The method of claim 1, wherein the audio attributes of the media item include volume, cut-off frequency of a low-pass filter, and/or equalizer settings (858).
3. The method according to any one of claims 1-2, comprising:
after beginning to play the respective portion of the media item (862):
detecting an increase in the characteristic intensity of the contact followed by a decrease in the characteristic intensity of the contact; and
In accordance with a determination that the characteristic intensity of the contact satisfies media preview criteria including criteria that are satisfied when the characteristic intensity of the contact increases above a second intensity threshold that is higher than the first intensity threshold before the decrease in the characteristic intensity of the contact is detected, while the media item is playing, changing the audio attribute of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and maintaining the audio attribute in a first state as the characteristic intensity of the contact decreases; and
In accordance with a determination that the characteristic intensity of the contact does not satisfy the media preview criteria, while the media item is playing, changing the audio property of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and changing the audio property of the media item in a second manner in accordance with the decrease in the characteristic intensity of the contact.
4. The method of claim 3, comprising, in response to detecting that the input satisfies the media preview criteria, displaying, on the display, an indication that an increase in the characteristic intensity of the contact will cause the device to perform a selection operation with respect to the media item (863).
5. The method of claim 3, comprising:
detecting an end of the input while the media item is being played;
in response to detecting the end of the input, stopping playing the media item (865).
6. The method of claim 5, wherein stopping playing the media item in response to detecting the end of the input comprises reversing visual blur adjacent to content of the representation of the media item and gradually changing the set of one or more audio properties of the media item to gradually fade out the media item being played (866).
7. The method of claim 5, comprising (867):
after stopping playing the media item in response to detecting the end of the input, detecting a selection input caused by a second contact on the touch-sensitive surface at a location corresponding to the representation of the media item, wherein the second contact does not have a characteristic intensity that reaches the first intensity threshold; and
in response to detecting the selection input, a selection operation is performed with respect to the media item.
8. The method according to any one of claims 1-2, comprising (864):
after beginning to play the respective portion of the media item, detecting an increase in the characteristic intensity of the contact while the contact remains on the touch-sensitive surface; and
in response to detecting the increase in the characteristic intensity of the contact:
in accordance with a determination that the characteristic intensity of the contact satisfies media selection criteria, the media selection criteria including a criterion that is satisfied when the characteristic intensity of the contact is greater than a selection intensity threshold that is higher than the first intensity threshold, stopping playing the respective portion of the media item and performing a selection operation for the media item; and
In accordance with a determination that the characteristic strength of the contact does not satisfy the media selection criteria, continuing to play the respective portion of the media item without performing the selection operation.
9. The method of any of claims 1-2, comprising dynamically changing (859) a visual appearance of the user interface as the set of one or more audio properties of the media item dynamically changes.
10. The method of claim 9, wherein dynamically changing the visual appearance of the user interface comprises increasing (860) a size of a representation of the media item as the characteristic intensity of the contact increases.
11. The method of claim 9, wherein dynamically changing the visual appearance of the user interface comprises dynamically changing (861) an amount of blurring of a portion of the user interface adjacent to the representation of the media item as the characteristic intensity of the contact changes.
12. An electronic device (100) comprising:
a display (112);
a touch-sensitive surface (112);
one or more sensors (165) configured to detect intensity of contacts with the touch-sensitive surface;
One or more processors (122);
a memory (102); and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying (855) a user interface comprising representations of media items on the display;
while the user interface is displayed and while background media is playing on the device, detecting (856) a contact-caused input on the touch-sensitive surface at a location corresponding to the representation of the media item;
in response to detecting the input (857) caused by the contact:
in accordance with a determination that the input satisfies media-prompting criteria, wherein the media-prompting criteria include a criterion that is satisfied when the contact has a characteristic intensity above a first intensity threshold:
beginning to play the respective portion of the media item; and
dynamically changing a set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact and a set of one or more audio attributes of the background media in accordance with a change in the characteristic intensity of the contact while the media item is playing and the background media is also playing; and
In accordance with a determination that the input does not satisfy the media-prompting criteria, forgoing beginning to play the respective portion of the media item, forgoing dynamically changing the set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact, and continuing to play the background media without changing the set of one or more audio attributes of the background media; and
detecting an end of the input while the media item is playing; and
in response to detecting the end of the input, resuming playing the background media, the set of one or more audio attributes returning to their values before the input is detected.
13. The electronic device (100) of claim 12, wherein the audio properties of the media item include volume, cut-off frequency of a low-pass filter, and/or equalizer settings (858).
14. The electronic device (100) of any of claims 12-13, the one or more programs including instructions for:
after beginning to play the respective portion of the media item (862):
detecting an increase in the characteristic intensity of the contact followed by a decrease in the characteristic intensity of the contact; and
In accordance with a determination that the characteristic intensity of the contact satisfies media preview criteria including criteria that are satisfied when the characteristic intensity of the contact increases above a second intensity threshold that is higher than the first intensity threshold before the decrease in the characteristic intensity of the contact is detected, while the media item is playing, changing the audio attribute of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and maintaining the audio attribute in a first state as the characteristic intensity of the contact decreases; and
in accordance with a determination that the characteristic intensity of the contact does not satisfy the media preview criteria, while the media item is playing, changing the audio property of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and changing the audio property of the media item in a second manner in accordance with the decrease in the characteristic intensity of the contact.
15. The electronic device (100) of claim 14, the one or more programs comprising instructions for: in response to detecting that the input satisfies the media preview criteria, displaying, on the display, an indication that an increase in the characteristic intensity of the contact will cause the device to perform a selection operation with respect to the media item (863).
16. The electronic device (100) of claim 14, the one or more programs comprising instructions for:
while playing the media item, detecting an end of the input;
in response to detecting the end of the input, stopping playing the media item (865).
17. The electronic device (100) of claim 16, wherein stopping playing the media item in response to detecting the end of the input comprises reversing a visual blur adjacent to content of the representation of the media item and gradually changing the set of one or more audio properties of the media item to gradually fade out the played media item (866).
18. The electronic device (100) of claim 16, the one or more programs comprising instructions (867) for:
after stopping playing the media item in response to detecting the end of the input, detecting a selection input caused by a second contact on the touch-sensitive surface at a location corresponding to the representation of the media item, wherein the second contact does not have a characteristic intensity that reaches the first intensity threshold; and
In response to detecting the selection input, a selection operation is performed with respect to the media item.
19. The electronic device (100) of any of claims 12-13, the one or more programs including instructions (864) to:
after beginning to play the respective portion of the media item, detecting an increase in the characteristic intensity of the contact while the contact remains on the touch-sensitive surface; and
in response to detecting the increase in the characteristic intensity of the contact:
in accordance with a determination that the characteristic intensity of the contact satisfies media selection criteria, the media selection criteria including a criterion that is satisfied when the characteristic intensity of the contact is greater than a selection intensity threshold that is higher than the first intensity threshold, stopping playing the respective portion of the media item and performing a selection operation for the media item; and
in accordance with a determination that the characteristic strength of the contact does not satisfy the media selection criteria, continuing to play the respective portion of the media item without performing the selection operation.
20. The electronic device (100) of any of claims 12-13, the one or more programs including instructions for: dynamically changing (859) a visual appearance of the user interface as the set of one or more audio properties of the media item dynamically changes.
21. The electronic device (100) of claim 20, wherein dynamically changing the visual appearance of the user interface comprises increasing (860) a size of a representation of the media item as the characteristic intensity of the contact increases.
22. The electronic device (100) of claim 20, wherein dynamically changing the visual appearance of the user interface comprises dynamically changing (861) an amount of blurring of a portion of the user interface adjacent to the representation of the media item as the characteristic intensity of the contact changes.
23. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors configured to detect intensity of contacts with the touch-sensitive surface, cause the electronic device to:
displaying (855) a user interface comprising representations of media items on the display;
while the user interface is displayed and while background media is playing on the device, detecting (856) a contact-caused input on the touch-sensitive surface at a location corresponding to the representation of the media item; and
In response to detecting the input (857) caused by the contact:
in accordance with a determination that the input satisfies media-prompting criteria, wherein the media-prompting criteria include a criterion that is satisfied when the contact has a characteristic intensity above a first intensity threshold:
beginning to play the respective portion of the media item; and
dynamically changing a set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact and a set of one or more audio attributes of the background media in accordance with a change in the characteristic intensity of the contact while the media item is playing and the background media is also playing; and
in accordance with a determination that the input does not satisfy the media-prompting criteria, forgoing beginning to play the respective portion of the media item, forgoing dynamically changing the set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact, and continuing to play the background media without changing the set of one or more audio attributes of the background media; and
detecting an end of the input while the media item is playing; and
In response to detecting the end of the input, resuming playing the background media, the set of one or more audio attributes returning to their values before the input is detected.
24. The computer-readable storage medium of claim 23, wherein the audio properties of the media item include volume, cut-off frequency of a low-pass filter, and/or equalizer settings (858).
25. The computer-readable storage medium of any of claims 23-24, comprising instructions that, when executed by the electronic device, cause the electronic device to:
after beginning to play the respective portion of the media item (862):
detecting an increase in the characteristic intensity of the contact followed by a decrease in the characteristic intensity of the contact; and
In accordance with a determination that the characteristic intensity of the contact satisfies media preview criteria including criteria that are satisfied when the characteristic intensity of the contact increases above a second intensity threshold that is higher than the first intensity threshold before the decrease in the characteristic intensity of the contact is detected, while the media item is playing, changing the audio attribute of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and maintaining the audio attribute in a first state as the characteristic intensity of the contact decreases; and
In accordance with a determination that the characteristic intensity of the contact does not satisfy the media preview criteria, while the media item is playing, changing the audio property of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and changing the audio property of the media item in a second manner in accordance with the decrease in the characteristic intensity of the contact.
26. The computer-readable storage medium of claim 25, comprising instructions that when executed by the electronic device cause the electronic device to: in response to detecting that the input satisfies the media preview criteria, displaying, on the display, an indication that an increase in the characteristic intensity of the contact will cause the device to perform a selection operation with respect to the media item (863).
27. The computer-readable storage medium of claim 26, comprising instructions that when executed by the electronic device cause the electronic device to:
while playing the media item, detecting an end of the input;
in response to detecting the end of the input, stopping playing the media item (865).
28. The computer-readable storage medium of claim 27, wherein stopping playing the media item in response to detecting the end of the input comprises reversing a visual blur adjacent to content of the representation of the media item and gradually changing the set of one or more audio properties of the media item to gradually fade out the media item as played (866).
29. The computer-readable storage medium of claim 27, comprising instructions that, when executed by the electronic device, cause the electronic device (867) to:
after stopping playing the media item in response to detecting the end of the input, detecting a selection input caused by a second contact on the touch-sensitive surface at a location corresponding to the representation of the media item, wherein the second contact does not have a characteristic intensity that reaches the first intensity threshold; and
in response to detecting the selection input, a selection operation is performed with respect to the media item.
30. The computer-readable storage medium of any of claims 23-24, comprising instructions that, when executed by the electronic device, cause the electronic device (864) to:
After beginning to play the respective portion of the media item, detecting an increase in the characteristic intensity of the contact while the contact remains on the touch-sensitive surface; and
in response to detecting the increase in the characteristic intensity of the contact:
in accordance with a determination that the characteristic intensity of the contact satisfies media selection criteria, the media selection criteria including a criterion that is satisfied when the characteristic intensity of the contact is greater than a selection intensity threshold that is higher than the first intensity threshold, ceasing to play the respective portion of the media item and performing a selection operation with respect to the media item; and
in accordance with a determination that the characteristic strength of the contact does not satisfy the media selection criteria, continuing to play the respective portion of the media item without performing the selection operation.
31. The computer-readable storage medium of any of claims 23-24, comprising instructions that, when executed by the electronic device, cause the electronic device to: dynamically changing (859) a visual appearance of the user interface as the set of one or more audio properties of the media item dynamically changes.
32. The computer-readable storage medium of claim 31, wherein dynamically changing the visual appearance of the user interface comprises increasing (860) a size of a representation of the media item as the characteristic intensity of the contact increases.
33. The computer-readable storage medium of claim 31, wherein dynamically changing the visual appearance of the user interface comprises dynamically changing (861) an amount of blur of a portion of the user interface adjacent to the representation of the media item as the characteristic intensity of the contact changes.
34. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors configured to detect intensity of contacts with the touch-sensitive surface; and
means for displaying a user interface on the display that includes representations of media items;
means, enabled while the user interface is displayed and while background media is playing on the device, for detecting a contact-caused input on the touch-sensitive surface at a location corresponding to the representation of the media item;
means, enabled in response to detecting the input caused by the contact, for:
In accordance with a determination that the input satisfies media-prompting criteria, wherein the media-prompting criteria include a criterion that is satisfied when the contact has a characteristic intensity above a first intensity threshold:
beginning to play the respective portion of the media item; and
dynamically changing a set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact and a set of one or more audio attributes of the background media in accordance with a change in the characteristic intensity of the contact while the media item is playing and the background media is also playing; and
in accordance with a determination that the input does not satisfy the media-prompting criteria, forgoing beginning to play the respective portion of the media item and forgoing dynamically changing the set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact, and continuing to play the background media without changing the set of one or more audio attributes of the background media; and
means for detecting an end of the input while the media item is playing; and
means, enabled in response to detecting the end of the input, to resume playing the background media, the set of one or more audio attributes returning to their values before the input was detected.
35. The electronic device of claim 34, wherein the audio properties of the media item include volume, cut-off frequency of a low-pass filter, and/or equalizer settings (858).
36. The electronic device of any of claims 34-35, comprising:
means, enabled after beginning to play the respective portion of the media item (862), for:
detecting an increase in the characteristic intensity of the contact followed by a decrease in the characteristic intensity of the contact; and
in accordance with a determination that the characteristic intensity of the contact satisfies media preview criteria including criteria that are satisfied when the characteristic intensity of the contact increases above a second intensity threshold that is higher than the first intensity threshold before the decrease in the characteristic intensity of the contact is detected, while the media item is playing, changing the audio attribute of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and maintaining the audio attribute in a first state as the characteristic intensity of the contact decreases; and
In accordance with a determination that the characteristic intensity of the contact does not satisfy the media preview criteria, while the media item is playing, changing the audio property of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and changing the audio property of the media item in a second manner in accordance with the decrease in the characteristic intensity of the contact.
37. The electronic device of claim 36, comprising means, enabled in response to detecting that the input satisfies the media preview criteria, for displaying, on the display, an indication (863) that an increase in the characteristic intensity of the contact will cause the device to perform a selection operation with respect to the media item.
38. The electronic device of claim 37, comprising:
means, enabled while playing the media item, for detecting an end of the input;
means for stopping playing the media item (865) enabled in response to detecting the end of the input.
39. The electronic device of claim 38, wherein the means for stopping playing the media item in response to detecting the end of the input comprises means for reversing a visual blur adjacent to content of the representation of the media item and means for gradually changing the set of one or more audio properties of the media item to gradually fade out (866) the media item being played.
40. The electronic device of claim 38, comprising:
means, enabled after stopping playing the media item in response to detecting the end of the input, for detecting a selection input caused by a second contact on the touch-sensitive surface at a location corresponding to the representation of the media item, wherein the second contact does not have a characteristic intensity that reaches the first intensity threshold; and
means, enabled in response to detecting the selection input, for performing a selection operation for the media item.
41. The electronic device of any of claims 34-35, comprising (864):
means, enabled after starting to play the respective portion of the media item, for detecting an increase in the characteristic intensity of the contact while the contact remains on the touch-sensitive surface; and
in response to detecting the increase in the characteristic intensity of the contact, means enabled for:
in accordance with a determination that the characteristic intensity of the contact satisfies media selection criteria, the media selection criteria including a criterion that is satisfied when the characteristic intensity of the contact is greater than a selection intensity threshold that is higher than the first intensity threshold, stopping playing the respective portion of the media item and performing a selection operation for the media item; and
In accordance with a determination that the characteristic strength of the contact does not satisfy the media selection criteria, continuing to play the respective portion of the media item without performing the selection operation.
42. The electronic device of any of claims 34-35, comprising means for dynamically changing (859) a visual appearance of the user interface as the set of one or more audio properties of the media item dynamically changes.
43. The electronic device of claim 42, wherein the means for dynamically changing the visual appearance of the user interface comprises means for increasing (860) a size of a representation of the media item as the characteristic intensity of the contact increases.
44. The electronic device of claim 42, wherein the means for dynamically changing the visual appearance of the user interface comprises means for dynamically changing (861) an amount of blur of a portion of the user interface adjacent to the representation of the media item as the characteristic intensity of the contact changes.
45. An electronic device, comprising:
a display unit configured to display a user interface;
A touch-sensitive surface unit configured to receive a contact;
one or more sensor units configured to detect intensity of contacts with the touch-sensitive surface unit; and
a processing unit coupled with the display unit, the touch-sensitive surface unit, and the one or more sensor units, the processing unit configured to:
displaying, on the display unit, a user interface comprising representations of media items;
while the user interface is displayed and while background media is playing on the device, detecting an input caused by a contact on the touch-sensitive surface unit at a location corresponding to the representation of the media item;
in response to detecting the input caused by the contact:
in accordance with a determination that the input satisfies media-prompting criteria, wherein the media-prompting criteria include a criterion that is satisfied when the contact has a characteristic intensity above a first intensity threshold:
beginning to play the respective portion of the media item; and
dynamically changing a set of one or more audio attributes of the media item in accordance with a change in the characteristic intensity of the contact and a set of one or more audio attributes of the background media in accordance with a change in the characteristic intensity of the contact while the media item is playing and the background media is also playing; and
In accordance with a determination that the input does not satisfy the media-prompting criteria, forgoing beginning to play the respective portion of the media item, forgoing dynamically changing the set of one or more audio attributes of the media item in accordance with the change in the characteristic intensity of the contact, and continuing to play the background media without changing the set of one or more audio attributes of the background media; and
detecting an end of the input while the media item is playing; and
in response to detecting the end of the input, resuming playing the background media, the set of one or more audio attributes returning to their values before the input is detected.
46. The electronic device of claim 45, wherein the audio properties of the media item include volume, cut-off frequency of a low-pass filter, and/or equalizer settings.
47. The electronic device of any one of claims 45-46, wherein the processing unit is further configured to:
after beginning to play the respective portion of the media item:
detecting an increase in the characteristic intensity of the contact followed by a decrease in the characteristic intensity of the contact; and
In accordance with a determination that the characteristic intensity of the contact satisfies media preview criteria including criteria that are satisfied when the characteristic intensity of the contact increases above a second intensity threshold that is higher than the first intensity threshold before the decrease in the characteristic intensity of the contact is detected, while the media item is playing, changing the audio attribute of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and maintaining the audio attribute in a first state as the characteristic intensity of the contact decreases; and
in accordance with a determination that the characteristic intensity of the contact does not satisfy the media preview criteria, while the media item is playing, changing the audio property of the media item in a first manner in accordance with the increase in the characteristic intensity of the contact and changing the audio property of the media item in a second manner in accordance with the decrease in the characteristic intensity of the contact.
48. The electronic device of claim 47, wherein the processing unit is further configured to:
in response to detecting that the input satisfies the media preview criteria, displaying, on the display unit, an indication that an increase in the characteristic intensity of the contact will cause the device to perform a selection operation with respect to the media item.
49. The electronic device of claim 48, wherein the processing unit is further configured to:
while playing the media item, detecting an end of the input;
in response to detecting the end of the input, stopping playing the media item.
50. The electronic device of claim 49, wherein stopping playing the media item in response to detecting the end of the input comprises reversing visual blur adjacent to content of the representation of the media item and gradually changing the set of one or more audio attributes of the media item to gradually fade the played media item out.
51. The electronic device of claim 49, wherein the processing unit is further configured to:
after stopping playing the media item in response to detecting the end of the input, detecting a selection input caused by a second contact on the touch-sensitive surface unit at a location corresponding to the representation of the media item, wherein the second contact does not have a characteristic intensity that reaches the first intensity threshold; and
in response to detecting the selection input, a selection operation is performed with respect to the media item.
52. The electronic device of any one of claims 45-46, wherein the processing unit is further configured to:
after beginning to play the respective portion of the media item, detecting an increase in the characteristic intensity of the contact while the contact remains on the touch-sensitive surface unit; and
in response to detecting the increase in the characteristic intensity of the contact:
in accordance with a determination that the characteristic intensity of the contact satisfies media selection criteria, the media selection criteria including a criterion that is satisfied when the characteristic intensity of the contact is greater than a selection intensity threshold that is higher than the first intensity threshold, stopping playing the respective portion of the media item and performing a selection operation for the media item; and
in accordance with a determination that the characteristic strength of the contact does not satisfy the media selection criteria, continuing to play the respective portion of the media item without performing the selection operation.
53. The electronic device of any one of claims 45-46, wherein the processing unit is further configured to: dynamically changing a visual appearance of the user interface as the set of one or more audio properties of the media item dynamically changes.
54. The electronic device of claim 53, wherein dynamically changing the visual appearance of the user interface includes increasing a size of a representation of the media item as the characteristic intensity of the contact increases.
55. The electronic device of claim 53, wherein dynamically changing the visual appearance of the user interface comprises dynamically changing an amount of blurring of a portion of the user interface adjacent to the representation of the media item as the characteristic intensity of the contact changes.
CN201810369048.8A 2016-06-12 2017-05-22 Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output Active CN108829325B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201662349056P 2016-06-12 2016-06-12
US62/349,056 2016-06-12
DKPA201670599A DK179033B1 (en) 2016-06-12 2016-08-09 Devices, methods, and graphical user interfaces for dynamically adjusting presentation of audio outputs
DKPA201670597 2016-08-09
DKPA201670599 2016-08-09
DKPA201670597A DK179034B1 (en) 2016-06-12 2016-08-09 Devices, methods, and graphical user interfaces for dynamically adjusting presentation of audio outputs
CN201710364610.3A CN107491283B (en) 2016-06-12 2017-05-22 Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201710364610.3A Division CN107491283B (en) 2016-06-12 2017-05-22 Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output

Publications (2)

Publication Number Publication Date
CN108829325A CN108829325A (en) 2018-11-16
CN108829325B true CN108829325B (en) 2021-01-08

Family

ID=59215873

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810369048.8A Active CN108829325B (en) 2016-06-12 2017-05-22 Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output
CN201710364610.3A Active CN107491283B (en) 2016-06-12 2017-05-22 Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201710364610.3A Active CN107491283B (en) 2016-06-12 2017-05-22 Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output

Country Status (1)

Country Link
CN (2) CN108829325B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310613A (en) * 2018-03-27 2019-10-08 上海新啊利网络科技有限公司 A kind of method and apparatus for generating color encoded music
US11132406B2 (en) * 2018-05-18 2021-09-28 Google Llc Action indicators for search operation output elements
WO2019236348A1 (en) * 2018-06-03 2019-12-12 Dakiana Research Llc Method and device for presenting a synthesized reality user interface
US10908783B2 (en) * 2018-11-06 2021-02-02 Apple Inc. Devices, methods, and graphical user interfaces for interacting with user interface objects and providing feedback
CN109634499A (en) * 2018-12-12 2019-04-16 广州酷狗计算机科技有限公司 Information display method, device, terminal and storage medium
US20200257442A1 (en) * 2019-02-12 2020-08-13 Volvo Car Corporation Display and input mirroring on heads-up display
CN110321192B (en) * 2019-04-29 2023-03-31 上海连尚网络科技有限公司 Method and equipment for presenting hosted program
CN112000308B (en) * 2020-09-10 2023-04-18 成都拟合未来科技有限公司 Double-track audio playing control method, system, terminal and medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101062204B1 (en) * 2006-10-04 2011-09-05 삼성전자주식회사 Broadcasting receiver and control method
JP4796104B2 (en) * 2008-08-29 2011-10-19 シャープ株式会社 Imaging apparatus, image analysis apparatus, external light intensity calculation method, image analysis method, imaging program, image analysis program, and recording medium
CN102098606A (en) * 2009-12-10 2011-06-15 腾讯科技(深圳)有限公司 Method and device for dynamically adjusting volume
KR101803261B1 (en) * 2011-11-18 2017-11-30 센톤스 아이엔씨. Detecting touch input force
CN104903834B (en) * 2012-12-29 2019-07-05 苹果公司 For equipment, method and the graphic user interface in touch input to transition between display output relation
KR101905174B1 (en) * 2012-12-29 2018-10-08 애플 인크. Device, method, and graphical user interface for navigating user interface hierachies
CN103928037B (en) * 2013-01-10 2018-04-13 先锋高科技(上海)有限公司 A kind of audio switching method and terminal device
EP3189416B1 (en) * 2014-09-02 2020-07-15 Apple Inc. User interface for receiving user input
US9542037B2 (en) * 2015-03-08 2017-01-10 Apple Inc. Device, method, and user interface for processing intensity of touch contacts
CN105117131B (en) * 2015-08-27 2019-02-05 Oppo广东移动通信有限公司 A kind of progress bar control method and device
CN105163186A (en) * 2015-08-27 2015-12-16 广东欧珀移动通信有限公司 Playing operation method and terminal

Also Published As

Publication number Publication date
CN108829325A (en) 2018-11-16
CN107491283A (en) 2017-12-19
CN107491283B (en) 2020-03-27

Similar Documents

Publication Publication Date Title
AU2019257439B2 (en) Devices, methods, and graphical user interfaces for dynamically adjusting presentation of audio outputs
US11921978B2 (en) Devices, methods, and graphical user interfaces for navigating, displaying, and editing media items with multiple display modes
US10613634B2 (en) Devices and methods for controlling media presentation
US11960707B2 (en) Devices, methods, and graphical user interfaces for moving a current focus using a touch-sensitive remote control
US11132120B2 (en) Device, method, and graphical user interface for transitioning between user interfaces
US20220334689A1 (en) Music now playing user interface
CN109313528B (en) Method, electronic device, and computer-readable storage medium for accelerated scrolling
US20190018562A1 (en) Device, Method, and Graphical User Interface for Scrolling Nested Regions
CN108829325B (en) Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output
EP2912542B1 (en) Device and method for forgoing generation of tactile output for a multi-contact gesture
AU2014100585B4 (en) Device and method for generating user interfaces from a template
US20140365895A1 (en) Device and method for generating user interfaces from a template
KR20150094762A (en) Device, method, and graphical user interface for navigating user interface hierachies
EP3255536B1 (en) Devices, methods, and graphical user interfaces for dynamically adjusting presentation of user interfaces
DK179033B1 (en) Devices, methods, and graphical user interfaces for dynamically adjusting presentation of audio outputs
US11966578B2 (en) Devices and methods for integrating video with user interface navigation
US20190369862A1 (en) Devices and Methods for Integrating Video with User Interface Navigation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant