CN117178250A - User interface for managing media styles


Info

Publication number
CN117178250A
Application number
CN202280026338.7A
Authority
CN (China)
Prior art keywords
representation, media processing, style, media, user interface
Legal status
Pending
Other languages
Chinese (zh)
Inventors
J. B. Manzari, G. R. Clark, W. A. Sorrentino III, A. Souza Dos Santos
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Filing
Application filed by Apple Inc
Priority
Priority to CN202311602718.3A (published as CN117539375A); priority claimed from PCT/US2022/030704 (published as WO2022256200A1)


Classifications

    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G11B27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/34: Indicating arrangements
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Abstract

The present disclosure relates generally to user interfaces for media processing styles. In some implementations, the present disclosure relates to a user interface for selecting a media processing style. In some implementations, the present disclosure relates to a user interface for editing media processing styles.

Description

User interface for managing media styles
Cross Reference to Related Applications
The present application claims priority to U.S. provisional patent application Ser. No. 63/195,679, entitled "USER INTERFACES FOR MANAGING MEDIA STYLES," filed June 1, 2021; U.S. provisional patent application Ser. No. 63/243,633, entitled "USER INTERFACES FOR MANAGING MEDIA STYLES," filed September 13, 2021; and U.S. patent application Ser. No. 17/721,039, entitled "USER INTERFACES FOR MANAGING MEDIA STYLES," filed April 14, 2022. The contents of these applications are hereby incorporated by reference in their entirety.
Technical Field
The present disclosure relates generally to computer user interfaces, and more particularly to techniques for managing media styles applied to visual content of media.
Background
Users of smartphones and other personal electronic devices are increasingly capturing, storing, and editing media to preserve memories and share with friends. Some existing techniques allow users to capture images or videos. Users can manage such media by, for example, capturing, storing, and editing the media.
Disclosure of Invention
However, some techniques for managing media styles applied to visual content of media using electronic devices (e.g., including computer systems) are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for managing media styles applied to visual content of media. Such methods and interfaces optionally complement or replace other methods for managing media styles applied to visual content of media. Such methods and interfaces reduce the cognitive burden placed on a user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power, increase the time between battery charges, and reduce the number of unnecessary, extraneous, and/or repetitive inputs received.
According to some embodiments, a method performed at a computer system that is in communication with a display generation component and one or more input devices is described. The method comprises: displaying, via the display generation component, a style selection user interface that includes a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style that is applied to visual content of the media; while displaying the first portion of the representation and the second portion of the representation using the first media processing style, detecting, via the one or more input devices, an input directed to the representation; and in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying, via the display generation component, the first portion of the representation using a second media processing style while continuing to display the second portion of the representation using the first media processing style, including: in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying, using the first media processing style, the second portion of the representation and a third portion of the representation that is located between the first portion of the representation and the second portion of the representation; and after displaying the first portion of the representation using the second media processing style while displaying the second portion of the representation and the third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
According to some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: displaying, via the display generation component, a style selection user interface that includes a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style that is applied to visual content of the media; while displaying the first portion of the representation and the second portion of the representation using the first media processing style, detecting, via the one or more input devices, an input directed to the representation; and in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying, via the display generation component, the first portion of the representation using a second media processing style while continuing to display the second portion of the representation using the first media processing style, including: in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying, using the first media processing style, the second portion of the representation and a third portion of the representation that is located between the first portion of the representation and the second portion of the representation; and after displaying the first portion of the representation using the second media processing style while displaying the second portion of the representation and the third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
According to some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: displaying, via the display generation component, a style selection user interface that includes a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style that is applied to visual content of the media; while displaying the first portion of the representation and the second portion of the representation using the first media processing style, detecting, via the one or more input devices, an input directed to the representation; and in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying, via the display generation component, the first portion of the representation using a second media processing style while continuing to display the second portion of the representation using the first media processing style, including: in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying, using the first media processing style, the second portion of the representation and a third portion of the representation that is located between the first portion of the representation and the second portion of the representation; and after displaying the first portion of the representation using the second media processing style while displaying the second portion of the representation and the third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
According to some embodiments, a computer system configured to communicate with a display generation component and one or more input devices is described. The computer system includes: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display generation component, a style selection user interface that includes a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style that is applied to visual content of the media; while displaying the first portion of the representation and the second portion of the representation using the first media processing style, detecting, via the one or more input devices, an input directed to the representation; and in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying, via the display generation component, the first portion of the representation using a second media processing style while continuing to display the second portion of the representation using the first media processing style, including: in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying, using the first media processing style, the second portion of the representation and a third portion of the representation that is located between the first portion of the representation and the second portion of the representation; and after displaying the first portion of the representation using the second media processing style while displaying the second portion of the representation and the third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
According to some embodiments, a computer system configured to communicate with a display generation component and one or more input devices is described. The computer system includes: means for displaying, via the display generation component, a style selection user interface that includes a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style that is applied to visual content of the media; means for detecting, via the one or more input devices, an input directed to the representation while displaying the first portion of the representation and the second portion of the representation using the first media processing style; and means for, in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying, via the display generation component, the first portion of the representation using a second media processing style while continuing to display the second portion of the representation using the first media processing style, including: means for, in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying, using the first media processing style, the second portion of the representation and a third portion of the representation that is located between the first portion of the representation and the second portion of the representation; and means for, after displaying the first portion of the representation using the second media processing style while displaying the second portion of the representation and the third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices. The one or more programs include instructions for: displaying, via the display generation component, a style selection user interface that includes a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style that is applied to visual content of the media; while displaying the first portion of the representation and the second portion of the representation using the first media processing style, detecting, via the one or more input devices, an input directed to the representation; and in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying, via the display generation component, the first portion of the representation using a second media processing style while continuing to display the second portion of the representation using the first media processing style, including: in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying, using the first media processing style, the second portion of the representation and a third portion of the representation that is located between the first portion of the representation and the second portion of the representation; and after displaying the first portion of the representation using the second media processing style while displaying the second portion of the representation and the third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
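For illustration only, the following minimal Swift sketch models the progressive style application recited above: as the magnitude of a directional input grows, the second media processing style is applied first to the first portion of the representation and then also to the third portion between the first and second portions. The type names, portion boundaries, and normalized magnitudes are assumptions introduced for this example and are not taken from the claims.

    import Foundation

    // Hypothetical styles standing in for the "first" and "second" media processing styles.
    enum MediaProcessingStyle {
        case standard   // first media processing style
        case vibrant    // second media processing style
    }

    struct StyleSelectionPreview {
        // Assumed fractions of the representation's width, left to right:
        // first portion | third portion (between) | second portion.
        let firstPortionEnd = 0.33
        let thirdPortionEnd = 0.66

        // Maps a normalized input magnitude in [0, 1] (e.g., swipe distance)
        // to the style displayed in each portion of the representation.
        func styles(forInputMagnitude magnitude: Double)
            -> (first: MediaProcessingStyle, third: MediaProcessingStyle, second: MediaProcessingStyle) {
            // The boundary between the two styles sweeps across the
            // representation as the input magnitude increases.
            let boundary = max(0.0, min(1.0, magnitude))
            let first: MediaProcessingStyle = boundary > firstPortionEnd ? .vibrant : .standard
            let third: MediaProcessingStyle = boundary > thirdPortionEnd ? .vibrant : .standard
            // The second portion keeps the first style throughout the input.
            return (first, third, .standard)
        }
    }

    // A first input portion with a small magnitude restyles only the first portion;
    // a second input portion with a greater magnitude restyles the third portion too.
    let preview = StyleSelectionPreview()
    print(preview.styles(forInputMagnitude: 0.4))  // (first: vibrant, third: standard, second: standard)
    print(preview.styles(forInputMagnitude: 0.8))  // (first: vibrant, third: vibrant, second: standard)

In an actual implementation the boundary would typically drive a visual divider in the style selection user interface; the thresholding shown here is only the core of the portion-by-portion behavior.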
According to some embodiments, a method performed at a computer system that is in communication with a display generation component and one or more input devices is described. The method comprises: displaying, via the display generation component, a user interface that includes a representation of media, wherein the representation of the media is displayed using a first media processing style that is applied to visual content of the media; while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generation component, a plurality of selectable user interface objects for the first media processing style, including: a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter; while displaying the plurality of selectable user interface objects for the first media processing style, detecting, via the one or more input devices, an input directed to the plurality of selectable user interface objects for the first media processing style; and in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style: in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generation component, a first control for adjusting the current value of the first parameter; and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generation component, a second control for adjusting the current value of the second parameter.
According to some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: displaying, via the display generation component, a user interface that includes a representation of media, wherein the representation of the media is displayed using a first media processing style that is applied to visual content of the media; while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generation component, a plurality of selectable user interface objects for the first media processing style, including: a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter; while displaying the plurality of selectable user interface objects for the first media processing style, detecting, via the one or more input devices, an input directed to the plurality of selectable user interface objects for the first media processing style; and in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style: in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generation component, a first control for adjusting the current value of the first parameter; and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generation component, a second control for adjusting the current value of the second parameter.
According to some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: displaying, via the display generation component, a user interface that includes a representation of media, wherein the representation of the media is displayed using a first media processing style that is applied to visual content of the media; while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generation component, a plurality of selectable user interface objects for the first media processing style, including: a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter; while displaying the plurality of selectable user interface objects for the first media processing style, detecting, via the one or more input devices, an input directed to the plurality of selectable user interface objects for the first media processing style; and in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style: in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generation component, a first control for adjusting the current value of the first parameter; and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generation component, a second control for adjusting the current value of the second parameter.
According to some embodiments, a computer system configured to communicate with a display generation component and one or more input devices is described. The computer system includes: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display generation component, a user interface that includes a representation of media, wherein the representation of the media is displayed using a first media processing style that is applied to visual content of the media; while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generation component, a plurality of selectable user interface objects for the first media processing style, including: a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter; while displaying the plurality of selectable user interface objects for the first media processing style, detecting, via the one or more input devices, an input directed to the plurality of selectable user interface objects for the first media processing style; and in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style: in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generation component, a first control for adjusting the current value of the first parameter; and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generation component, a second control for adjusting the current value of the second parameter.
According to some embodiments, a computer system configured to communicate with a display generation component and one or more input devices is described. The computer system includes: means for displaying, via the display generation component, a user interface that includes a representation of media, wherein the representation of the media is displayed using a first media processing style that is applied to visual content of the media; means for, while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generation component, a plurality of selectable user interface objects for the first media processing style, including: a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter; means for, while displaying the plurality of selectable user interface objects for the first media processing style, detecting, via the one or more input devices, an input directed to the plurality of selectable user interface objects for the first media processing style; and means for, in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style: in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generation component, a first control for adjusting the current value of the first parameter; and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generation component, a second control for adjusting the current value of the second parameter.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices. The one or more programs include instructions for: displaying, via the display generation component, a user interface that includes a representation of media, wherein the representation of the media is displayed using a first media processing style that is applied to visual content of the media; while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generation component, a plurality of selectable user interface objects for the first media processing style, including: a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter; while displaying the plurality of selectable user interface objects for the first media processing style, detecting, via the one or more input devices, an input directed to the plurality of selectable user interface objects for the first media processing style; and in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style: in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generation component, a first control for adjusting the current value of the first parameter; and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generation component, a second control for adjusting the current value of the second parameter. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
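For illustration only, the following Swift sketch models the editing flow recited above: each selectable user interface object is displayed with its parameter's current value, an input directed to an object displays a control for that parameter, and the control adjusts the current value. The parameter names "tone" and "warmth" and their ranges are assumptions introduced for this example and do not come from the patent text.

    import Foundation

    // A hypothetical editable parameter of a media processing style.
    struct StyleParameter {
        let name: String
        var currentValue: Int
        let range: ClosedRange<Int>

        // Clamp adjustments to the parameter's allowed range.
        mutating func adjust(to newValue: Int) {
            currentValue = min(max(newValue, range.lowerBound), range.upperBound)
        }
    }

    struct StyleEditor {
        var parameters: [StyleParameter]   // one selectable object per parameter
        var activeControl: String?         // which adjustment control is displayed

        // Labels for the selectable objects, each shown with its current value.
        var selectableObjectLabels: [String] {
            parameters.map { "\($0.name): \($0.currentValue)" }
        }

        // An input directed to a selectable object displays the matching control.
        mutating func select(parameterNamed name: String) {
            guard parameters.contains(where: { $0.name == name }) else { return }
            activeControl = name
        }

        // Dragging the displayed control adjusts that parameter's current value.
        mutating func dragActiveControl(to value: Int) {
            guard let name = activeControl,
                  let index = parameters.firstIndex(where: { $0.name == name }) else { return }
            parameters[index].adjust(to: value)
        }
    }

    var editor = StyleEditor(
        parameters: [
            StyleParameter(name: "tone", currentValue: 0, range: -100...100),
            StyleParameter(name: "warmth", currentValue: 0, range: -100...100),
        ],
        activeControl: nil
    )
    print(editor.selectableObjectLabels)     // ["tone: 0", "warmth: 0"]
    editor.select(parameterNamed: "warmth")  // displays the "warmth" control
    editor.dragActiveControl(to: 150)        // clamped to the parameter's range
    print(editor.selectableObjectLabels)     // ["tone: 0", "warmth: 100"]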
Thus, faster, more efficient methods and interfaces for managing media styles applied to visual content of media are provided for devices, thereby improving the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace other methods for managing media styles applied to visual content of media.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
Fig. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device in accordance with some embodiments.
Fig. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 5A illustrates a personal electronic device according to some embodiments.
Fig. 5B is a block diagram illustrating a personal electronic device, according to some embodiments.
Fig. 6A-6Y illustrate an exemplary user interface for selecting a media processing style using a computer system, according to some embodiments.
Fig. 7A-7X illustrate an exemplary user interface for editing media processing styles using a computer system, according to some embodiments.
Fig. 8A-8C illustrate an exemplary user interface for selecting a media processing style using a computer system, according to some embodiments.
Fig. 9 is a flowchart illustrating a method for selecting a media processing style using a computer system, according to some embodiments.
Fig. 10A-10B are flowcharts illustrating methods of editing media processing styles using a computer system according to some embodiments.
Detailed Description
The following description sets forth exemplary methods, parameters, and the like. However, it should be recognized that such description is not intended as a limitation on the scope of the present disclosure, but is instead provided as a description of exemplary embodiments.
There is a need for an electronic device that provides an efficient method and interface for managing media styles applied to visual content of media, such as the methods of selecting media processing styles and editing media processing styles described herein. Such techniques may reduce the cognitive burden on users desiring to edit media, thereby improving productivity. Further, such techniques may reduce processor power and battery power that would otherwise be wasted on redundant user inputs.
Figs. 1A-1B, 2, 3, 4A-4B, and 5A-5B below provide a description of exemplary devices for performing the techniques for managing media processing styles.
Fig. 6A-6Y illustrate an exemplary user interface for selecting a media processing style using a computer system, according to some embodiments. Fig. 7A-7X illustrate an exemplary user interface for editing media processing styles using a computer system, according to some embodiments. Fig. 8A-8C illustrate an exemplary user interface for selecting a media processing style using a computer system, according to some embodiments. Fig. 9 is a flowchart illustrating a method for selecting a media processing style using a computer system, according to some embodiments. Fig. 10A-10B are flowcharts illustrating methods of editing media processing styles using a computer system according to some embodiments. The user interfaces in Fig. 6A to 6Y, Fig. 7A to 7X, and Fig. 8A to 8C are used to illustrate the processes described below, including the processes in Fig. 9 and Fig. 10A to 10B.
The processes described below enhance the operability of a device and make the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user errors when operating/interacting with the device) through various techniques, including by providing improved visual feedback to the user, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further user input, and/or additional techniques. These techniques also reduce power usage and improve battery life of the device by enabling the user to use the device more quickly and efficiently.
Furthermore, in methods described herein in which one or more steps are contingent upon one or more conditions having been met, it should be understood that the described method can be repeated over multiple iterations such that, over the course of the iterations, all of the conditions upon which steps of the method are contingent have been met in different iterations of the method. For example, if a method requires performing a first step if a condition is satisfied and a second step if the condition is not satisfied, a person of ordinary skill would appreciate that the stated steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with one or more steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. This, however, is not required of system or computer-readable-medium claims in which the system or computer-readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions, and is thus capable of determining whether the contingency has or has not been satisfied without explicitly repeating the steps of the method until all of the conditions upon which steps of the method are contingent have been met. A person of ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer-readable storage medium can repeat the steps of a method as many times as needed to ensure that all of the contingent steps have been performed.
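For illustration only, the following Swift fragment shows the kind of repetition described above: a method whose steps are contingent on a condition is repeated until, across different iterations, the condition has been both satisfied and not satisfied. The condition and steps here are arbitrary placeholders.

    import Foundation

    // A method with two contingent steps: the first is performed only when the
    // condition is met, the second only when it is not.
    func performContingentMethod(conditionMet: Bool) {
        if conditionMet {
            print("first step")
        } else {
            print("second step")
        }
    }

    var satisfiedSeen = false
    var unsatisfiedSeen = false
    // Repeat, in no particular order, until both contingencies have occurred,
    // so that every contingent step has been performed in some iteration.
    while !(satisfiedSeen && unsatisfiedSeen) {
        let conditionMet = Bool.random()
        performContingentMethod(conditionMet: conditionMet)
        if conditionMet { satisfiedSeen = true } else { unsatisfiedSeen = true }
    }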
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another element. For example, a first touch may be named a second touch and similarly a second touch may be named a first touch without departing from the scope of the various described embodiments. Both the first touch and the second touch are touches, but they are not the same touch.
The terminology used in the description of the various illustrated embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with a display generation component. The display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection. In some embodiments, the display generation component is integrated with the computer system. In some embodiments, the display generation component is separate from the computer system. As used herein, "displaying" content includes causing display of the content (e.g., video data rendered or decoded by display controller 156) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content.
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications such as one or more of the following: drawing applications, presentation applications, word processing applications, website creation applications, disk editing applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, fitness support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
The various applications executing on the device optionally use at least one generic physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or changed for different applications and/or within the respective applications. In this way, the common physical architecture of the devices (such as the touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Fig. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes called a "touch screen" for convenience and is sometimes known as or called a "touch-sensitive display system." Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on device 100 (e.g., on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in the specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). The intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., as a weighted average) to determine an estimated force of contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functions that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
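For illustration only, the following Swift sketch shows one of the estimation approaches described above: force measurements from multiple force sensors are combined as a weighted average to estimate the contact force, and the substitute measurement is compared directly against an intensity threshold expressed in the same units. The sensor readings, weights, and threshold value are assumptions introduced for this example.

    import Foundation

    // A single force-sensor measurement, weighted (for example) by the
    // sensor's proximity to the detected contact.
    struct ForceSensorReading {
        let force: Double
        let weight: Double
    }

    // Weighted average of the individual sensor measurements.
    func estimatedContactForce(from readings: [ForceSensorReading]) -> Double {
        let totalWeight = readings.reduce(0) { $0 + $1.weight }
        guard totalWeight > 0 else { return 0 }
        let weightedSum = readings.reduce(0) { $0 + $1.force * $1.weight }
        return weightedSum / totalWeight
    }

    // Use the substitute measurement directly against a threshold described in
    // units corresponding to the substitute measurement.
    func exceedsIntensityThreshold(_ readings: [ForceSensorReading], threshold: Double) -> Bool {
        estimatedContactForce(from: readings) > threshold
    }

    let readings = [
        ForceSensorReading(force: 0.9, weight: 0.6),  // sensor nearest the contact
        ForceSensorReading(force: 0.4, weight: 0.3),
        ForceSensorReading(force: 0.1, weight: 0.1),
    ]
    print(estimatedContactForce(from: readings))                // ≈ 0.67
    print(exceedsIntensityThreshold(readings, threshold: 0.5))  // true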
As used in the specification and claims, the term "haptic output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, a user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., an "up click," a "down click," "roughness"), unless otherwise stated, the generated haptic output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be understood that the device 100 is merely one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
Memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripheral interface 118 may be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs, such as computer programs (e.g., including instructions), and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data. In some embodiments, peripheral interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and with other devices. RF circuitry 108 optionally includes well-known circuitry for detecting Near Field Communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), and Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communications protocol, including communications protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to speaker 111. The speaker 111 converts electrical signals into sound waves that are audible to humans. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuit 110 and removable audio input/output peripherals such as output-only headphones or a headset having both an output (e.g., a monaural or binaural) and an input (e.g., a microphone).
I/O subsystem 106 couples input/output peripheral devices on device 100, such as touch screen 112 and other input control devices 116, to peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, a depth camera controller 169, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive electrical signals from/transmit electrical signals to other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click-type dials, and the like. In some implementations, the input controller 160 is optionally coupled to (or not coupled to) any of the following: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. One or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2). In some embodiments, the electronic device is a computer system that communicates (e.g., via wireless communication, via wired communication) with one or more input devices. In some implementations, the one or more input devices include a touch-sensitive surface (e.g., a touch pad as part of a touch-sensitive display). In some embodiments, the one or more input devices include one or more camera sensors (e.g., one or more optical sensors 164 and/or one or more depth camera sensors 175), such as for tracking gestures (e.g., hand gestures) of a user as input. In some embodiments, one or more input devices are integrated with the computer system. In some embodiments, one or more input devices are separate from the computer system.
A quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process of unlocking the device using gestures on the touch screen, as described in U.S. patent application Ser. No. 11/322,549, "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005 (now U.S. Patent No. 7,657,849), which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons is optionally user-customizable. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
The touch sensitive display 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch screen 112 and/or transmits electrical signals to touch screen 112. Touch screen 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some embodiments, some or all of the visual output optionally corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that receives input from a user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or interruption of the contact) on touch screen 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a user's finger.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone and iPod touch from Apple Inc. (Cupertino, California).
The touch sensitive display in some implementations of touch screen 112 is optionally similar to the multi-touch sensitive touch pad described in the following U.S. patents: 6,323,846 (Westerman et al), 6,570,557 (Westerman et al) and/or 6,677,932 (Westerman et al) and/or U.S. patent publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, while touch sensitive touchpads do not provide visual output.
Touch-sensitive displays in some implementations of touch screen 112 are described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, "Multi-Functional Hand-Held Device," filed March 2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some implementations, the touch screen has a video resolution of about 160 dpi. The user optionally uses any suitable object or appendage, such as a stylus, finger, or the like, to make contact with touch screen 112. In some embodiments, the user interface is designed to work primarily through finger-based contact and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor position or command for performing the action desired by the user.
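As a hedged illustration of the coarse-to-precise translation just described, the sketch below computes a single cursor position as the centroid of a finger-sized contact patch. The sampling model and all type names are assumptions for the example, not details taken from this specification.

```swift
import Foundation

// Illustrative sketch, under an assumed sampling model: translating a coarse,
// finger-sized contact patch into one precise cursor position by taking the
// centroid of the touched points.

struct TouchedPoint { let x: Double; let y: Double }

/// Centroid of the contact patch, used here as the "precise" cursor position.
func cursorPosition(for patch: [TouchedPoint]) -> (x: Double, y: Double)? {
    guard !patch.isEmpty else { return nil }
    let n = Double(patch.count)
    let sumX = patch.reduce(0) { $0 + $1.x }
    let sumY = patch.reduce(0) { $0 + $1.y }
    return (sumX / n, sumY / n)
}

let patch = [TouchedPoint(x: 98, y: 201), TouchedPoint(x: 104, y: 199),
             TouchedPoint(x: 101, y: 206)]
if let p = cursorPosition(for: patch) {
    print("cursor at (\(p.x), \(p.y))")  // cursor at (101.0, 202.0)
}
```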
In some embodiments, the device 100 optionally includes a touch pad for activating or deactivating a particular function in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike the touch screen, does not display visual output. The touch pad is optionally a touch sensitive surface separate from the touch screen 112 or an extension of the touch sensitive surface formed by the touch screen.
The apparatus 100 also includes a power system 162 for powering the various components. The power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The apparatus 100 optionally further comprises one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 optionally includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 164 receives light projected through one or more lenses from the environment and converts the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, the optical sensor is located on the rear of the device 100, opposite the touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still image and/or video image acquisition. In some embodiments, the optical sensor is located on the front of the device such that the user's image is optionally acquired for video conferencing while viewing other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 164 may be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that a single optical sensor 164 is used with the touch screen display for both video conferencing and still image and/or video image acquisition.
Device 100 optionally also includes one or more depth camera sensors 175. FIG. 1A shows a depth camera sensor coupled to depth camera controller 169 in I/O subsystem 106. Depth camera sensor 175 receives data from the environment to create a three-dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., the depth camera sensor). In some implementations, in conjunction with imaging module 143 (also called a camera module), depth camera sensor 175 is optionally used to determine a depth map of different portions of an image captured by imaging module 143. In some embodiments, a depth camera sensor is located on the front of device 100 so that the user's image with depth information is optionally obtained for video conferencing while the user views the other video conference participants on the touch screen display, and so that self-portraits with depth map data are captured. In some embodiments, depth camera sensor 175 is located on the back of the device, or on both the back and the front of device 100. In some implementations, the position of depth camera sensor 175 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that depth camera sensor 175 is used with the touch screen display for both video conferencing and still and/or video image acquisition.
The apparatus 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The contact strength sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other strength sensors (e.g., sensors for measuring force (or pressure) of a contact on a touch-sensitive surface). The contact strength sensor 165 receives contact strength information (e.g., pressure information or a surrogate for pressure information) from the environment. In some implementations, at least one contact intensity sensor is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the rear of the device 100, opposite the touch screen display 112 located on the front of the device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is optionally coupled to the input controller 160 in the I/O subsystem 106. The proximity sensor 166 optionally performs as described in the following U.S. patent applications: no.11/241,839, entitled "Proximity Detector In Handheld Device"; no.11/240,788, entitled "Proximity Detector In Handheld Device"; no.11/620,702, entitled "Using Ambient Light Sensor To Augment Proximity Sensor Output"; no.11/586,862, entitled "Automated Response To And Sensing Of User Activity In Portable Devices"; and No.11/638,251, entitled "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor is turned off and the touch screen 112 is disabled when the multifunction device is placed near the user's ear (e.g., when the user is making a telephone call).
Device 100 optionally also includes one or more tactile output generators 167. FIG. 1A shows a tactile output generator coupled to haptic feedback controller 161 in I/O subsystem 106. Tactile output generator 167 optionally includes one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled to an input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in U.S. Patent Publication No. 20050190059, "Acceleration-based Theft Detection System for Portable Electronic Devices," and U.S. Patent Publication No. 20060017692, "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated herein by reference in their entirety. In some implementations, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer 168, a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
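A minimal sketch, under assumed axis conventions, of how a portrait/landscape decision could follow from accelerometer data as described above; the tie-breaking rule is likewise an assumption made for the example.

```swift
import Foundation

// Illustrative sketch only: deciding between portrait and landscape display
// from accelerometer data. The axis conventions are assumptions.

enum DisplayOrientation { case portrait, landscape }

/// Picks the orientation whose gravity component dominates. `x` and `y` are
/// accelerations along the device's short and long screen axes, respectively.
func orientation(x: Double, y: Double) -> DisplayOrientation {
    abs(y) >= abs(x) ? .portrait : .landscape
}

print(orientation(x: 0.1, y: -0.98)) // portrait: gravity along the long axis
print(orientation(x: 0.95, y: 0.2))  // landscape: gravity along the short axis
```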
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application program (or instruction set) 136. Furthermore, in some embodiments, memory 102 (fig. 1A) or 370 (fig. 3) stores device/global internal state 157, as shown in fig. 1A and 3. The device/global internal state 157 includes one or more of the following: an active application state indicating which applications (if any) are currently active; display status, indicating what applications, views, or other information occupy various areas of the touch screen display 112; sensor status, including information obtained from the various sensors of the device and the input control device 116; and location information relating to the device location and/or pose.
Operating system 126 (e.g., darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware components and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact, or a surrogate for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining the speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multitouch"/multiple-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
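By way of example only, the following sketch derives velocity (magnitude and direction) and speed (magnitude) for a point of contact from a series of timestamped contact samples, in the spirit of the tracking described above. The ContactSample type and the units are assumptions.

```swift
import Foundation

// Minimal sketch, assuming invented types: deriving speed and velocity of a
// point of contact from consecutive timestamped contact samples.

struct ContactSample {
    let x: Double               // position in points (assumed unit)
    let y: Double
    let timestamp: TimeInterval // seconds
}

/// Velocity (magnitude and direction) between two consecutive samples.
func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double)? {
    let dt = b.timestamp - a.timestamp
    guard dt > 0 else { return nil }
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
}

/// Speed (magnitude only) between two consecutive samples.
func speed(from a: ContactSample, to b: ContactSample) -> Double? {
    guard let v = velocity(from: a, to: b) else { return nil }
    return (v.dx * v.dx + v.dy * v.dy).squareRoot()
}

let s0 = ContactSample(x: 100, y: 200, timestamp: 0.00)
let s1 = ContactSample(x: 130, y: 160, timestamp: 0.01)
if let v = velocity(from: s0, to: s1) {
    print("velocity: (\(v.dx), \(v.dy)) points/second") // (3000.0, -4000.0)
}
print(speed(from: s0, to: s1) ?? 0)                      // 5000.0 points/second
```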
In some implementations, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether the user has "clicked" on an icon). In some implementations, at least a subset of the intensity thresholds are determined according to software parameters (e.g., the intensity thresholds are not determined by activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of the device 100). For example, without changing the touchpad or touch screen display hardware, the mouse "click" threshold of the touchpad or touch screen may be set to any of a wide range of predefined thresholds. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, gestures are optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger press event, and then detecting a finger lift (lift off) event at the same location (or substantially the same location) as the finger press event (e.g., at the location of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then detecting a finger-up (lift-off) event.
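The sketch below illustrates this contact-pattern idea with a toy classifier that distinguishes a tap from a swipe once a touch sequence has finished. The event names and the distance tolerance are invented for the example and are not taken from this specification.

```swift
import Foundation

// Hedged sketch of gesture detection by contact pattern: a tap is a
// finger-down followed by a finger-up at (substantially) the same location;
// a swipe is finger-down, one or more moves, then finger-up farther away.

enum TouchPhase { case down, moved, up }

struct TouchEvent {
    let phase: TouchPhase
    let x: Double
    let y: Double
}

enum Gesture { case tap, swipe, none }

/// Classifies a completed event sequence; `tapSlop` is the assumed maximum
/// distance (in points) between down and up for the gesture to count as a tap.
func classify(_ events: [TouchEvent], tapSlop: Double = 10) -> Gesture {
    guard let first = events.first, first.phase == .down,
          let last = events.last, last.phase == .up else { return .none }
    let distance = ((last.x - first.x) * (last.x - first.x)
                  + (last.y - first.y) * (last.y - first.y)).squareRoot()
    return distance <= tapSlop ? .tap : .swipe
}

print(classify([TouchEvent(phase: .down, x: 50, y: 50),
                TouchEvent(phase: .up,   x: 52, y: 51)]))   // tap
print(classify([TouchEvent(phase: .down, x: 50, y: 50),
                TouchEvent(phase: .moved, x: 120, y: 50),
                TouchEvent(phase: .up,   x: 200, y: 50)]))  // swipe
```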
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attribute) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including but not limited to text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. Graphics module 132 receives, from applications or the like, one or more codes designating the graphics to be displayed, together with coordinate data and other graphic attribute data if necessary, and then generates screen image data for output to display controller 156.
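Purely for illustration, the following sketch models this code-to-graphic lookup: graphics are registered under assigned codes, and each request pairs a code with coordinate and attribute data. Every name in the sketch is an assumption made for the example.

```swift
import Foundation

// Illustrative sketch, under assumed names: applications refer to graphics
// by an assigned code; the registry resolves codes and pairs them with the
// supplied coordinate and attribute data.

struct GraphicRequest {
    let code: Int        // code assigned to the graphic
    let x: Double        // coordinate data supplied alongside the code
    let y: Double
    let opacity: Double  // one example of "other graphic attribute data"
}

final class GraphicsRegistry {
    private var graphicsByCode: [Int: String] = [:]  // code -> graphic name

    func register(code: Int, graphic: String) {
        graphicsByCode[code] = graphic
    }

    /// Resolves each request to a displayable description, skipping codes
    /// that were never registered.
    func resolve(_ requests: [GraphicRequest]) -> [String] {
        requests.compactMap { request in
            guard let graphic = graphicsByCode[request.code] else { return nil }
            return "\(graphic) at (\(request.x), \(request.y)), opacity \(request.opacity)"
        }
    }
}

let registry = GraphicsRegistry()
registry.register(code: 7, graphic: "soft-key icon")
print(registry.resolve([GraphicRequest(code: 7, x: 12, y: 40, opacity: 1.0)]))
```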
Haptic feedback module 133 includes various software components for generating instructions used by haptic output generator 167 to generate haptic output at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services, such as weather gadgets, local page gadgets, and map/navigation gadgets).
The application 136 optionally includes the following modules (or sets of instructions) or a subset or superset thereof:
contact module 137 (sometimes referred to as an address book or contact list);
a telephone module 138;
video conferencing module 139;
email client module 140;
an Instant Messaging (IM) module 141;
a fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a video player module;
a music player module;
browser module 147;
Calendar module 148;
a gadget module 149, optionally comprising one or more of: weather gadgets 149-1, stock gadgets 149-2, calculator gadget 149-3, alarm gadget 149-4, dictionary gadget 149-5, and other gadgets obtained by the user, and user-created gadgets 149-6;
a gadget creator module 150 for forming a user-created gadget 149-6;
search module 151;
a video and music player module 152 that incorporates the video player module and the music player module;
a note module 153;
map module 154; and/or
An online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is optionally used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating one or more telephone numbers, email addresses, physical addresses, or other information with a name; associating an image with a name; sorting and classifying names; providing telephone numbers or email addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, email 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is optionally used to input a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contact module 137, modify the entered telephone number, dial a corresponding telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As described above, wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured by the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, instant message module 141 includes executable instructions for: inputting a character sequence corresponding to an instant message, modifying previously inputted characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for phone-based instant messages or using XMPP, SIMPLE, or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant message optionally includes graphics, photographs, audio files, video files, and/or other attachments supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions for creating a workout (e.g., with time, distance, and/or calorie burn targets); communicate with a fitness sensor (exercise device); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for exercise; and displaying, storing and transmitting the fitness data.
In conjunction with touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or videos (including video streams) and storing them in the memory 102, modifying features of still images or videos, or deleting still images or videos from the memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, tagging, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet according to user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget module 149 is a mini-application (e.g., weather gadget 149-1, stock gadget 149-2, calculator gadget 149-3, alarm gadget 149-4, and dictionary gadget 149-5) or a mini-application created by a user (e.g., user created gadget 149-6) that is optionally downloaded and used by a user. In some embodiments, gadgets include HTML (hypertext markup language) files, CSS (cascading style sheet) files, and JavaScript files. In some embodiments, gadgets include XML (extensible markup language) files and JavaScript files (e.g., yahoo | gadgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget creator module 150 is optionally used by a user to create gadgets (e.g., to transform user-specified portions of a web page into gadgets).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuit 110, speaker 111, RF circuit 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player such as an iPod (trademark of Apple inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally configured to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to shops and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than email client module 140, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application Ser. No. 60/936,562, "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. Patent Application Ser. No. 11/968,067, "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, the contents of both of which are hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented in a separate software program, such as a computer program (e.g., including instructions), process, or module, and thus the various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. For example, the video player module is optionally combined with the music player module into a single module (e.g., video and music player module 152 in fig. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which the operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
A predefined set of functions performed solely by the touch screen and/or touch pad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by a user, navigates the device 100 from any user interface displayed on the device 100 to a main menu, home menu, or root menu. In such implementations, a touch pad is used to implement a "menu button". In some other embodiments, the menu buttons are physical push buttons or other physical input control devices, rather than touch pads.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 102 (FIG. 1A) or memory 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and corresponding applications 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
Event sorter 170 receives event information and determines the application 136-1, and the application view 191 of application 136-1, to which the event information is to be delivered. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine the application views 191 to which to deliver event information.
In some implementations, the application internal state 192 includes additional information, such as one or more of the following: restoration information to be used when the application 136-1 resumes execution, user interface state information indicating that the information is being displayed or ready for display by the application 136-1, a state queue for enabling the user to return to a previous state or view of the application 136-1, and a repeat/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on the touch sensitive display 112 as part of a multi-touch gesture). The peripheral interface 118 transmits information it receives from the I/O subsystem 106 or sensors, such as a proximity sensor 166, one or more accelerometers 168, and/or microphone 113 (via audio circuitry 110). The information received by the peripheral interface 118 from the I/O subsystem 106 includes information from the touch-sensitive display 112 or touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, the peripheral interface 118 transmits event information. In other embodiments, the peripheral interface 118 transmits event information only if there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or receiving an input exceeding a predetermined duration).
In some implementations, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
When the touch sensitive display 112 displays more than one view, the hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view is made up of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a level of programming within the application's programming or view hierarchy. For example, the lowest horizontal view in which a touch is detected is optionally referred to as a hit view, and the set of events that are recognized as correct inputs is optionally determined based at least in part on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should process sub-events. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in a sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
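A simplified sketch, with invented types, of the hit-view determination described above: descend the view hierarchy and return the lowest-level view whose bounds contain the initial touch location. A shared global coordinate space is assumed to keep the example short; it is not a claimed implementation.

```swift
import Foundation

// Illustrative sketch, under assumed types: the hit view is the deepest view
// in the hierarchy whose bounds contain the initiating touch.

struct Point { let x: Double; let y: Double }

final class View {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double)
    let subviews: [View]

    init(name: String,
         frame: (x: Double, y: Double, width: Double, height: Double),
         subviews: [View] = []) {
        self.name = name
        self.frame = frame
        self.subviews = subviews
    }

    func contains(_ p: Point) -> Bool {
        p.x >= frame.x && p.x < frame.x + frame.width &&
        p.y >= frame.y && p.y < frame.y + frame.height
    }
}

/// Returns the deepest view containing the point, or nil if the point falls
/// outside the root view entirely.
func hitView(in root: View, at point: Point) -> View? {
    guard root.contains(point) else { return nil }
    for subview in root.subviews {
        if let hit = hitView(in: subview, at: point) { return hit }
    }
    return root
}

let button = View(name: "button", frame: (20, 20, 100, 44))
let window = View(name: "window", frame: (0, 0, 320, 480), subviews: [button])
print(hitView(in: window, at: Point(x: 40, y: 30))?.name ?? "none") // "button"
```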
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some implementations, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
The event dispatcher module 174 dispatches event information to an event recognizer (e.g., event recognizer 180). In embodiments that include an active event recognizer determination module 173, the event dispatcher module 174 delivers event information to the event recognizers determined by the active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue that is retrieved by the corresponding event receiver 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit or a higher-level object from which application 136-1 inherits methods and other properties. In some implementations, a respective event handler 190 includes one or more of the following: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update application internal state 192. Alternatively, one or more of application views 191 include one or more respective event handlers 190. Also, in some implementations, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event based on the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about sub-events such as touches or touch movements. The event information also includes additional information, such as the location of the sub-event, according to the sub-event. When a sub-event relates to movement of a touch, the event information optionally also includes the rate and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another orientation (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about a current orientation of the device (also referred to as a device pose).
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some implementations, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch on the displayed object for a predetermined phase (touch begin), a first liftoff for a predetermined phase (touch end), a second touch on the displayed object for a predetermined phase (touch begin), and a second liftoff for a predetermined phase (touch end). In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and a liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
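For illustration, the sketch below matches a received sub-event sequence against a double-tap definition and reports a possible/recognized/failed outcome. Timing phases and object identity are deliberately omitted, and the sub-event names are assumptions made for the example.

```swift
import Foundation

// Hedged sketch: comparing a received sub-event sequence against an event
// definition such as "event 1 (double tap)" described above.

enum SubEvent { case touchBegin, touchEnd, touchMove, touchCancel }

/// The double-tap definition: touch begin, touch end, touch begin, touch end
/// (all on the same displayed object; object identity is elided here).
let doubleTapDefinition: [SubEvent] = [.touchBegin, .touchEnd, .touchBegin, .touchEnd]

enum RecognizerState { case possible, recognized, failed }

/// Mirrors the outcomes described above: a partial sequence that still matches
/// a prefix of the definition is "possible"; an exact match is "recognized";
/// anything else fails and later sub-events would be ignored.
func match(_ received: [SubEvent], against definition: [SubEvent]) -> RecognizerState {
    if received.count < definition.count {
        return received.elementsEqual(definition.prefix(received.count)) ? .possible : .failed
    }
    return received == definition ? .recognized : .failed
}

print(match([.touchBegin, .touchEnd], against: doubleTapDefinition))              // possible
print(match(doubleTapDefinition, against: doubleTapDefinition))                   // recognized
print(match([.touchBegin, .touchMove, .touchEnd], against: doubleTapDefinition))  // failed
```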
In some implementations, the event definitions 187 include definitions of events for respective user interface objects. In some implementations, the event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object that triggered the hit test.
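A small sketch, with assumed types, of the object-level hit test just described: find which displayed object contains the touch and activate the event handler associated with that object. The closure-based handler model is an assumption for the example.

```swift
import Foundation

// Illustrative sketch: hit-testing displayed objects and activating the
// handler associated with the object that was hit.

struct DisplayedObject {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double)
    let handler: (String) -> Void   // stands in for event handler 190

    func contains(x: Double, y: Double) -> Bool {
        x >= frame.x && x < frame.x + frame.width &&
        y >= frame.y && y < frame.y + frame.height
    }
}

/// Runs the hit test and, if an object is hit, activates its handler.
func dispatchTouch(at x: Double, _ y: Double, among objects: [DisplayedObject]) {
    guard let hit = objects.first(where: { $0.contains(x: x, y: y) }) else { return }
    hit.handler(hit.name)
}

let objects = [
    DisplayedObject(name: "icon A", frame: (0, 0, 50, 50),
                    handler: { print("\($0) tapped") }),
    DisplayedObject(name: "icon B", frame: (60, 0, 50, 50),
                    handler: { print("\($0) tapped") }),
]
dispatchTouch(at: 70, 10, among: objects)  // prints "icon B tapped"
```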
In some embodiments, the definition of the respective event (187) further includes a delay action that delays delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to an event type of the event recognizer.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any of the events in the event definition 186, the respective event recognizer 180 enters an event impossible, event failed, or event end state after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively engaged event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact or are able to interact with each other. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to different levels in a view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are recognized, the respective event recognizer 180 activates event handler 190 associated with the event. In some implementations, the respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending of) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some implementations, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about the sub-event without activating the event handler. Instead, the sub-event delivery instructions deliver the event information to an event handler associated with the sub-event sequence or to an actively engaged view. Event handlers associated with the sequence of sub-events or with the actively engaged views receive the event information and perform a predetermined process.
In some embodiments, the data updater 176 creates and updates data used in the application 136-1. For example, the data updater 176 updates a telephone number used in the contact module 137 or stores a video file used in the video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, the object updater 177 creates a new user interface object or updates the location of the user interface object. GUI updater 178 updates the GUI. For example, the GUI updater 178 prepares the display information and sends the display information to the graphics module 132 for display on a touch-sensitive display.
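As a hedged illustration of the division of labor just described, the sketch below separates data updates, object updates, and GUI refreshes into three objects. Printing stands in for sending display information to a graphics module, and every name is an assumption made for the example.

```swift
import Foundation

// Illustrative sketch, under assumed names: one object updates application
// data, another updates user interface objects, and a third prepares display
// information for output.

final class DataUpdater {
    private(set) var phoneNumbers: [String: String] = [:]
    func update(name: String, number: String) { phoneNumbers[name] = number }
}

final class ObjectUpdater {
    private(set) var objectPositions: [String: (x: Double, y: Double)] = [:]
    func move(object: String, to x: Double, _ y: Double) {
        objectPositions[object] = (x, y)
    }
}

final class GUIUpdater {
    /// Prepares display information and hands it off for display; printing
    /// stands in for sending data to a graphics module.
    func refresh(with positions: [String: (x: Double, y: Double)]) {
        for (object, p) in positions.sorted(by: { $0.key < $1.key }) {
            print("draw \(object) at (\(p.x), \(p.y))")
        }
    }
}

let data = DataUpdater()
let objects = ObjectUpdater()
let gui = GUIUpdater()

data.update(name: "Alice", number: "555-0100")   // data updater path
objects.move(object: "contact card", to: 16, 80) // object updater path
gui.refresh(with: objects.objectPositions)       // GUI updater path
```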
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, the data updater 176, the object updater 177, and the GUI updater 178 are included in a single module of the respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be appreciated that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs for operating multifunction device 100 with input devices, not all of which are initiated on touch screens. For example, mouse movements and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements on a touchpad, such as taps, drags, scrolls, and the like; pen stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events that define an event to be recognized.
Fig. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within a user interface (UI) 200. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
In some embodiments, the device 100 includes a touch screen 112, a menu button 204, a push button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. Push button 206 is optionally used to turn the device on/off by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In an alternative embodiment, the device 100 also accepts verbal input through the microphone 113 for activating or deactivating certain functions. The device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch screen 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of the device 100.
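The press-duration behavior of push button 206 described above can be sketched as follows (in Swift). The 2-second threshold is an assumed placeholder; the disclosure does not specify the predefined interval, and the names are illustrative only.

```swift
// Sketch: holding push button 206 past a predefined interval toggles
// power, while releasing before the interval elapses locks the device
// (or unlocks it / begins an unlock process).
import Foundation

enum PushButtonAction {
    case togglePower   // held for at least the predefined interval
    case lockOrUnlock  // released before the interval elapsed
}

func action(forPressDuration duration: TimeInterval,
            predefinedInterval: TimeInterval = 2.0) -> PushButtonAction {
    duration >= predefinedInterval ? .togglePower : .lockOrUnlock
}

// Example: a quick 0.3-second press locks (or starts unlocking) the device.
let result = action(forPressDuration: 0.3) // .lockOrUnlock
```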
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). The device 300 generally includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 with a display 340, which is typically a touch screen display. The I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 for generating tactile outputs on the device 300 (e.g., similar to the tactile output generator 167 described above with reference to fig. 1A), and sensors 359 (e.g., an optical sensor, an acceleration sensor, a proximity sensor, a touch-sensitive sensor, and/or a contact intensity sensor (similar to the contact intensity sensor 165 described above with reference to fig. 1A)). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures, or a subset thereof, similar to those stored in memory 102 of portable multifunction device 100 (fig. 1A). Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (fig. 1A) optionally does not store these modules.
Each of the above-identified elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing the functions described above. The above-identified modules or computer programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces optionally implemented on, for example, portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
Signal strength indicators 402 for wireless communications such as cellular signals and Wi-Fi signals;
time 404;
bluetooth indicator 405;
battery status indicator 406;
tray 408 with icons for commonly used applications, such as:
an icon 416 labeled "phone" for phone module 138, the icon 416 optionally including an indicator 414 of the number of missed calls or voicemail messages;
an icon 418 labeled "mail" for email client module 140, the icon 418 optionally including an indicator 410 of the number of unread emails;
an icon 420 labeled "browser" for browser module 147; and
an icon 422 labeled "iPod" for video and music player module 152 (also referred to as iPod (trademark of Apple Inc.) module 152); and
icons of other applications, such as:
icon 424 labeled "message" for IM module 141;
icon 426 labeled "calendar" for calendar module 148;
icon 428 labeled "photo" for image management module 144;
icon 430 labeled "camera" for camera module 143;
icon 432 labeled "online video" for online video module 155;
icon 434 labeled "stock market" for stocks widget 149-2;
icon 436 labeled "map" for map module 154;
icon 438 labeled "weather" for weather widget 149-1;
icon 440 labeled "alarm" for alarm widget 149-4;
icon 442 labeled "fitness support" for fitness support module 142;
icon 444 labeled "note" for notes module 153; and
an icon 446 labeled "settings" for a settings application or module, which provides access to the settings of device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, the icon 422 of the video and music player module 152 is optionally labeled "music" or "music player". Other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes the name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 of fig. 3) separate from a display 450 (e.g., touch screen display 112). The device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 359) for detecting the intensity of the contact on the touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of the device 300.
While some of the examples below will be given with reference to inputs on touch screen display 112 (where the touch sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch sensitive surface separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to the primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). According to these embodiments, the device detects contact (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at a location corresponding to a respective location on the display (e.g., 460 corresponds to 468 and 462 corresponds to 470 in fig. 4B). In this way, when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separated from the display (e.g., 450 in FIG. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462 and movement thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
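One way the correspondence described above could be computed is sketched below (in Swift): a contact on the separate touch-sensitive surface (e.g., 451) is scaled along each primary axis to a location on the display (e.g., 450). The proportional mapping is an assumption for illustration; the text only states that locations correspond.

```swift
// Sketch: map a contact on the touch-sensitive surface to the
// corresponding display location by scaling along each primary axis.
import CoreGraphics

func displayLocation(forContact contact: CGPoint,
                     surfaceSize: CGSize,
                     displaySize: CGSize) -> CGPoint {
    CGPoint(x: contact.x / surfaceSize.width * displaySize.width,
            y: contact.y / surfaceSize.height * displaySize.height)
}

// Example: a contact (like 460) at the midpoint of the surface maps to
// the midpoint of the display (like 468).
let mapped = displayLocation(forContact: CGPoint(x: 300, y: 200),
                             surfaceSize: CGSize(width: 600, height: 400),
                             displaySize: CGSize(width: 1280, height: 800))
// mapped == CGPoint(x: 640, y: 400)
```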
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, finger swipe gestures), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., instead of a contact), followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click while the cursor is located over the position of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be appreciated that multiple computer mice are optionally used simultaneously, or that a mouse and finger contacts are optionally used simultaneously.
Fig. 5A illustrates an exemplary personal electronic device 500. The device 500 includes a body 502. In some embodiments, device 500 may include some or all of the features described with respect to devices 100 and 300 (e.g., fig. 1A-4B). In some implementations, the device 500 has a touch sensitive display 504, hereinafter referred to as a touch screen 504. In addition to or in lieu of touch screen 504, device 500 has a display and a touch-sensitive surface. As with devices 100 and 300, in some implementations, touch screen 504 (or touch-sensitive surface) optionally includes one or more intensity sensors for detecting the intensity of an applied contact (e.g., touch). One or more intensity sensors of the touch screen 504 (or touch sensitive surface) may provide output data representative of the intensity of the touch. The user interface of the device 500 may respond to touches based on the intensity of the touches, meaning that touches of different intensities may invoke different user interface operations on the device 500.
Exemplary techniques for detecting and processing touch intensity are found, for example, in the following related patent applications: International Patent Application Serial No. PCT/US2013/040061, filed May 8, 2013, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application", published as WIPO Patent Publication No. WO/2013/169849; and International Patent Application Serial No. PCT/US2013/069483, filed November 11, 2013, entitled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships", published as WIPO Patent Publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 500 has one or more input mechanisms 506 and 508. The input mechanisms 506 and 508 (if included) may be in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, the device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow for attachment of the device 500 with, for example, a hat, glasses, earrings, necklace, shirt, jacket, bracelet, watchband, bracelet, pants, leash, shoe, purse, backpack, or the like. These attachment mechanisms allow the user to wear the device 500.
Fig. 5B depicts an exemplary personal electronic device 500. In some embodiments, the device 500 may include some or all of the components described with reference to fig. 1A, 1B, and 3. The device 500 has a bus 512 that operatively couples an I/O section 514 with one or more computer processors 516 and memory 518. The I/O section 514 may be connected to a display 504, which may have a touch-sensitive component 522 and optionally an intensity sensor 524 (e.g., a contact intensity sensor). In addition, the I/O section 514 may be connected to a communication unit 530 for receiving application and operating system data using Wi-Fi, Bluetooth, Near Field Communication (NFC), cellular, and/or other wireless communication technologies. The device 500 may include input mechanisms 506 and/or 508. For example, the input mechanism 506 is optionally a rotatable input device, or a depressible and rotatable input device. In some examples, the input mechanism 508 is optionally a button.
In some examples, the input mechanism 508 is optionally a microphone. Personal electronic device 500 optionally includes various sensors, such as a GPS sensor 532, an accelerometer 534, an orientation sensor 540 (e.g., compass), a gyroscope 536, a motion sensor 538, and/or combinations thereof, all of which are operatively connected to I/O section 514.
The memory 518 of the personal electronic device 500 may include one or more non-transitory computer-readable storage media for storing computer-executable instructions that, when executed by the one or more computer processors 516, may, for example, cause the computer processors to perform the techniques described below, including processes 900 and 1000 (fig. 9 and 10A-10B). A computer-readable storage medium may be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium may include, but is not limited to, magnetic storage devices, optical storage devices, and/or semiconductor storage devices. Examples of such storage devices include magnetic disks, optical disks based on CD, DVD, or Blu-ray technology, and persistent solid-state memories such as flash memory and solid-state drives. The personal electronic device 500 is not limited to the components and configuration of fig. 5B, but may include other components or additional components in a variety of configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object that is optionally displayed on a display screen of device 100, 300, and/or 500 (fig. 1A, 3, and 5A-5B). For example, an image (e.g., an icon), a button, and text (e.g., a hyperlink) optionally each constitute an affordance.
As used herein, the term "focus selector" refers to an input element that indicates the current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a "focus selector" so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a contact detected on the touch screen acts as a "focus selector" so that when an input (e.g., a press input by the contact) is detected on the touch screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of the user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on the touch screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user intends to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user intends to activate the respective button (as opposed to other user interface elements shown on the display of the device).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, or 10 seconds) relative to a predefined event (e.g., after detecting the contact, before or after detecting liftoff of the contact, before or after detecting a start of movement of the contact, before or after detecting an end of the contact, and/or before or after detecting a decrease in intensity of the contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a value of the top 10 percent of the intensities of the contact, a value at half of the maximum of the intensities of the contact, a value at 90 percent of the maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by the user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some implementations, a comparison between the characteristic intensity and one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
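The characteristic-intensity logic described above can be sketched as follows (in Swift): derive a single characteristic value from intensity samples (here the average, one of the options listed above) and compare it against two thresholds to choose among three operations. All names and threshold values are illustrative assumptions.

```swift
// Sketch of characteristic intensity and threshold comparison.
enum ContactOperation { case first, second, third }

func characteristicIntensity(of samples: [Double]) -> Double {
    guard !samples.isEmpty else { return 0 }
    return samples.reduce(0, +) / Double(samples.count) // mean intensity
}

func operation(forSamples samples: [Double],
               firstThreshold: Double,
               secondThreshold: Double) -> ContactOperation {
    let intensity = characteristicIntensity(of: samples)
    if intensity <= firstThreshold { return .first }   // does not exceed first
    if intensity <= secondThreshold { return .second } // between thresholds
    return .third                                      // exceeds second
}

// Example: samples averaging 0.6 against thresholds 0.4 and 0.8 yield .second.
let op = operation(forSamples: [0.5, 0.6, 0.7],
                   firstThreshold: 0.4, secondThreshold: 0.8)
```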
Attention is now directed to embodiments of a user interface ("UI") and associated processes implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Fig. 6A-6Y illustrate an exemplary user interface for accessing media processing styles using a computer system, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 9 and 10A-10B.
Fig. 6A shows a computer system 600 displaying a camera user interface that includes a live preview 630 extending from the top of the display to the bottom of the display. In some implementations, the live preview 630 extends into only a portion of the display, such as a portion of the display having the camera display area 604. In some embodiments, computer system 600 includes one or more features of device 100, device 300, and/or device 500.
Live preview 630 shows a particular scene in the field of view of one or more cameras of computer system 600 (e.g., a person standing in front of a mountain and next to a flower in fig. 6A). Live preview 630 is a representation of the (e.g., partial) field of view ("FOV") of at least a first camera of the one or more cameras of computer system 600. Live preview 630 is based on the image detected in the FOV. In some embodiments, computer system 600 captures images using multiple camera sensors and combines them to display live preview 630. In some implementations, the computer system 600 captures images using a single camera sensor to display the live preview 630.
The camera user interface of fig. 6A includes an indication identifier area 602 and a control area 606. The indication identifier area 602 and the control area 606 are overlaid on the live preview 630 so that the indication identifiers and controls can be displayed simultaneously with the live preview 630. The camera display area 604 is positioned between the indication identifier area 602 and the control area 606 and does not substantially overlap with the indication identifiers or controls (e.g., affordances).
As shown in fig. 6A, the indication identifier area 602 includes indication identifiers such as flash indication identifier 602a, media processing style indication identifier 602b, animated image indication identifier 602c, and raw capture indication identifier 602e. Flash indication identifier 602a indicates whether the flash is on, off, or in another mode (e.g., an automatic mode). In fig. 6A, flash indication identifier 602a indicates that the flash is off. The media processing style indication identifier 602b indicates whether the computer system 600 is displaying a media processing style user interface and/or selectable user interface objects for controlling a media processing style applied to visual content (e.g., data) captured by one or more cameras of the computer system 600. In fig. 6A, media processing style indication identifier 602b is displayed in an inactive state, indicating that the plurality of selectable user interface objects for controlling the media processing style is not displayed. In some implementations, a media processing style has (e.g., defines and/or is defined by) a set of media processing parameters. In some embodiments, one or more of the parameters represent visual characteristics (e.g., color temperature, tone, tint, brightness, saturation, chroma, coldness, and/or warmth) and/or depth parameters that the computer system 600 can use to alter visual content captured by one or more cameras of the computer system 600. In some implementations, each parameter is associated with (or has) a value that affects how the computer system 600 changes visual content when a particular media processing style with the corresponding parameter is applied to the visual content of the media. In some embodiments, one or more of the media processing styles are predefined and not created by a user of computer system 600 (e.g., pre-installed on the computer system without a user of computer system 600 defining the media processing style). In some embodiments, one or more of the media processing styles are customized, modified, and/or created by a user of computer system 600. In some implementations, each media processing style has the same types of parameters (e.g., parameters corresponding to the same types of visual characteristics). In some embodiments, one or more of the media processing styles have the same types of parameters but have different values for one or more of the types of parameters corresponding to the one or more media processing styles.
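A media processing style as described above, i.e., a named set of parameter values, can be sketched as a simple data structure (in Swift). The tone/color-temperature parameters and the default values ("standard" 0/0, "vivid" 80/0) follow the figures discussed later in this section; the type itself and the color-temperature scale are illustrative assumptions.

```swift
// Sketch: a media processing style as a named set of parameter values.
struct MediaProcessingStyle {
    let name: String
    var tone: Int             // e.g., within a -100...100 scale
    var colorTemperature: Int // scale assumed for illustration
}

// Predefined styles share the same parameter types but differ in values.
let standardStyle = MediaProcessingStyle(name: "standard", tone: 0,
                                         colorTemperature: 0)
let vividStyle = MediaProcessingStyle(name: "vivid", tone: 80,
                                      colorTemperature: 0)
```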
The animated image indication identifier 602c indicates whether the camera is configured to capture a single image or multiple images (e.g., in response to detecting a request to capture media). In some embodiments, the indication identifier area 602 is overlaid on the live preview 630 and optionally includes a colored (e.g., gray and/or translucent) overlay. The raw capture indication identifier 602e indicates whether the computer system 600 is configured to store and/or capture media in a raw media format. In fig. 6A, raw capture indication identifier 602e is displayed in an inactive state, which indicates that the computer system 600 is configured to store and capture media in a non-raw format (e.g., a Joint Photographic Experts Group (JPEG) format and/or a High Efficiency Image Coding (HEIC) format). In some embodiments, the file size of media stored in the raw format is greater than the file size of the same media stored in a non-raw format. In some embodiments, media stored in the raw format includes more information than the same media stored in a non-raw format. In some embodiments, this additional information enables media stored in the raw format to be edited after capture in more ways than media stored in a non-raw format. In some embodiments (as further discussed below with respect to fig. 6V-6Y), computer system 600 stops displaying media processing style indication identifier 602b when raw capture indication identifier 602e is displayed in an active state (e.g., as shown in fig. 6X) and/or when computer system 600 is configured to store and capture media in the raw format. In some embodiments, when the raw capture indication identifier 602e is displayed in an active state, the computer system 600 displays the media processing style indication identifier 602b in an inactive state.
As shown in fig. 6A, the camera display area 604 includes a live preview 630 and zoom controls 622. Zoom controls 622 include 0.5x zoom control 622a, 1x zoom control 622b, and 2x zoom control 622c. In fig. 6A, 1x zoom control 622b is selected, indicating that live preview 630 is displayed at a 1x zoom level.
As shown in fig. 6A, the control region 606 includes a shutter control 610, a camera switcher control 614, a representation 612 of a media collection, and camera mode controls 620. The shutter control 610, when activated, causes the computer system 600 to capture media (e.g., photos) using one or more camera sensors, based on the current state of the live preview 630 and the current state of the camera application. The captured media is stored locally at the computer system 600 and/or sent to a remote server for storage. The camera switcher control 614, when activated, causes the computer system 600 to switch to displaying the field of view of a different camera in the live preview 630, such as by switching between a rear-facing camera sensor and a front-facing camera sensor. The representation of the media collection 612 shown in fig. 6A is a representation of the media (e.g., images, videos) most recently captured by the computer system 600. In some embodiments, in response to detecting an input on the media collection 612, the computer system 600 displays a user interface similar to the user interfaces shown in fig. 6S-6U (discussed below).
As shown in fig. 6A, camera mode controls 620 include a slow motion mode control 620a, a video mode control 620b, a photo mode control 620c, a portrait mode control 620d, and a panoramic mode control 620e. As shown in fig. 6A, photo mode control 620c is selected, as indicated by photo mode control 620c being bolded. While photo mode control 620c is selected, computer system 600 operates in a photo capture mode and initiates capture of photo media (e.g., still photos) in response to computer system 600 detecting input directed to shutter control 610. Photo media captured by computer system 600 represents the live preview 630 that is displayed when (or after) the input directed to shutter control 610 is detected. In some embodiments, in response to detecting an input directed to slow motion mode control 620a, computer system 600 operates in a slow motion media capture mode and initiates capture of media (e.g., slow motion video, or video with a slow motion effect applied) that is played back at a slower speed than the speed at which the media was captured. In some implementations, in response to detecting an input directed to video mode control 620b, computer system 600 operates in a video capture mode and initiates capture of video media (e.g., video). In some embodiments, in response to detecting an input directed to portrait mode control 620d, computer system 600 operates in a portrait mode and initiates capture of portrait media (e.g., a still photograph with a simulated foreground effect or simulated depth effect applied). In some implementations, in response to detecting an input directed to panoramic mode control 620e, computer system 600 operates in a panoramic mode and initiates capture of panoramic media (e.g., panoramic photos). In some embodiments, the indicators and/or controls displayed on the camera user interface are based on the selected mode (e.g., and/or the computer system 600 is configured to operate based on the selected camera mode).
At fig. 6A, computer system 600 is displaying live preview 630 using standard style 634a, as indicated by live preview 630 being overlaid with a horizontal line pattern. It should be appreciated that the computer system 600 need not display the patterns shown in the figures (e.g., a collection of lines in a particular direction (e.g., horizontal, vertical, or oblique)) across respective portions of a representation of media (e.g., live preview 630 and/or a representation of previously captured media, such as in fig. 6S-6U). For example, at fig. 6A, computer system 600 need not display a collection of horizontal lines when displaying live preview 630. Rather, the patterns shown in the figures symbolize that a particular portion of a representation of media is displayed using a particular media processing style (e.g., standard style 634a in fig. 6A). As discussed above, when a particular media processing style is applied to the visual content of media, the visual characteristics (e.g., color temperature, tone, tint, brightness, saturation, chroma, coldness, and/or warmth) and/or depth parameters of the representation of the media displayed using that media processing style appear different from the visual characteristics of a representation of the media displayed without the media processing style.
At fig. 6A, computer system 600 displays live preview 630 using standard style 634a because standard style 634a is the currently selected media processing style (e.g., as further discussed with respect to fig. 6B and 6I-6J) and computer system 600 is configured to operate in a media processing style application mode (e.g., as further discussed with respect to fig. 8A-8C). In some implementations, when another media processing style is the currently selected media processing style, computer system 600 uses that other media processing style to display live preview 630 at fig. 6A. In some embodiments, when the computer system 600 is not configured to operate in the media processing style application mode, the computer system 600 does not display the media processing style indication identifier 602b and/or does not apply a media processing style to a representation of the media (e.g., the live preview 630 and/or previously captured media) (e.g., as further described with respect to fig. 8A-8C). At fig. 6A, computer system 600 detects tap input 650a on (e.g., directed to and/or at a location corresponding to) shutter control 610.
At fig. 6B, in response to detecting tap input 650a, computer system 600 initiates capture of media represented by the FOV and updates media collection 612 to include a representation of the captured media (e.g., live preview 630 of fig. 6A). Notably, the representation in the media collection 612 of fig. 6B ("the representation of fig. 6B") appears different from the representation in the media collection 612 of fig. 6A ("the representation of fig. 6A"). The representation of fig. 6B has the standard style 634a applied (e.g., symbolized by horizontal lines), and the representation of fig. 6A does not have the standard style 634a applied. The representation in media collection 612 of fig. 6B has standard style 634a applied because computer system 600 was using standard style 634a to display live preview 630 and/or standard style 634a was the currently selected media processing style to be applied to captured media when tap input 650a was detected. On the other hand, the representation in the media collection 612 of fig. 6A does not have the standard style 634a applied. In some embodiments, this is because the media represented by the representation in the media collection 612 of fig. 6A was not captured while the standard style 634a was being used to display the live preview 630 and/or because the standard style 634a was not the currently selected media processing style when the request to capture the media corresponding to the representation of fig. 6A was detected. In some implementations, to display the representation of fig. 6B, the computer system 600 changes visual characteristics of the media corresponding to the representation of fig. 6B (e.g., in addition to normal and/or default (e.g., non-user-modified) changes in the visual characteristics of captured media that may occur when the computer system 600 is not operating in the media processing style application mode). In some implementations, the representation of fig. 6A is not displayed by changing the visual characteristics of the media represented by the representation of fig. 6A (e.g., except for normal and/or default changes in the visual characteristics of captured media that may occur when the computer system 600 is not operating in the media processing style application mode and/or that may occur based on one or more elements in the FOV (e.g., illumination and/or shading)). At fig. 6B, computer system 600 detects tap input 650b on media processing style indication identifier 602b.
As shown in fig. 6C, in response to detecting tap input 650b, computer system 600 displays a media processing style user interface and/or selectable user interface objects for controlling the media processing style. In particular, computer system 600 uses different media processing styles to display different portions of live preview 630. As shown in fig. 6C, in response to detecting tap input 650b, computer system 600 continues to display a portion at the center of live preview 630 ("middle section") using standard style 634a and stops displaying a portion of live preview 630 to the left of the middle section ("left section") and a portion of live preview 630 to the right of the middle section ("right section") using standard style 634a. For illustrative purposes only, fig. 6C shows left and right boundaries 642a and 642b (e.g., which are optionally displayed by computer system 600 at fig. 6C). As used herein and for ease of explanation, the left section is the portion of live preview 630 to the left of left boundary 642a, the middle section is the portion of live preview 630 between left boundary 642a and right boundary 642b, and the right section is the portion of live preview 630 to the right of right boundary 642b. However, these respective sections are referenced for ease of discussion, and computer system 600 may apply one or more media processing styles to any number of portions/sections of a representation of media and to portions/sections of many different sizes, shapes, and/or configurations.
As shown in fig. 6C, in response to detecting tap input 650b, the left and right sections are displayed without using any media processing style and/or without using any media processing style different from the one used before tap input 650b was detected (e.g., the left section of fig. 6C is shown the same as the left section of fig. 6B). In response to detecting tap input 650b, the computer system reduces the visual saliency of the right and left sections by displaying gray overlays on those sections. Thus, as shown in fig. 6C, the right and left sections visually blend into the indication identifier area 602 and the control area 606, which are also displayed with a gray overlay. As shown in fig. 6C, the right section includes a visual element 660b that is a portion of the outline/frame of an object. The visual element 660b indicates that another media processing style, different from the standard style 634a, can be applied to the live preview 630. Here, visual element 660b is displayed in the right section because computer system 600 has determined that one or more other media processing styles (e.g., indicated by paging points 638 and described below) can be selected in response to detecting an input in a particular direction (e.g., the direction of movement input (e.g., swipe input or drag input) 650d of fig. 6D, described below). In addition, no visual element is displayed in the left section because the computer system 600 has determined that one or more other media processing styles (e.g., indicated by paging points 638 and described below) cannot be selected in response to detecting an input in the other direction (e.g., a direction opposite that of movement input 650d of fig. 6D, described below). As shown in fig. 6C, computer system 600 also displays a separator 640 between the middle section and the right section. The separator 640 is displayed as a visual element to which no media processing style is applied and whose visual saliency is not reduced/increased. In some embodiments, a separator 640 is displayed between portions to which different media processing styles are applied and/or which include visual elements. In some implementations, the separator 640 is used to delimit and/or define a respective portion of the live preview 630 to which a media processing style is applied and another portion of the live preview 630 to which the media processing style is not applied. In some implementations, the separator 640 is only displayed between applications of two adjacent and/or different media processing styles to a representation of media (such as live preview 630).
As shown in fig. 6C, in response to detecting tap input 650b, computer system 600 also displays a standard style identification 636a (e.g., "standard"), which indicates that standard style 634a is the currently selected media processing style. As shown in fig. 6C, a majority of live preview 630 in camera display area 604 is displayed using the currently selected media processing style (e.g., standard style 634a). In some implementations, a majority of live preview 630 in the camera display area 604 is not displayed using the currently selected media processing style.
As shown in FIG. 6C, in response to detecting tap input 650b, computer system 600 also displays paging points 638, including standard paging point 638a, vivid paging point 638b, gorgeous paging point 638C, and retro paging point 638d. Here, the standard paging point 638a is shown as selected (e.g., represented by an open paging point) because the middle section of the representation of the media is displayed using the standard media processing style. In some implementations, the standard paging point 638a is displayed as selected because a larger portion of the representation of the media (e.g., live preview 630) is displayed using the standard style 634a, but not any other media processing style.
Further, as shown in fig. 6C, each paging point (e.g., of paging points 638) corresponds to a media processing style in the set of available media processing styles. The set of available media processing styles is the set of media processing styles that computer system 600 can use to display a portion of a representation of media. Thus, in fig. 6C, computer system 600 can display a portion of a representation of media using at least four available media processing styles. In some implementations, when a paging point is added to paging points 638, the computer system 600 adds the media processing style corresponding to (e.g., represented by) the added paging point to the set of available media processing styles. In some implementations, when a paging point is removed from paging points 638, the computer system 600 removes the media processing style corresponding to (e.g., represented by) the removed paging point from the set of available media processing styles. In some implementations, the computer system 600 displays one or more other indications (e.g., other than paging points) to represent the multiple media processing styles that can be used to display a representation of the media and/or the current media processing style being used to display a representation of the media.
As shown in fig. 6C, in response to detecting tap input 650b, computer system 600 stops displaying zoom controls 622 and displays standard style control 626a at the location where zoom controls 622 were previously displayed in fig. 6B. Standard style control 626a includes control 628 (e.g., discussed below with respect to fig. 7S, in response to detecting tap input 750s), tone parameter control 626a1, and color temperature parameter control 626a2. In some implementations, the color temperature parameter controls a color temperature bias of the media item (e.g., whether the colors in the media item are shifted toward cool colors (such as blue, green, and/or purple) and/or toward warm colors (such as red, yellow, and/or orange)). In some implementations, the tone parameter controls the saturation of the media item. In some implementations, the tone parameter controls the difference (e.g., in luminosity, contrast, brightness, and/or chromaticity) between bright and dark regions of an image of the media item. In some embodiments, the tone parameter controls both the saturation of the image and the difference between the bright and dark regions (e.g., increasing the saturation and the difference between the bright and dark regions in one direction, and decreasing the saturation and the difference between the bright and dark regions in the other direction). In some embodiments, computer system 600 is aware of elements in the scene (e.g., humans, animals, pets, trees, flowers, birds, buildings, sky, landscape, mountains, clothing, skin, sunsets, and/or water), and adjustment of the tone parameter has different effects on different elements of the scene, such that adjustment of the tone parameter results in different degrees of adjustment between bright and dark regions of the image and/or different adjustments of saturation for different elements in the scene (e.g., increasing the saturation of the sky, a mountain, or a landscape more than the saturation of a human or a pet). For example, when the tone parameter is increased, the amount of saturation applied to a person's skin is less than the amount of saturation applied to the person's clothing (optionally, the amount of saturation increase applied to the person's skin is zero or close to zero), and/or the amount of saturation applied to the person's clothing is less than the amount of saturation applied to the scenery or sunset behind the person.
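The scene-aware tone adjustment described above can be sketched as a single tone increase applied with different weights to different recognized scene elements (skin least, clothing more, scenery most), as follows (in Swift). The weight values below are illustrative assumptions only; the disclosure does not specify them.

```swift
// Sketch: element-dependent weighting of a tone increase.
enum SceneElement { case skin, pet, clothing, scenery }

func saturationIncrease(forToneIncrease tone: Double,
                        element: SceneElement) -> Double {
    let weight: Double
    switch element {
    case .skin:     weight = 0.0 // zero or close to zero, per the description
    case .pet:      weight = 0.2
    case .clothing: weight = 0.5
    case .scenery:  weight = 1.0 // sky, mountains, landscape, sunsets
    }
    return tone * weight
}

// Example: a tone increase of 10 adds no saturation to skin but the full
// amount to the scenery behind the person.
let skinDelta = saturationIncrease(forToneIncrease: 10, element: .skin)       // 0.0
let sceneryDelta = saturationIncrease(forToneIncrease: 10, element: .scenery) // 10.0
```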
Tone parameter control 626a1 includes a tone parameter identification 626a1a, a current tone value 626a1b, and a tone value range indication identifier 626a1c. Tone parameter identification 626a1a represents the type of parameter (e.g., the tone parameter) controlled by tone parameter control 626a1. The current tone value 626a1b indicates the current value of the tone parameter of the standard style 634a. The tone value range indication identifier 626a1c indicates a portion of a range of values that includes the current value of the tone parameter. Similarly, the color temperature parameter control 626a2 includes a color temperature parameter identification 626a2a, a current color temperature value 626a2b, and a color temperature value range indication identifier 626a2c. The color temperature parameter identification 626a2a represents the type of parameter (e.g., the color temperature parameter) controlled by color temperature parameter control 626a2. The current color temperature value 626a2b indicates the current value of the color temperature parameter of the standard style 634a. The color temperature value range indication identifier 626a2c indicates a portion of a range of values that includes the current value of the color temperature parameter. It is noted that the current tone value 626a1b and the current color temperature value 626a2b are the default values (e.g., "0") of the tone parameter of the standard style 634a and the color temperature parameter of the standard style 634a, respectively. In some implementations, default values of the respective parameters of a media processing style are predefined and set without user input.
As shown in fig. 6C, in response to detecting tap input 650b, computer system 600 updates the visual appearance of media processing style indication identifier 602b. In particular, computer system 600 updates media processing style indication identifier 602b to an active state, which indicates that a media processing style user interface and/or selectable user interface objects for controlling a media processing style are displayed. In some embodiments, computer system 600 updates media processing style indication identifier 602b in other ways to indicate that the media processing style application mode is active, such as changing its color, removing a slash from it (e.g., a slash like the one through flash indication identifier 602a), highlighting it, and/or reducing/increasing the size of media processing style indication identifier 602b. In some embodiments, computer system 600 updates media processing style indication identifier 602b to indicate the currently selected media processing style. Thus, in some embodiments, when a new media processing style becomes the current media processing style, computer system 600 updates the appearance of media processing style indication identifier 602b. In some implementations, in response to detecting a tap input on media processing style indication identifier 602b at fig. 6C, the computer system redisplays the user interface of fig. 6A. In some embodiments, in response to detecting a tap input on animated image indication identifier 602c, computer system 600 redisplays the user interface of fig. 6A. At fig. 6C, computer system 600 detects tap input 650c on shutter control 610.
As shown in fig. 6D, in response to detecting tap input 650c, computer system 600 initiates capture of media represented by the FOV and updates media collection 612 to include a representation of the captured media (e.g., live preview 630 of fig. 6C) ("the representation of fig. 6D"). The representation of the captured media of fig. 6D has the standard style 634a applied (e.g., symbolized by a set of horizontal lines) and is the same as the representation of fig. 6B, which also has the standard style 634a applied. The representation of fig. 6B is the same as the representation of fig. 6D even though the representation of fig. 6B was captured while the standard style 634a was used to display the entire live preview 630, whereas the representation of fig. 6D was captured while the standard style 634a was used to display only the middle section. At fig. 6D, standard style 634a has been applied to the right, middle, and left sections of the representation of fig. 6D, although standard style 634a was not used to display the right and left sections of live preview 630 (e.g., in response to detecting tap input 650c). At fig. 6D, the standard style 634a is applied to a larger portion (and/or all) of the visual content of the captured media than the portion of the visual content in the FOV to which it was applied when displaying live preview 630 of fig. 6C. At fig. 6D, computer system 600 detects a first portion of movement input 650d in a left direction over live preview 630. It should be appreciated that movement input 650d (e.g., any portion of movement input 650d) can be detected at any location over live preview 630. In some implementations, no portion of movement input 650d is detected on the left section, on visual element 660b, and/or on a portion displayed using a particular media processing style (e.g., including the standard style 634a, the vivid style 634b, and/or the visual element 660b, as discussed below with respect to fig. 6E).
As shown in fig. 6E, in response to detecting the first portion of movement input 650d in the left direction (e.g., and while continuing to detect movement input 650d), the computer system 600 uses the standard style 634a to display a portion of live preview 630 (e.g., including the left section and the left portion of the middle section) and uses the vivid style 634b to display a portion of live preview 630 (e.g., including the right section and the right portion of the middle section). At fig. 6E, the vivid style 634b is indicated by a set of upward-sloping lines (e.g., lines running in the northeast direction). Referring back to fig. 6D, in fig. 6D computer system 600 did not use vivid style 634b to display any portion of live preview 630. As shown in fig. 6E, the standard style 634a is indicated by a different set of lines than the vivid style 634b to indicate that these media processing styles affect how live preview 630 is displayed in different ways. The sizes of the portion of live preview 630 displayed using standard style 634a and of the portion of live preview 630 displayed using vivid style 634b are based on the magnitude of the movement characteristics (e.g., including speed, direction, acceleration, and/or time) of the first portion of movement input 650d. In some implementations, in response to detecting a portion of movement input 650d having a higher magnitude than the portion of movement input 650d of fig. 6D, the computer system 600 displays a portion of live preview 630 using the standard style 634a that is smaller than the portion of live preview 630 of fig. 6E displayed using the standard style 634a, and displays a portion of live preview 630 using the vivid style 634b that is larger than the portion of live preview 630 of fig. 6E displayed using the vivid style 634b. In some implementations, in response to detecting a portion of movement input 650d having a lower magnitude than the portion of movement input 650d of fig. 6D, the computer system 600 displays a portion of live preview 630 using the standard style 634a that is larger than the portion of live preview 630 of fig. 6E displayed using the standard style 634a, and displays a portion of live preview 630 using the vivid style 634b that is smaller than the portion of live preview 630 of fig. 6E displayed using the vivid style 634b. Thus, in some implementations, the amount of live preview 630 displayed using a respective media processing style is based on the magnitude of the movement input. In some implementations, the computer system 600 uses movement input 650d to move the applications of the standard style 634a and the vivid style 634b. In some implementations, computer system 600 stops displaying visual element 660b as part of displaying the portion of live preview 630 using vivid style 634b in fig. 6E. In some implementations, as part of ceasing to display visual element 660b, computer system 600 displays an animation (e.g., a dissolve animation) of visual element 660b transitioning into at least a subset of the portion of live preview 630 displayed using vivid style 634b in fig. 6E. In some implementations, in response to detecting movement input 650d (e.g., before any movement of the input is detected), the computer system 600 displays an animation of visual element 660b transitioning into at least a subset of the portion of live preview 630 displayed using vivid style 634b in fig. 6E.
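One way the boundary between the two styles could track the movement input described above is sketched below (in Swift): the divider (like separator 640) follows the horizontal translation of the drag, so a larger-magnitude leftward movement shows less of the current style and more of the next one. This is purely an illustrative assumption about the underlying arithmetic.

```swift
// Sketch: the style boundary follows the drag, clamped to the preview.
import CoreGraphics

func separatorPosition(startX: CGFloat,
                       translationX: CGFloat,
                       previewWidth: CGFloat) -> CGFloat {
    min(max(startX + translationX, 0), previewWidth)
}

// Example: dragging left by 120 points moves the boundary from x = 400 to
// x = 280, widening the portion displayed using the next style.
let newX = separatorPosition(startX: 400, translationX: -120,
                             previewWidth: 800) // 280
```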
In some embodiments, while movement input 650d is being detected, the computer system 600 detects a tap input on shutter control 610 and, in response to detecting the tap input on shutter control 610, captures media to which the standard style 634a is applied (e.g., because the standard style 634a is the currently selected media processing style in fig. 6E). At fig. 6E, computer system 600 detects a second portion of movement input 650d in the left direction over live preview 630.
As shown in fig. 6F, in response to detecting the second portion of movement input 650d in the left direction (e.g., and while continuing to detect movement input 650d), the computer system 600 moves the applications of the standard style 634a and the vivid style 634b to the left (e.g., in the direction of the second portion of movement input 650d). As shown in fig. 6F, in response to detecting the second portion of movement input 650d in the left direction, the computer system 600 uses the standard style 634a to display a portion of live preview 630 (e.g., including the left section and a reduced portion of the middle section as compared to fig. 6E) and uses the vivid style 634b to display a portion of live preview 630 (e.g., including the right section and an increased portion of the middle section as compared to fig. 6E). Thus, based on the second portion of movement input 650d, the size of the portion of live preview 630 displayed using the standard style 634a in fig. 6F is smaller than the size of the portion of live preview 630 displayed using the standard style 634a in fig. 6E. In addition, based on the second portion of movement input 650d, the size of the portion of live preview 630 displayed using the vivid style 634b in fig. 6F is larger than the size of the portion of live preview 630 displayed using the vivid style 634b in fig. 6E. As the computer system 600 moves the applications of the standard style 634a and the vivid style 634b to the left, the computer system 600 moves the separator 640 (e.g., based on and/or consistent with the movement of movement input 650d) while maintaining the separator 640 between the portion of live preview 630 displayed using the standard style 634a and the portion of live preview 630 displayed using the vivid style 634b.
At fig. 6F, in response to detecting the second portion of movement input 650d, the computer system 600 ceases to display the standard style identification 636a and displays a vivid style identification 636b (e.g., "vivid") (e.g., at the location where the standard style identification 636a was previously displayed). In addition, computer system 600 also updates paging points 638 to indicate that vivid paging point 638b is selected (e.g., an open paging point) and standard paging point 638a is not selected (e.g., a solid/closed paging point). Here, the computer system 600 displays the vivid style identification 636b and displays the vivid paging point 638b as selected because it has been determined that a larger (or equal) portion of live preview 630 is displayed using the vivid style 634b than is displayed using the standard style 634a. Because of this determination, computer system 600 sets the vivid style 634b as the currently selected media processing style and replaces standard style control 626a with vivid style control 626b. As shown in fig. 6F, the vivid style control 626b includes control 628, a tone parameter control 626b1 (e.g., for controlling the tone parameter of the vivid style 634b), and a color temperature parameter control 626b2 (e.g., for controlling the color temperature parameter of the vivid style 634b), which are displayed using techniques similar to those described above with respect to fig. 6C (e.g., with respect to control 628, tone parameter control 626a1, and color temperature parameter control 626a2, respectively). The current tone value 626b1b is the default value (e.g., "80") of the tone parameter of the vivid style 634b, and the current color temperature value 626b2b is the default value (e.g., "0") of the color temperature parameter of the vivid style 634b. It is noted that the default value of the tone parameter of the standard style 634a is different from that of the vivid style 634b. Also, tone value range indication identifier 626a1c is different from tone value range indication identifier 626b1c because current tone value 626a1b and current tone value 626b1b are different (e.g., because each respective current value is located in a different range of the scale of the tone parameter). In some implementations, differences in the default values of a particular type of parameter across respective media processing styles are what define the different media processing styles.
Referring back to fig. 6E, computer system 600 continues to display standard style control 626a and standard style identification 636a and displays standard paging point 638a as selected because it has not been determined that a larger (or equal) portion of live preview 630 is displayed using vivid style 634b than is displayed using standard style 634a in fig. 6E (e.g., and/or because it has been determined that a larger (or equal) portion of live preview 630 is displayed using standard style 634a than is displayed using vivid style 634b). Returning to fig. 6F, in some embodiments, computer system 600 displays vivid style control 626b and vivid style identification 636b and displays vivid paging point 638b as selected because vivid style 634b is displayed using a particular portion of live preview 630 (e.g., a portion at/near the center of live preview 630 and/or of the display of computer system 600) and/or because vivid style 634b is currently displayed at a boundary location of live preview 630 (e.g., in/near the center of live preview 630 and/or of the display of computer system 600). At fig. 6F, computer system 600 detects a third portion of movement input 650d in the right direction over live preview 630. The third portion of movement input 650d moves in the direction opposite to the first and second portions of movement input 650d.
As shown in fig. 6F, in response to detecting the second portion of movement input 650d, the computer system 600 changes the appearance of media processing style indication identifier 602b. At fig. 6F, the appearance of media processing style indication identifier 602b changes because the currently selected media processing style has changed from standard style 634a (e.g., in fig. 6E) to vivid style 634b (e.g., in fig. 6F), and vivid style 634b has a parameter value (e.g., of the tone parameter) that is different from the corresponding parameter value of standard style 634a. As shown in fig. 6F, computer system 600 displays a line that runs around the perimeter of media processing style indication identifier 602b in a clockwise direction (e.g., with a starting point near the middle of the top portion of the perimeter (and/or boundary) of media processing style indication identifier 602b). The line is displayed to represent the current value of the tone parameter (e.g., "80" in fig. 6F). As shown in fig. 6F, the line travels around the perimeter of media processing style indication identifier 602b based on the relationship between the current value of the tone parameter and the minimum (e.g., "-100") and/or maximum (e.g., "100") values to which the tone parameter can be set. Thus, as shown in fig. 6F, the line travels approximately eighty percent of the way around the perimeter of media processing style indication identifier 602b, because the current value (e.g., "80") is eighty percent of the exemplary maximum value (e.g., "100"). Referring back to fig. 6E, media processing style indication identifier 602b does not include a line running around its perimeter because the current value of the tone parameter in fig. 6E (e.g., "0") is zero percent of the minimum/maximum value to which the tone parameter can be set. In some implementations, when the current value is a different value, the line travels around the perimeter of media processing style indication identifier 602b and/or occupies a different amount (e.g., more or less) of the perimeter.
As shown in fig. 6F, the line travels around the perimeter of media processing style indication identifier 602b in a clockwise direction because the current value of the tone parameter is positive and/or above the median (e.g., "0"). In some implementations, when the current value of the tone parameter is negative and/or below the median (e.g., "0"), the line travels around the perimeter of media processing style indication identifier 602b in a counterclockwise direction. Thus, in some implementations, the direction in which the line travels around the perimeter of media processing style indication identifier 602b indicates whether the value of the tone parameter is positive (or above the median) or negative (or below the median). In some embodiments, when the current value of the tone parameter (or another parameter) changes, the computer system 600 changes one or more other visual aspects of media processing style indication identifier 602b (e.g., in addition to the line around the perimeter), such as the color of a portion of media processing style indication identifier 602b and/or the size of a portion of media processing style indication identifier 602b. In some embodiments, computer system 600 displays a fade animation of the visual change. In some implementations, the animation includes the line around the perimeter of media processing style indication identifier 602b moving in a clockwise or counterclockwise direction from a position corresponding to a previous value of the respective parameter to a position corresponding to the current value of the respective parameter.
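Read together, the two preceding paragraphs reduce the perimeter line to two quantities: what fraction of the perimeter the line occupies and which way it runs. A minimal sketch of that mapping follows, assuming the -100 to 100 scale described above; all names are illustrative.

```swift
import Foundation

/// The perimeter line of the style indicator, reduced to its two properties.
struct PerimeterLine {
    let fractionOfPerimeter: Double  // e.g., 0.8 for a tone value of "80"
    let clockwise: Bool              // clockwise for positive values
}

/// Returns nil when the value sits at the median (no line, as in fig. 6E).
func perimeterLine(forValue value: Double,
                   minimum: Double = -100,
                   maximum: Double = 100) -> PerimeterLine? {
    let clamped = min(max(value, minimum), maximum)
    guard clamped != 0 else { return nil }
    // The fraction is the value's share of the limit on its side of zero.
    let limit = clamped > 0 ? maximum : minimum
    return PerimeterLine(fractionOfPerimeter: abs(clamped / limit),
                         clockwise: clamped > 0)
}
```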
As shown in fig. 6G, in response to detecting the third portion of movement input 650d (e.g., and while continuing to detect movement input 650d), the computer system 600 moves the portions of live preview 630 displayed using (e.g., displayed using only) standard style 634a and vivid style 634b to the right, based on the magnitude of the third portion of movement input 650d. As shown in fig. 6G, in response to detecting the third portion of movement input 650d, the computer system 600 displays the portions of live preview 630 using the respective media processing styles and/or displays one or more user interface objects (e.g., standard style identification 636a, standard paging point 638a as selected, standard style control 626a) using one or more techniques described above (e.g., with respect to figs. 6C-6E). At fig. 6G, computer system 600 detects a fourth portion of movement input 650d in the left direction over live preview 630. As shown in fig. 6G, in response to detecting the third portion of movement input 650d, the computer system 600 changes the appearance of media processing style indication identifier 602b by removing the line around the perimeter of media processing style indication identifier 602b. The line around the perimeter of media processing style indication identifier 602b is removed because the current value of the tone parameter at fig. 6G is zero and/or is zero percent of the minimum/maximum value to which the tone parameter can be set. In some implementations, in response to detecting the third portion of movement input 650d, the computer system 600 displays an animation in which the line around the perimeter of media processing style indication identifier 602b (e.g., shown in fig. 6F) shrinks in a counterclockwise direction (e.g., toward the top-center position of media processing style indication identifier 602b) until the line is no longer displayed around the perimeter of media processing style indication identifier 602b (e.g., as shown in fig. 6G).
As shown in fig. 6H, in response to detecting the fourth portion of movement input 650d (e.g., and while continuing to detect movement input 650d), the computer system 600 moves the portions of live preview 630 displayed using standard style 634a and vivid style 634b to the left, based on the magnitude of the fourth portion of movement input 650d. Because the fourth portion of movement input 650d has a greater magnitude than the third portion of movement input 650d, the computer system 600 translates the portions of live preview 630 displayed using standard style 634a and vivid style 634b across a greater distance in response to detecting the fourth portion of movement input 650d (e.g., in figs. 6G-6H) than the distance across which the application of standard style 634a and vivid style 634b was translated in response to detecting the third portion of movement input 650d (e.g., in figs. 6F-6G). As shown in fig. 6H, in response to detecting the fourth portion of movement input 650d, the computer system 600 displays a majority of live preview 630 using vivid style 634b and a smaller portion of live preview 630 using standard style 634a. As shown in fig. 6H, in response to detecting the fourth portion of movement input 650d, the computer system 600 also displays one or more user interface objects (e.g., vivid style identification 636b, vivid paging point 638b as selected, vivid style control 626b) using one or more techniques (such as those described above with respect to fig. 6F). At fig. 6H, computer system 600 detects the end (e.g., lift-off) of movement input 650d (e.g., at the location where movement input 650d is shown in fig. 6H).
As shown in fig. 6I, in response to detecting the end of movement input 650d, the computer system 600 displays the middle section using vivid style 634b. Here, the computer system 600 displays the middle section using vivid style 634b because it is determined that, when the end of movement input 650d is detected (and/or immediately before and/or immediately after the end is detected), a larger portion of live preview 630 is displayed using vivid style 634b than is displayed using another media processing style (e.g., standard style 634a). Thus, at fig. 6I, computer system 600 has made vivid style 634b the currently selected media processing style based on this determination. In some implementations, the computer system 600 displays the middle section using standard style 634a when it is determined that a larger portion of live preview 630 is displayed using standard style 634a than using another media processing style (e.g., vivid style 634b) when the end of movement input 650d is detected. In some implementations, the computer system 600 displays an animation (e.g., a flash animation) of the application of standard style 634a and vivid style 634b moving across the display from the position at which each of standard style 634a and vivid style 634b is displayed in fig. 6H to the position at which each of standard style 634a and vivid style 634b is displayed in fig. 6I.
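One way to model the lift-off behavior just described is a small drag reducer: the divider tracks each portion of the movement input while it is down, and on lift-off the preview snaps so that whichever style covers at least half of it fills the whole frame. This is a sketch under assumed names and geometry (the incoming style revealed from the divider to the trailing edge), not the disclosure's implementation.

```swift
import CoreGraphics

/// Minimal model of separation 640 during a horizontal movement input.
struct StyleSwitchDrag {
    var dividerX: CGFloat      // divider position, in points from the leading edge
    let previewWidth: CGFloat

    /// Each portion of the movement input shifts the divider by the
    /// gesture's magnitude, clamped to the preview bounds (figs. 6E-6H).
    mutating func update(byTranslation translation: CGFloat) {
        dividerX = min(max(dividerX + translation, 0), previewWidth)
    }

    /// On lift-off, snap to an edge and report whether the incoming style
    /// (assumed to fill from the divider to the trailing edge) covered at
    /// least half the preview and should be committed (fig. 6I).
    mutating func end() -> Bool {
        let commitIncoming = (previewWidth - dividerX) >= previewWidth / 2
        dividerX = commitIncoming ? 0 : previewWidth
        return commitIncoming
    }
}
```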
As shown in fig. 6I, in response to detecting the end of movement input 650d (e.g., and because it is determined that a larger portion of live preview 630 is displayed using vivid style 634b than is displayed using another media processing style when the end of movement input 650d is detected), the computer system 600 moves the application of standard style 634a to the left so as to display the left section using standard style 634a. In addition, the computer system 600 also displays the right section using gorgeous style 634c. At fig. 6I, gorgeous style 634c is indicated by a set of downward-sloping lines (e.g., lines that slope in the southeast direction when scanned from left to right). In some embodiments, the computer system 600 uses gorgeous style 634c to display the right section because it is determined that gorgeous style 634c can be selected via a movement input (e.g., a movement input such as movement input 650k1, as described below with respect to figs. 6K-6L).
As shown in fig. 6J, shortly after detecting the end of movement input 650d, the computer system 600 displays visual element 660a over the left section and visual element 660b over the right section, and reduces the visual saliency of the left and right sections. In some implementations, as part of displaying visual element 660a over the left section and visual element 660b over the right section, computer system 600 displays an animation of standard style 634a and gorgeous style 634c fading (e.g., and/or dissolving into visual element 660a and/or visual element 660b). At fig. 6J, computer system 600 detects tap input 650j on shutter control 610.
As shown in fig. 6K, in response to detecting tap input 650j, computer system 600 initiates capture of media represented by the FOV and updates media collection 612 to include a representation of the captured media (e.g., live preview 630 of fig. 6J). The representation of the captured media of fig. 6K ("the representation of fig. 6K") has vivid style 634b applied (e.g., indicated by a set of upward-sloping lines when scanned from left to right), as opposed to the representation of fig. 6B and the representation of fig. 6D. Further, vivid style 634b has been applied to the right, middle, and left sections of the representation of the captured media, although vivid style 634b was not used to display the right and left sections of live preview 630 (e.g., at/when tap input 650j was detected). At fig. 6K, vivid style 634b is applied to a larger portion (and/or all) of the visual content of the captured media than the portion of the visual content in the FOV that was displayed using vivid style 634b in live preview 630. In some implementations, in response to detecting a tap input on shutter control 610 after detecting movement input 650d, computer system 600 captures media such that the displayed representation of the captured media has the media processing style that was applied to the largest portion of live preview 630 when the tap input on shutter control 610 was detected (e.g., and has no other media processing style applied to the representation of the media, regardless of whether another media processing style was applied to a smaller portion of live preview 630 when the tap input on shutter control 610 was detected). At fig. 6K, the computer system 600 detects a movement input 650k1 in a leftward direction (e.g., the same direction as the first portion of movement input 650d in fig. 6D) or a movement input 650k2 in a rightward direction (e.g., the direction opposite movement input 650d in fig. 6D).
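The capture step applies the winning style to the entire frame, not just the portion it covered in the preview. The sketch below illustrates one plausible way to realize a style's tone and color temperature values with Core Image; the mapping from the -100...100 values described in this disclosure to filter parameters is an assumption, and this is not presented as the disclosure's actual pipeline.

```swift
import CoreImage
import CoreGraphics

/// Applies one media processing style to the *entire* captured image, even
/// though the live preview showed the style on only part of the frame.
func apply(tone: Double, temperature: Double, to image: CIImage) -> CIImage {
    var result = image

    // Assumed mapping: tone (-100...100) nudges contrast around 1.0.
    if let contrast = CIFilter(name: "CIColorControls") {
        contrast.setValue(result, forKey: kCIInputImageKey)
        contrast.setValue(1.0 + tone / 400.0, forKey: kCIInputContrastKey)
        result = contrast.outputImage ?? result
    }

    // Assumed mapping: temperature (-100...100) shifts the target white
    // point away from the 6500 K neutral.
    if let warmth = CIFilter(name: "CITemperatureAndTint") {
        warmth.setValue(result, forKey: kCIInputImageKey)
        warmth.setValue(CIVector(x: 6500, y: 0), forKey: "inputNeutral")
        warmth.setValue(CIVector(x: 6500 + CGFloat(temperature) * 10, y: 0),
                        forKey: "inputTargetNeutral")
        result = warmth.outputImage ?? result
    }
    return result
}
```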
As shown in fig. 6L, in response to detecting movement input 650k1 (e.g., and while continuing to detect movement input 650k1), the computer system 600 displays a portion of live preview 630 (e.g., including a left portion of the middle section) using vivid style 634b and displays a portion of live preview 630 (e.g., including a right portion of the middle section, which was displayed using vivid style 634b in fig. 6K, and the right section) using gorgeous style 634c (e.g., using one or more techniques as discussed above with respect to figs. 6C-6F). At fig. 6L, in response to detecting movement input 650k1, the computer system 600 ceases to display vivid style identification 636b (e.g., "vivid") and displays gorgeous style identification 636c (e.g., "gorgeous") (e.g., at the location where vivid style identification 636b was previously displayed). In addition, the computer system 600 also updates paging points 638 to indicate that gorgeous paging point 638c (e.g., open paging point) is selected and vivid paging point 638b (e.g., solid/closed paging point) is not selected. Here, the computer system 600 displays gorgeous style identification 636c and displays gorgeous paging point 638c as selected because it has been determined that a larger (or equal) portion of live preview 630 is displayed using gorgeous style 634c than is displayed using vivid style 634b, and/or that gorgeous style 634c should be set as the currently selected media processing style (e.g., using one or more techniques as described above with respect to detecting movement input 650d in figs. 6H-6I). Because of this determination, computer system 600 also replaces vivid style control 626b with gorgeous style control 626c, as shown in fig. 6L. Gorgeous style control 626c includes control 628, tone parameter control 626c1 (e.g., for controlling the tone parameter of gorgeous style 634c), and color temperature parameter control 626c2 (e.g., for controlling the color temperature parameter of gorgeous style 634c), which are displayed using techniques similar to those described above with respect to fig. 6C (e.g., with respect to control 628, tone parameter control 626a1, and color temperature parameter control 626a2, respectively). Current tone value 626c1b is the default value (e.g., "50") of the tone parameter of gorgeous style 634c, and current color temperature value 626c2b is the default value (e.g., "70") of the color temperature parameter of gorgeous style 634c. Notably, the default values of the tone parameter and the color temperature parameter of gorgeous style 634c are different from those of standard style 634a (e.g., tone: 0; color temperature: 0, as shown in fig. 6C) and vivid style 634b (e.g., tone: 80; color temperature: 0, as shown in fig. 6F). This means that the predefined gorgeous style 634c is different from the predefined standard style 634a and the predefined vivid style 634b. In some implementations, at fig. 6K, in response to movement input 650k2 in a rightward direction, computer system 600 displays one of the user interfaces of figs. 6D-6H (e.g., where the displayed user interface depends on the magnitude of movement input 650k2). At fig. 6L, computer system 600 detects the end (e.g., lift-off) of movement input 650k1 (e.g., at the location where movement input 650k1 is shown in fig. 6L).
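The default values called out above are enough to distinguish the predefined styles from one another. A small value type makes the point literal: as the passage notes, differing defaults are what define distinct styles. The struct and constant names below are illustrative.

```swift
/// Default parameter values read off figs. 6C, 6F, and 6L.
struct StyleParameters: Equatable {
    var tone: Int
    var colorTemperature: Int
}

let standardDefaults = StyleParameters(tone: 0,  colorTemperature: 0)   // fig. 6C
let vividDefaults    = StyleParameters(tone: 80, colorTemperature: 0)   // fig. 6F
let gorgeousDefaults = StyleParameters(tone: 50, colorTemperature: 70)  // fig. 6L

// Differing defaults imply distinct predefined styles.
assert(standardDefaults != vividDefaults && vividDefaults != gorgeousDefaults)
```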
As shown in fig. 6L, in response to detecting movement input 650k1, the computer system 600 changes the appearance of media processing style indication identifier 602b. In particular, computer system 600 changes two visual aspects of media processing style indication identifier 602b. At fig. 6L, computer system 600 changes the two visual aspects of media processing style indication identifier 602b such that the change in each respective visual aspect represents the change in (or current value of) a respective parameter. Although fig. 6L shows the computer system 600 changing the line around the perimeter of media processing style indication identifier 602b based on the change in the current value of the tone parameter and changing the color of media processing style indication identifier 602b based on the change in the current value of the color temperature parameter, the manner in which media processing style indication identifier 602b is changed in fig. 6L is merely exemplary. In some embodiments, computer system 600 changes different visual aspects of media processing style indication identifier 602b based on the current values of the color temperature parameter, the tone parameter, and/or different parameters. In some embodiments, computer system 600 changes the line around the perimeter of media processing style indication identifier 602b based on a change in the current value of the color temperature parameter and/or changes the color of media processing style indication identifier 602b based on the current value of the tone parameter.
As shown in fig. 6L, because the current value of the tone parameter in fig. 6L (e.g., "50") is less than the previous value of the tone parameter in fig. 6K (e.g., "80"), computer system 600 updates the line (e.g., the first visual aspect) around the perimeter of media processing style indication identifier 602b (e.g., using one or more techniques as discussed above with respect to figs. 6F-6G) to occupy less of the perimeter of media processing style indication identifier 602b (e.g., shrinks the line in a counterclockwise direction). However, because the current value of the tone parameter in fig. 6L (e.g., "50") is positive, as was the previous value of the tone parameter in fig. 6K (e.g., "80"), computer system 600 continues to display the line around media processing style indication identifier 602b as traveling in a clockwise direction (e.g., oriented such that the line appears to travel in a clockwise direction). As shown in fig. 6L, computer system 600 also changes the color (e.g., the second visual aspect) of media processing style indication identifier 602b. Here, the computer system 600 changes the color of media processing style indication identifier 602b to represent a current value (e.g., "70") of the color temperature parameter that is different from the previous value (e.g., "0") of the color temperature parameter. As shown in fig. 6L, the color of media processing style indication identifier 602b in fig. 6L is a darker gray than the color of media processing style indication identifier 602b in fig. 6K. As the current value of the color temperature parameter increases, the computer system 600 adds more dark gray to the color of media processing style indication identifier 602b, wherein the amount of dark gray corresponds approximately to the percentage that the current value of the color temperature parameter (e.g., "70") is of the maximum value (e.g., "100") and/or minimum value of the color temperature parameter. Thus, as shown in fig. 6L, computer system 600 displays media processing style indication identifier 602b with seventy percent of the maximum amount of dark gray. In some embodiments, as the value of the color temperature parameter increases and/or decreases, the computer system 600 darkens the color of media processing style indication identifier 602b. In some implementations, as the current value increases above the median (e.g., "0"), the computer system 600 increases the amount of a first color (e.g., red) that constitutes the color of media processing style indication identifier 602b. In some implementations, as the value decreases toward the median (e.g., "0") (e.g., between the maximum value and the median), the computer system 600 decreases the amount of the first color (e.g., red) that constitutes the color of media processing style indication identifier 602b. In some implementations, as the current value decreases below the median (e.g., "0"), the computer system 600 increases the amount of a second color (e.g., blue) that constitutes the color of media processing style indication identifier 602b. In some embodiments, as the value increases toward the median (e.g., "0") (e.g., between the minimum value and the median), the computer system 600 decreases the amount of the second color (e.g., blue) that constitutes the color of media processing style indication identifier 602b.
In some implementations, the computer system 600 changes the color of the line around the perimeter of the media processing style indication identifier 602b based on the change in the current value of the color temperature parameter.
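The color-temperature feedback described across figs. 6L and 7F amounts to mixing an amount of one tint above the median and another below it, in proportion to the value's distance from the median. The following sketch captures that proportionality under the red/blue reading given as one example above; the names (and whether the tints are grays or hues) are assumptions.

```swift
import Foundation

/// Tint applied to the style indicator for a color temperature value.
enum IndicatorTint: Equatable {
    case warm(amount: Double)   // e.g., red, above the median
    case cool(amount: Double)   // e.g., blue, below the median
    case neutral                // at the median, no tint
}

func indicatorTint(forColorTemperature value: Double,
                   median: Double = 0,
                   maximum: Double = 100,
                   minimum: Double = -100) -> IndicatorTint {
    if value > median {
        return .warm(amount: (value - median) / (maximum - median))
    } else if value < median {
        return .cool(amount: (median - value) / (median - minimum))
    }
    return .neutral
}

// E.g., a color temperature of "70" yields .warm(amount: 0.7), matching the
// seventy-percent tint described for fig. 6L.
```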
As shown in fig. 6M, in response to detecting the end of movement input 650k1, the computer system 600 displays the middle section using gorgeous style 634c, the left section using vivid style 634b, and the right section using antique style 634d. The computer system 600 displays the middle section using gorgeous style 634c, the left section using vivid style 634b, and the right section using antique style 634d because it is determined that gorgeous style 634c should be set as the currently selected media processing style (e.g., using one or more techniques as described above with respect to detecting the end of movement input 650d in figs. 6H-6I). At fig. 6M, computer system 600 detects tap input 650m on mode and setting switch 616.
As shown in fig. 6N, in response to detecting tap input 650m, computer system 600 ceases to display gorgeous style identification 636c and paging points 638 and displays zoom control 622 (e.g., at a location where one or more of gorgeous style identification 636c and paging points 638 were previously displayed). As shown in fig. 6N, in response to detecting tap input 650m, the computer system also ceases to display gorgeous style control 626c and displays camera mode control 620 (e.g., at the location where gorgeous style control 626c was previously displayed in fig. 6M). In response to detecting tap input 650m, the computer system 600 updates the display of media processing style indication identifier 602b to an inactive state. Further, in response to detecting tap input 650m, computer system 600 continues to display a portion of the representation using gorgeous style 634c (e.g., the style that was selected in fig. 6M and that is displayed in the middle section when computer system 600 is not detecting an input). However, in response to detecting tap input 650m, computer system 600 ceases to display the right and left sections using a media processing style different from gorgeous style 634c and/or with visual elements. In some implementations, the computer system 600 ceases to display the right and left sections using media processing styles different from gorgeous style 634c and/or with visual elements because, at fig. 6N, a movement input on live preview 630 will not result in portions of live preview 630 being displayed using different media processing styles (e.g., will not result in a change in which media processing style is used). At fig. 6N, the computer system detects a movement input 650n1 in the left direction over camera mode control 620 or a movement input 650n2 in the left direction over live preview 630.
At fig. 6O, in response to detecting movement input 650n1 or 650n2, the computer system 600 transitions from operating in the photo capture mode to operating in the portrait capture mode. As shown in fig. 6O, in response to detecting movement input 650n1 or 650n2, computer system 600 moves camera mode control 620 to the left and displays portrait mode control 620d as selected (e.g., with portrait mode control 620d bolded). When operating in the portrait mode, computer system 600 maintains the display of at least a portion of live preview 630 using gorgeous style 634c (e.g., selected in fig. 6M). Thus, a media processing style can remain applied to a representation of media (e.g., live preview 630) while computer system 600 is configured to capture other types of media (e.g., photo media, video media, portrait media, and/or panoramic media). In some embodiments, in response to detecting movement input 650n2 while media processing style indication identifier 602b is displayed as active (and/or while the style user interface objects are displayed), the computer system 600 does not transition to operating in a different capture mode (e.g., the computer system 600 continues to operate in the same capture mode in which it was operating before movement input 650n2 was detected). In some embodiments, in response to detecting movement input 650n2 while media processing style indication identifier 602b is in an inactive state, the computer system 600 transitions to operating in a different capture mode (e.g., as shown in figs. 6N-6O). In some embodiments, computer system 600 transitions to operating in the portrait capture mode in response to detecting movement input 650n1 regardless of whether media processing style indication identifier 602b is in an active state or an inactive state. Thus, in some implementations, the computer system 600 can respond differently to a movement input based on the location of the movement input and whether the computer system is currently displaying the set of style user interface objects. As shown in fig. 6O, in response to detecting movement input 650n1 or 650n2, the computer system 600 also displays an indication identifier for the portrait mode (e.g., aperture indication identifier 602d) in indication identifier area 602 and displays controls for the portrait mode (e.g., lighting effect control 678, zoom control 622b) in control area 606, which are not displayed when the computer system 600 is operating in the photo capture mode in fig. 6N. Further, in response to detecting movement input 650n1 or 650n2, the computer system 600 ceases to display an indication identifier for the photo capture mode (e.g., animated image indication identifier 602c) in indication identifier area 602 and ceases to display controls for the photo mode (e.g., zoom control 622a and zoom control 622c) in control area 606, which are displayed when the computer system 600 operates in the photo capture mode in fig. 6N. In some implementations, in response to detecting a movement input on live preview 630 at fig. 6N, computer system 600 transitions from operating in the photo capture mode to operating in a different capture mode (e.g., using techniques similar to those described above with respect to movement input 650n1 or 650n2) because the computer system is not displaying the style user interface and/or the plurality of selectable user interface objects for controlling the media processing style.
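The routing just described (the same gesture producing different outcomes depending on where it lands and whether the style chooser is active) can be summarized as a two-input decision. A sketch, with illustrative names:

```swift
/// Routing of a horizontal movement input, per figs. 6N-6O: a swipe over
/// the camera mode control always switches capture modes, while a swipe
/// over the live preview switches capture modes only when the style
/// chooser is inactive.
enum SwipeLocation { case cameraModeControl, livePreview }
enum SwipeAction { case switchCaptureMode, switchMediaProcessingStyle }

func action(for location: SwipeLocation, styleChooserActive: Bool) -> SwipeAction {
    switch location {
    case .cameraModeControl:
        return .switchCaptureMode
    case .livePreview:
        return styleChooserActive ? .switchMediaProcessingStyle : .switchCaptureMode
    }
}
```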
In some implementations, in response to detecting a movement input (e.g., at a location similar to that of movement input 650n1 or movement input 650n2), computer system 600 transitions from operating in the portrait capture mode to operating in a different capture mode and maintains (or simultaneously maintains) the display of at least a portion of live preview 630 using the currently selected style (e.g., gorgeous style 634c, selected in fig. 6M). In some implementations, in response to detecting a movement input that has a particular magnitude (e.g., a greater magnitude) and/or is in a different direction than movement input 650n1 or movement input 650n2, the computer system 600 transitions from operating in the photo mode to operating in a mode different from the portrait mode (e.g., a panoramic mode and/or a video mode) and maintains (or simultaneously maintains) the display of at least a portion of live preview 630 using the currently selected style (e.g., gorgeous style 634c, selected in fig. 6M). In some embodiments, in response to detecting a request to capture media, computer system 600 captures portrait media and applies the currently selected media processing style (e.g., gorgeous style 634c) to the captured portrait media. Thus, in some embodiments, computer system 600 can apply the currently selected media processing style to different types of media (e.g., portrait media at fig. 6O and photo media at fig. 6C). At fig. 6O, the computer system detects a tap input 650o on media processing style indication identifier 602b.
As shown in fig. 6P, in response to detecting tap input 650o, computer system 600 redisplays one or more style user interface objects, including gorgeous style identification 636c, paging points 638, gorgeous style control 626c, and visual elements 660a and 660b (e.g., using one or more techniques as described above with respect to fig. 6M), while continuing to operate in the portrait mode. In some implementations, in response to detecting a movement input on live preview 630, computer system 600 displays different portions of live preview 630 using different media processing styles (e.g., using one or more of the techniques described above with respect to figs. 6C-6P) while continuing to operate in the portrait mode. Thus, in some embodiments, different media processing styles can be selected while computer system 600 is operating in different capture modes. In some embodiments, in response to detecting a tap input on shutter control 610, the computer system 600 captures portrait media, where a representation of the portrait media is displayed using gorgeous style 634c (e.g., because gorgeous style 634c is selected in fig. 6P). In some embodiments, in response to detecting an input on aperture indication identifier 602d, the computer system 600 redisplays the user interface of fig. 6O (e.g., ceases to display the style user interface and/or the plurality of selectable user interface objects for controlling the media processing style). At fig. 6P, computer system 600 detects tap input 650p on media collection 612.
As shown in fig. 6Q, in response to detecting tap input 650p, computer system 600 displays a media viewer user interface that includes a control area 670, a media viewer area 672, and a control area 674. Control area 670 includes a return control 670a, a current time 670b, and a media gallery control 670c. Control area 674 includes multiple controls and thumbnail representations of media 676, including thumbnail representations 676a through 676d. Thumbnail representations 676a through 676d were previously shown in figs. 6A through 6Q as part of media collection 612. Media viewer area 672 includes a media representation 680d. As shown in fig. 6Q, media representation 680d is a representation of the media captured in response to detecting tap input 650j. As shown in fig. 6Q, media representation 680d is displayed using vivid style 634b, which was the currently selected media processing style when the media corresponding to media representation 680d was captured. At fig. 6Q, computer system 600 detects tap input 650q on return control 670a.
As shown in fig. 6R, in response to detecting tap input 650q, computer system 600 redisplays the user interface of fig. 6O, with live preview 630 at fig. 6R displayed using gorgeous style 634c. At fig. 6R, live preview 630 is displayed using gorgeous style 634c because computer system 600 has kept gorgeous style 634c as the currently selected media processing style even though computer system 600 navigated from the camera application to the media viewer application. Thus, in some embodiments, the computer system 600 maintains the currently selected media processing style between sessions of using the camera application. In some embodiments, the computer system 600 maintains gorgeous style 634c as the currently selected media processing style until a new media processing style is selected and/or gorgeous style 634c is modified (e.g., as discussed below with respect to figs. 7A-7X). In some embodiments, in response to detecting tap input 650q, computer system 600 redisplays the user interface of fig. 6P instead of the user interface of fig. 6O, thereby displaying the style user interface and/or the selectable user interface objects for controlling media processing styles. At fig. 6R, computer system 600 detects tap input 650r on media collection 612.
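Persisting the selected style across camera sessions, as figs. 6R and 6V illustrate, only requires writing the selection somewhere durable. Below is a minimal sketch using UserDefaults; the key and the fallback to "Standard" are assumptions, since the disclosure does not specify a storage mechanism.

```swift
import Foundation

/// Hypothetical key under which the selection is persisted.
let selectedStyleKey = "currentlySelectedMediaProcessingStyle"

/// Called when a new style becomes the currently selected style.
func saveSelectedStyle(_ name: String) {
    UserDefaults.standard.set(name, forKey: selectedStyleKey)
}

/// Called when the camera application is reopened.
func loadSelectedStyle() -> String {
    UserDefaults.standard.string(forKey: selectedStyleKey) ?? "Standard"
}
```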
As shown in fig. 6S, in response to detecting tap input 650r, computer system 600 redisplays the user interface of fig. 6Q, including media representation 680d. At fig. 6S, computer system 600 detects movement input 650s on media representation 680d. As shown in fig. 6T, in response to detecting movement input 650s, the computer system 600 replaces media representation 680d with media representation 680c. Media representation 680c is displayed using standard style 634a because it is a representation of media that was captured in response to detecting input 650c while standard style 634a was the currently selected media processing style in fig. 6C. In some implementations, the media viewer user interface includes one or more options to change the media processing style applied to media that has already been captured (such as the media represented by media representations 680c and 680d). Thus, in some embodiments, the computer system applies a different media processing style to previously captured media, even though that media was not originally captured while the different media processing style was the currently selected media processing style. At fig. 6T, computer system 600 detects tap input 650t on media gallery control 670c.
As shown in fig. 6U, in response to detecting tap input 650t, computer system 600 displays a media gallery user interface. The media gallery user interface includes a return control 686 and representations of media that have been captured using various media processing styles (e.g., standard style 634a (e.g., represented by a pattern of horizontal lines), vivid style 634b (e.g., represented by a pattern of upward-sloping diagonal lines), gorgeous style 634c (e.g., represented by a pattern of downward-sloping diagonal lines), and antique style 634d (e.g., represented by a pattern of vertical lines)). In some embodiments, the media represented by the representations of media included in the media gallery user interface are different types of media (e.g., still photo media, portrait media, video media, panoramic media, slow-motion media, etc.). In some embodiments, the media represented by the representations of media included in the media gallery user interface were captured while different media processing styles were selected and/or while computer system 600 was configured to operate in different capture modes. At fig. 6U, computer system 600 detects tap input 650u on return control 686.
As shown in fig. 6V, in response to detecting tap input 650u, the computer system 600 ceases to display the media gallery user interface (e.g., of fig. 6U) and redisplays the camera user interface of fig. 6R, with live preview 630 at fig. 6R displayed using gorgeous style 634c. At fig. 6V, computer system 600 detects tap input 650v on original capture indication identifier 602e. As shown in fig. 6W, in response to detecting tap input 650v, computer system 600 ceases to display media processing style indication identifier 602b and stops applying the media processing style to a portion of live preview 630. As shown in fig. 6W, in response to detecting tap input 650v, computer system 600 slides original capture indication identifier 602e to the left, to the location where media processing style indication identifier 602b was previously displayed. At fig. 6W, in response to detecting tap input 650v, computer system 600 transitions from being configured to store and/or capture media in a non-raw media format to being configured to store and/or capture media in a raw format, and displays original capture control 602e in an active state. In some implementations (as discussed above), the computer system 600 does not apply the selected media processing style to media stored in the raw format. Thus, in some embodiments, computer system 600 cannot be configured to apply a media processing style to captured media while it is configured to store and/or capture media in the raw format. At fig. 6W, computer system 600 detects swipe-up input 650w at a location on the camera user interface (e.g., on and/or below one or more camera mode controls 620).
As shown in fig. 6X, in response to detecting swipe-up input 650w, the computer system 600 replaces camera mode control 620 of fig. 6W with camera settings controls 688. Camera settings controls 688 include: a flash setting control 688a that, when selected, causes the computer system 600 to display one or more options for adjusting the flash mode (e.g., turning the flash mode on and/or off); media processing style control 688b; an exposure compensation control 688f that, when selected, causes the computer system 600 to display one or more options (e.g., a slider) for adjusting the exposure compensation value; a timer control 688g that, when selected, causes the computer system 600 to display one or more options for adjusting the duration of a timer; a filter control 688h that, when selected, causes the computer system 600 to display one or more options for adjusting a filter applied to media; and an aperture control 688i that, when selected, causes the computer system 600 to display one or more options for adjusting the aperture value. In particular, the camera settings controls displayed in fig. 6X reflect some of the camera settings that are available while computer system 600 is operating in the portrait mode (e.g., as shown in fig. 6W with "portrait" bolded). In some embodiments, when computer system 600 is operating in a different camera mode, one or more other camera setting controls and/or one or more of the same camera setting controls are displayed in response to detecting swipe-up input 650w. At fig. 6X, computer system 600 detects tap input 650x1 on media processing style control 688b.
As shown in fig. 6Y, in response to detecting tap input 650x1, the computer system 600 displays original capture control 602e in an inactive state (and/or the computer system is configured to capture non-raw media and not configured to capture raw media) and redisplays the camera user interface of fig. 6L, wherein a portion of live preview 630 is displayed using gorgeous style 634c (e.g., along with one or more other portions of live preview 630 displayed using other media processing styles) and gorgeous style control 626c is displayed. In some implementations, in response to detecting tap input 650x1, the computer system 600 redisplays the camera user interface of fig. 6R, with live preview 630 displayed using gorgeous style 634c. In some embodiments, in response to detecting tap input 650x2 on original capture control 602e in fig. 6X, computer system 600 redisplays the camera user interface of fig. 6L or the camera user interface of fig. 6R. In some embodiments, in response to detecting tap input 650x2 on original capture control 602e in fig. 6X, computer system 600 displays original capture control 602e in an inactive state (and/or the computer system is configured to capture non-raw media and not configured to capture raw media) without applying a media processing style to a portion of live preview 630 (e.g., continues to display live preview 630 of fig. 6X without a media processing style applied).
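Figs. 6V-6Y describe raw capture and media processing styles as mutually exclusive, with the previously selected style reapplied when raw capture is turned back off. One way to model that invariant, with illustrative state and property names:

```swift
/// Mutual exclusion between raw capture and media processing styles.
struct CaptureConfiguration {
    private(set) var rawCaptureEnabled = false
    private(set) var appliedStyle: String? = "Gorgeous"
    private var savedStyle: String?

    /// Corresponds to activating original capture control 602e (fig. 6W).
    mutating func enableRawCapture() {
        savedStyle = appliedStyle
        appliedStyle = nil          // styles are not applied to raw media
        rawCaptureEnabled = true
    }

    /// Corresponds to reopening the style controls (fig. 6Y).
    mutating func showStyleControls() {
        rawCaptureEnabled = false   // raw control returns to an inactive state
        appliedStyle = savedStyle   // the previously selected style reapplies
    }
}
```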
Fig. 7A-7X illustrate an exemplary user interface for editing media processing styles using a computer system, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 9 and 10A-10B.
Fig. 7A illustrates computer system 600 displaying a camera user interface (e.g., using one or more techniques as described above with respect to fig. 6C). In particular, computer system 600 is displaying media processing style indication identifier 602b in an active state, displaying the middle section using standard style 634a, and displaying the right and left sections without using a media processing style (e.g., using one or more techniques as described above with respect to fig. 6C). Further, computer system 600 displays the right section with visual element 660b, paging points 638 with standard paging point 638a selected, and standard style control 626a, which includes control 628, tone parameter control 626a1, and color temperature parameter control 626a2 (e.g., using one or more techniques as described above with respect to fig. 6C). As shown in fig. 7A, computer system 600 is also displaying tone parameter control 626a1, which includes tone parameter identification 626a1a, current tone value 626a1b, and tone value range indication identifier 626a1c, and color temperature parameter control 626a2, which includes color temperature parameter identification 626a2a, current color temperature value 626a2b, and color temperature value range indication identifier 626a2c (e.g., using one or more techniques described above with respect to fig. 6C). At fig. 7A, computer system 600 detects a portion of movement input 750a on tone parameter control 626a1.
As shown in fig. 7B, in response to detecting a portion of movement input 750a at fig. 7A (e.g., and while continuing to detect movement input 750a), the computer system 600 expands tone parameter control 626a1 and ceases to display color temperature parameter control 626a2. Specifically, at fig. 7B, computer system 600 expands tone parameter control 626a1 and/or tone value range indication identifier 626a1c in line (e.g., across the location and/or along the line in which tone parameter control 626a1 is displayed in fig. 7B). When expanding tone parameter control 626a1, the computer system 600 increases the size of tone value range indication identifier 626a1c such that the tick marks of tone value range indication identifier 626a1c of fig. 7B are larger and farther apart than the tick marks of tone value range indication identifier 626a1c of fig. 7A. Further, the tick marks of tone value range indication identifier 626a1c of fig. 7B represent more values than the tick marks of tone value range indication identifier 626a1c of fig. 7A (e.g., tone value range indication identifier 626a1c of fig. 7B has more tick marks than tone value range indication identifier 626a1c of fig. 7A). In other words, in response to detecting a portion of movement input 750a at fig. 7A, the computer system 600 enlarges tone parameter control 626a1 and/or tone value range indication identifier 626a1c (in some embodiments, this makes it easier for the user to change the value of tone parameter control 626a1). As shown in fig. 7B, in response to detecting a portion of movement input 750a at fig. 7A, the computer system 600 displays current tone value 626a1b of fig. 7B (e.g., "0") at a different location on the display than current tone value 626a1b of fig. 7A. Although computer system 600 has moved current tone value 626a1b of fig. 7B, computer system 600 continues to display current tone value 626a1b of fig. 7B at the center position of tone value range indication identifier 626a1c (e.g., as current tone value 626a1b of fig. 7A was displayed). Further, in response to detecting a portion of movement input 750a at fig. 7A, the computer system 600 moves tone parameter identification 626a1a to the right of control region 606. At fig. 7B, computer system 600 detects another portion of movement input 750a on tone value range indication identifier 626a1c (e.g., while continuing to detect movement input 750a). This portion of movement input 750a moves in the left direction. In some embodiments, the portion of movement input 750a detected at fig. 7A is a tap input and the portion of movement input 750a detected at fig. 7B is a movement input (e.g., in some embodiments, a user can tap to expand a control and then provide an input to adjust the current value of a parameter of the control). In some implementations, the portion of movement input 750a detected at fig. 7A and the portion of movement input 750a detected at fig. 7B are independent inputs that are detected separately. In some implementations, in response to receiving the portion of movement input 750a detected at fig. 7A (e.g., a tap input), the computer system 600 expands tone parameter control 626a1 regardless of whether the portion of movement input 750a detected at fig. 7A continues to be detected.
In some embodiments, in response to detecting a portion of movement input 750a as a tap input, the computer system 600 expands tone parameter control 626a1 and, if no additional input is detected on the expanded tone parameter control 626a1 and/or on the camera user interface for a threshold period of time (e.g., 5, 20, 30, 40, or 75 seconds), contracts tone parameter control 626a1 and redisplays color temperature parameter control 626a2.
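The expanded control of figs. 7B-7C is, in effect, a scrubbable scale: horizontal travel maps to a value change, clamped to the parameter's range. Below is a sketch of that mapping; `pointsPerUnit` is an assumed constant, and the sign convention follows fig. 7C, where leftward movement increases the value.

```swift
import CoreGraphics

/// Maps a horizontal drag over the expanded parameter control to a new
/// current value (figs. 7B-7C).
func updatedValue(current: Double,
                  dragTranslation: CGFloat,
                  pointsPerUnit: CGFloat = 2.0,
                  range: ClosedRange<Double> = -100...100) -> Double {
    // Dragging left (negative translation) moves the tick marks left and
    // increases the value, as when 626a1b goes from "0" to "100" in fig. 7C.
    let delta = Double(-dragTranslation / pointsPerUnit)
    return min(max(current + delta, range.lowerBound), range.upperBound)
}
```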
As shown in fig. 7C, in response to detecting the portion of movement input 750a at fig. 7B (e.g., movement in the left direction), the computer system 600 moves the tick marks of tone value range indication identifier 626a1c to the left (e.g., based on the magnitude of the portion of movement input 750a detected at fig. 7B) and updates current tone value 626a1b from "0" (e.g., in fig. 7B) to "100" (e.g., in fig. 7C). While updating current tone value 626a1b from "0" (e.g., in fig. 7B) to "100" (e.g., in fig. 7C) based on the movement characteristics (e.g., speed, acceleration, and/or velocity) of the portion of movement input 750a detected at fig. 7B, the computer system 600 updates the appearance of media processing style indication identifier 602b. At fig. 7C, computer system 600 updates the appearance of media processing style indication identifier 602b by increasing the size of the line around the perimeter of media processing style indication identifier 602b in a clockwise direction, based on the movement characteristics of the portion of movement input 750a detected at fig. 7B. Notably, although the current value (e.g., "100") is the maximum value to which the tone parameter can be set, the computer system 600 does not have the line completely encompass the perimeter of media processing style indication identifier 602b. The line does not completely surround the perimeter of media processing style indication identifier 602b (e.g., a gap is displayed near the top of the perimeter of media processing style indication identifier 602b) in order to show that the line has traveled in a clockwise direction around media processing style indication identifier 602b (e.g., to represent a positive value) while the tone parameter is set to the maximum value. In some embodiments, when the current value of the tone parameter is set to the minimum value of the tone parameter (e.g., "-100"), the computer system 600 displays a gap on the other side of the top portion of media processing style indication identifier 602b (e.g., a vertical line showing the start of the line would be connected to the left portion of the line in fig. 7C, and there would be a gap between the vertical line and the right portion of the line in fig. 7C) to show that the line has traveled in a counterclockwise direction around media processing style indication identifier 602b (e.g., to represent a negative value) while the current value of the tone parameter is set to the minimum value of the tone parameter.
In addition to updating current tone value 626a1b and media processing style indication identifier 602b, the computer system 600 replaces standard style identification 636a with custom standard style identification 636aa and adds custom standard paging point 638aa to the left of standard paging point 638a in paging points 638. In other words, at fig. 7C, in response to updating current tone value 626a1b from the default value ("0") to a modified value ("100"), the computer system 600 adds a customized version of standard style 634a to the set of available styles. As shown in fig. 7C, custom standard style identification 636aa includes the words "rich contrast" because current tone value 626a1b at fig. 7C is greater than the default tone value ("0" in fig. 7B) of standard style 634a. Thus, the computer system 600 can determine the name of a custom media processing style based on how the parameters of the custom media processing style differ from the default parameters of the media processing style. Accordingly, at fig. 7C, the computer system 600 does not update the default value of the tone parameter of standard style 634a; instead, it creates a modified version of standard style 634a with the updated value (e.g., as shown in fig. 7C). Thus, in some embodiments, the user can access the modified version of standard style 634a (e.g., custom standard style 634aa) at a later time.
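The naming behavior, in which the identification is derived from how the custom values deviate from the style's defaults, can be sketched as below. The vocabulary is inferred from the "rich contrast" of fig. 7C and the "rich contrast cool" of fig. 7F, so treat the word choices as assumptions.

```swift
/// Derives a custom style's display name from its deviation from defaults.
func customStyleName(tone: Int, colorTemperature: Int,
                     defaultTone: Int = 0, defaultColorTemperature: Int = 0) -> String {
    var words: [String] = []
    if tone > defaultTone { words.append("Rich Contrast") }
    if tone < defaultTone { words.append("Low Contrast") }   // assumed wording
    if colorTemperature > defaultColorTemperature { words.append("Warm") }
    if colorTemperature < defaultColorTemperature { words.append("Cool") }
    // With no deviation, the predefined name applies.
    return words.isEmpty ? "Standard" : words.joined(separator: " ")
}

// customStyleName(tone: 100, colorTemperature: 0)   -> "Rich Contrast"      (fig. 7C)
// customStyleName(tone: 100, colorTemperature: -75) -> "Rich Contrast Cool" (fig. 7F)
```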
As shown in fig. 7C, in response to detecting the portion of movement input 750a at fig. 7B, the computer system 600 updates the middle section of live preview 630 such that the middle section is displayed using custom standard style 634aa in fig. 7C instead of standard style 634a as in fig. 7B. It should be appreciated that the computer system 600 displays custom standard style 634aa in fig. 7C with an increased amount of tone (e.g., "100") relative to the tone (e.g., "0") of standard style 634a, to reflect the change in the value of the tone parameter of the standard media processing style. For illustration purposes only, custom standard style 634aa (e.g., in fig. 7C) is shown with the same pattern (e.g., horizontal lines) as standard style 634a (e.g., in fig. 7B) to indicate that custom standard style 634aa is a modified version of standard style 634a. However, the lines of custom standard style 634aa are broken lines, rather than solid lines like the lines of standard style 634a of fig. 7B, to show that custom standard style 634aa is different from standard style 634a. At fig. 7C, computer system 600 continues to display the expanded version of tone parameter control 626a1 because computer system 600 is still detecting movement input 750a at fig. 7C. In some embodiments, computer system 600 continues to display the expanded version of tone parameter control 626a1 regardless of whether computer system 600 continues to detect movement input 750a after fig. 7B. At fig. 7C, computer system 600 detects the end (e.g., lift-off) of movement input 750a.
As shown in fig. 7D, in response to detecting the end of movement input 750a, the computer system 600 contracts (e.g., reduces the size of) tone parameter control 626a1 and redisplays color temperature parameter control 626a2. As shown in fig. 7D, tone parameter control 626a1 and color temperature parameter control 626a2 are displayed at the same positions and at the same sizes at which they were displayed in fig. 7B. At fig. 7D, computer system 600 displays the same version of color temperature parameter control 626a2 (e.g., color temperature parameter control 626a2 of fig. 7B) that was displayed before movement input 750a was detected. Accordingly, the display of color temperature parameter identification 626a2a, current color temperature value 626a2b, and color temperature value range indication identifier 626a2c is unchanged (e.g., when comparing fig. 7B and fig. 7D). However, because tone parameter control 626a1 was updated in response to detecting movement input 750a, computer system 600 displays an updated version of tone parameter control 626a1, where current tone value 626a1b of fig. 7D is different from current tone value 626a1b of fig. 7B, and tone value range indication identifier 626a1c of fig. 7D is different from tone value range indication identifier 626a1c of fig. 7B. Notably, at fig. 7D, tone value range indication identifier 626a1c includes a set of enlarged tick marks (e.g., five) that represent current tone value 626a1b (e.g., "100") relative to the scale of tone value range indication identifier 626a1c (e.g., all tick marks to the right of the center tick mark are enlarged and fully filled to represent that "100" is a value that occupies 100% of the range above zero (e.g., 0 to 100) on the scale of tone value range indication identifier 626a1c). Referring back to fig. 7B, tone value range indication identifier 626a1c does not include any enlarged and/or filled tick marks because current tone value 626a1b of fig. 7B is "0", which is not a value that occupies any of the range below or above zero on the scale of tone value range indication identifier 626a1c. Returning to fig. 7D, it should be appreciated that tone parameter control 626a1 and color temperature parameter control 626a2 represent the current values (e.g., 626a1b, 626a2b) of each respective parameter of custom standard style 634aa (e.g., as indicated by custom standard style identification 636aa remaining displayed). Thus, the one or more controls of a media processing style can also be used to adjust one or more parameters of a modified version of that media processing style.
As shown in fig. 7D, in response to detecting the end of movement input 750a, the computer system 600 displays a reset control 722. Here, the computer system 600 displays reset control 722 because the value of a parameter of the media processing style is not the default value for that media processing style (e.g., because current tone value 626a1b has changed from the default value of "0" for standard style 634a to "100"). At fig. 7D, computer system 600 detects a portion of movement input 750d on color temperature parameter control 626a2.
As shown in fig. 7E, in response to detecting a portion of movement input 750d at fig. 7D, the computer system 600 ceases to display tone parameter control 626a1 and expands color temperature parameter control 626a2 (e.g., using one or more techniques as described above with respect to expanding tone parameter control 626a1 in figs. 7B-7C). Specifically, at fig. 7E, computer system 600 expands color temperature parameter control 626a2 and/or color temperature value range indication identifier 626a2c in line. When expanding color temperature parameter control 626a2, the computer system 600 increases the size of color temperature value range indication identifier 626a2c such that the tick marks of color temperature value range indication identifier 626a2c of fig. 7E are larger and farther apart than the tick marks of color temperature value range indication identifier 626a2c of fig. 7A. Further, the tick marks of color temperature value range indication identifier 626a2c of fig. 7E represent more values than the tick marks of color temperature value range indication identifier 626a2c of fig. 7A (e.g., color temperature value range indication identifier 626a2c of fig. 7E has more tick marks than color temperature value range indication identifier 626a2c of fig. 7A). In other words, in response to detecting a portion of movement input 750d at fig. 7D, the computer system 600 enlarges color temperature parameter control 626a2 and/or color temperature value range indication identifier 626a2c (in some embodiments, this makes it easier for the user to change the value of color temperature parameter control 626a2). As shown in fig. 7E, in response to detecting movement input 750d at fig. 7D, the computer system 600 displays current color temperature value 626a2b of fig. 7E (e.g., "0") at a different location on the display than current color temperature value 626a2b of fig. 7A. Although computer system 600 has moved current color temperature value 626a2b of fig. 7E, computer system 600 continues to display current color temperature value 626a2b of fig. 7E at the center position of color temperature value range indication identifier 626a2c (e.g., as current color temperature value 626a2b of fig. 7A was displayed). Further, in response to detecting movement input 750d, the computer system 600 moves color temperature parameter identification 626a2a to the right of control region 606. At fig. 7E, computer system 600 detects another portion of movement input 750d in the right direction on color temperature value range indication identifier 626a2c (e.g., while continuing to detect movement input 750d).
As shown in fig. 7F, in response to detecting a portion of the movement input 750d at fig. 7E (e.g., movement in a rightward direction), the computer system 600 moves the tick marks of the color temperature value range indication identifier 626a2c to the right (e.g., based on the magnitude of the movement input 750d detected at fig. 7E), and updates the current color temperature value 626a2b from "0" (e.g., in fig. 7E) to "-75" (e.g., in fig. 7F). As shown in fig. 7F, in response to detecting a portion of the movement input 750d at fig. 7E, the computer system 600 updates the appearance of the media processing style indication identifier 602b by changing the color of the media processing style indication identifier 602b (e.g., using one or more techniques as discussed above with respect to fig. 6L). At fig. 7F, computer system 600 increases the amount of light gray in the color of media processing style indication identifier 602b based on the movement of a portion of movement input 750d at fig. 7E. Here, the computer system 600 increases the amount of light gray (e.g., as opposed to increasing the amount of dark gray, as discussed above with respect to fig. 6L) because the current value of the color temperature control has been reduced. As shown in fig. 7F, computer system 600 continues to display the custom standard style identification 636aa and the middle section displayed using the custom standard style 634aa. However, at fig. 7F, computer system 600 updates the custom standard style identification 636aa to include the words "rich contrast cool" instead of "rich contrast". Here, "cool" is added to the custom standard style identification 636aa because the current value of the color temperature parameter was reduced in response to detecting a portion of the movement input 750d at fig. 7E, and/or because the current value of the color temperature parameter (e.g., "-75") is less than the default value of the color temperature parameter of the standard style 634a (e.g., from which the custom standard style 634aa was created). At fig. 7F, the computer system 600 continues to display the custom standard style identification 636aa because the computer system 600 has edited the parameters of the custom standard style 634aa instead of adding a new custom standard media processing style (e.g., a custom standard media processing style that is different from the custom standard style 634aa shown in fig. 7E) to the set of available styles. Thus, at fig. 7F, the number of styles in the available style set (e.g., as indicated by the paging points 638) has remained the same, but the current color temperature value 626a2b has changed from the default value of the standard style 634a (e.g., as shown in fig. 6B). While the middle section is displayed using the custom standard style 634aa, the computer system 600 updates the middle section to reflect the change in color temperature of the custom standard media processing style and/or the change in the value of the color temperature parameter. For illustration purposes only, the custom standard style 634aa (e.g., in fig. 7F) is shown with the same pattern (e.g., horizontal lines) as the standard style 634a (e.g., in fig. 7B) to indicate that the custom standard style 634aa is a modified version of the standard style 634a. However, the lines (e.g., dashed lines) of the custom standard style 634aa of fig. 7F are different from the lines (e.g., solid lines) of the standard style 634a of fig. 7B and the lines (e.g., broken lines) of the custom standard style 634aa of fig. 7E, to show that the custom standard style 634aa of fig. 7F is different from the standard style 634a of fig. 7B and from the custom standard style 634aa of fig. 7E. In some embodiments, in response to detecting a portion of the movement input 750d at fig. 7E, the computer system 600 does not update the custom standard style 634aa of fig. 7E and instead adds an additional custom standard media processing style. In some embodiments, when displaying the additional custom standard media processing style, computer system 600 displays a paging point for the additional custom standard media processing style to the left and/or right of the custom standard paging point 638aa and replaces the custom standard style identification 636aa with the style identification of the additional custom standard media processing style. In some embodiments, computer system 600 adds the additional custom standard media processing style to the set of available styles at a location adjacent to one or more of the standard media processing styles and/or groups the additional custom standard media processing style with other standard media processing styles. At fig. 7F, computer system 600 detects the end of movement input 750d on the color temperature value range indication identifier 626a2c.
As shown in fig. 7G, in response to detecting the end of the movement input 750d, the computer system 600 redisplays the hue parameter control 626a1 and shrinks the color temperature parameter control 626a2 (e.g., using one or more of the techniques discussed above with respect to fig. 7D). Because the color temperature parameter control 626a2 was updated in response to detecting the movement input 750d, the computer system 600 displays an updated version of the color temperature parameter control 626a2, where the current color temperature value 626a2b of fig. 7G is different from the current color temperature value 626a2b of fig. 7D, and the color temperature value range indication identifier 626a2c of fig. 7G is different from the color temperature value range indication identifier 626a2c of fig. 7D. Notably, at fig. 7G, the color temperature value range indication identifier 626a2c includes a set of enlarged tick marks (e.g., 4) that represents the current color temperature value 626a2b (e.g., "-75") relative to the scale of the color temperature value range indication identifier 626a2c (e.g., 75% of the tick marks to the left of the center tick mark are enlarged and filled (e.g., because the current color temperature value 626a2b is negative) to represent that "-75" is a value occupying 75% of the range below zero (e.g., -100 to 0) on the scale of the color temperature value range indication identifier 626a2c). In some implementations, in response to detecting an input (e.g., a tap gesture) on the shutter control 610 at fig. 7G, the computer system 600 captures media and applies the custom standard style 634aa of fig. 7G (rather than the standard style 634a) to the media (e.g., when a representation of the media is displayed). At fig. 7G, computer system 600 detects a portion of movement input 750g on color temperature parameter control 626a2.
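The enlarged/filled tick marks described for figs. 7D and 7G can be modeled as a proportional mapping from the current value onto the ticks on one side of the center tick. A minimal sketch, assuming a symmetric -100 to 100 scale with five ticks per side (both assumptions are inferred from the examples above):

```swift
// Illustrative: how many tick marks to enlarge and fill for a value on a
// symmetric -100...100 scale, and on which side of the center tick.
func filledTicks(for value: Int, ticksPerSide: Int = 5) -> (count: Int, rightOfCenter: Bool) {
    let fractionOfHalfRange = Double(abs(value)) / 100.0
    let count = Int((fractionOfHalfRange * Double(ticksPerSide)).rounded())
    return (count, value >= 0)
}

filledTicks(for: 100)   // (5, true)  — all ticks right of center, as in fig. 7D
filledTicks(for: -75)   // (4, false) — ticks left of center, as in fig. 7G
filledTicks(for: 0)     // (0, true)  — no enlarged ticks, as in fig. 7B
```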
As shown in fig. 7H, in response to detecting a portion of the movement input 750g at fig. 7G, the computer system 600 stops displaying the hue parameter control 626a1 and expands the color temperature parameter control 626a2 (e.g., using one or more techniques as described above with respect to expanding the color temperature parameter control 626a2 in fig. 7E). At fig. 7H, computer system 600 detects another portion of movement input 750g in the left direction on color temperature parameter control 626a2.
As shown in fig. 7I, in response to detecting a portion of the movement input 750g at fig. 7H (e.g., movement in a leftward direction), the computer system 600 moves the tick marks of the color temperature value range indication identifier 626a2c to the left (e.g., based on the magnitude of the portion of the movement input 750g detected at fig. 7H), and updates the current color temperature value 626a2b from "-75" (e.g., in fig. 7H) to "0" (e.g., in fig. 7I). As shown in fig. 7I, the computer system 600 displays the middle section using the custom standard style 634aa of fig. 7I (e.g., the dashed lines of fig. 7I), the custom standard style identification 636aa, and the media processing style indication identifier 602b in the same manner in which the computer system 600 displayed the middle section using the custom standard style 634aa of fig. 7D (e.g., the dashed lines of fig. 7D) and the media processing style indication identifier 602b. At figs. 7D and 7I, computer system 600 displays the middle section, the custom standard style identification 636aa, and the media processing style indication identifier 602b in the same manner because the current color temperature value 626a2b of fig. 7I is the same value as the current color temperature value 626a2b of fig. 7D, and the current hue value 626a1b at fig. 7I (e.g., as shown in fig. 7J) is the same as the current hue value 626a1b of fig. 7D. At fig. 7I, computer system 600 detects the end of movement input 750g on the color temperature value range indication identifier 626a2c.
As shown in fig. 7J, in response to detecting the end of the movement input 750g, the computer system 600 redisplays the hue parameter control 626a1 and shrinks the color temperature parameter control 626a2 (e.g., using one or more of the techniques discussed above with respect to figs. 7D and 7G). In some implementations, in response to detecting an input (e.g., a tap gesture) on the shutter control 610 at fig. 7I, the computer system 600 captures media and applies the custom standard style 634aa of fig. 7I (instead of the standard style 634a and/or the custom standard style 634aa of fig. 7H) to the media (e.g., when a representation of the media is displayed). At fig. 7J, computer system 600 detects a movement input 750j in the left direction on live preview 630.
As shown in fig. 7K, in response to detecting the movement input 750j, the computer system 600 translates the set of available media processing styles to the left and displays the middle section using the standard style 634a, the left section using the custom standard style 634aa, and the right section using the vivid style 634b (e.g., using one or more techniques as described above with respect to detecting the movement input 650d in figs. 6E-6I). Notably, at fig. 7K, the computer system 600 uses the standard style 634a to display the middle section because the standard style 634a is positioned after the custom standard style 634aa (e.g., which the computer system 600 previously used to display the middle section in fig. 7I) in the set of available media processing styles. Likewise, computer system 600 uses the vivid style 634b to display the right section because the vivid style 634b is positioned after the standard style 634a in the set of available media processing styles. Thus, as shown in figs. 7I and 7K, in response to detecting an input while a custom media processing style is being used, other media processing styles in the set of available media processing styles can be used to display a portion of live preview 630. As shown in fig. 7K, in response to detecting the movement input 750j, the computer system 600 replaces the custom standard style identification 636aa of fig. 7J with the standard style identification 636a. Further, in response to detecting the movement input 750j, the computer system 600 updates the standard style control 626a such that the current hue value 626a1b is set to the default value of the standard style 634a (e.g., "0") and the current color temperature value 626a2b is set to the default value of the standard style 634a (e.g., "0") (e.g., ceasing to display the corresponding current values of the custom standard style 634aa). At fig. 7K, computer system 600 also stops displaying reset control 722 because the current values of the standard style 634a (e.g., the current hue value 626a1b and the current color temperature value 626a2b, both "0") are the default values of the standard style 634a. In some embodiments, in response to detecting a movement input in the right direction, computer system 600 redisplays the middle section using the custom standard style 634aa. At fig. 7K, computer system 600 detects a portion of movement input 750k on the hue value range indication identifier 626a1c.
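The swipe behavior in figs. 7J-7K (pan the set, select the adjacent style, and repopulate the parameter controls with that style's values) suggests a simple carousel over an ordered list of styles. Illustrative only; the data model and every name in it are assumptions:

```swift
// Illustrative: an ordered set of media processing styles. Swiping left
// selects the next style; the parameter controls then show its values.
struct MediaProcessingStyle {
    let name: String   // e.g., "Custom Standard", "Standard", "Vivid"
    var tone: Int
    var temperature: Int
}

struct StyleCarousel {
    var styles: [MediaProcessingStyle]
    var selectedIndex: Int

    var selected: MediaProcessingStyle { styles[selectedIndex] }

    mutating func swipe(left: Bool) {
        // A left swipe pans the set left, selecting the style positioned
        // after the current one; a right swipe selects the style before it.
        let next = left ? selectedIndex + 1 : selectedIndex - 1
        selectedIndex = min(max(next, 0), styles.count - 1)
    }
}

var carousel = StyleCarousel(
    styles: [MediaProcessingStyle(name: "Custom Standard", tone: 100, temperature: 0),
             MediaProcessingStyle(name: "Standard", tone: 0, temperature: 0),
             MediaProcessingStyle(name: "Vivid", tone: 80, temperature: 0)],
    selectedIndex: 0)
carousel.swipe(left: true)
carousel.selected.name   // "Standard", as in fig. 7K
```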
As shown in fig. 7L, in response to detecting a portion of the movement input 750k at fig. 7K (e.g., while the standard style 634a is selected and/or the middle section is displayed using the standard style 634a), the computer system 600 expands the hue parameter control 626a1 and stops displaying the color temperature parameter control 626a2 (e.g., using one or more techniques as described above with respect to fig. 7B). At fig. 7L, computer system 600 detects another portion of movement input 750k in the left direction on the hue value range indication identifier 626a1c.
As shown in fig. 7M, in response to detecting a portion of the movement input 750k at fig. 7L, the computer system 600 moves the tick marks of the hue value range indication identifier 626a1c to the left (e.g., based on the magnitude of the movement input 750k detected at fig. 7L), and updates the current hue value 626a1b from "0" (e.g., in fig. 7L) to "50" (e.g., in fig. 7M) (e.g., using one or more techniques as described above with respect to fig. 7C). In addition to updating the current hue value 626a1b, the computer system 600 replaces the standard style identification 636a with the custom standard style identification 636aa, displays the custom standard paging point 638aa as selected, and displays the standard paging point 638a as unselected. Notably, at fig. 7M, computer system 600 updates the current value of the hue parameter of the custom standard style 634aa and does not update the current value of the hue parameter of the standard style 634a. Thus, at fig. 7M, the computer system 600 does not change how the standard style 634a is defined (and/or does not change the current values of the standard style 634a). At fig. 7M, in response to detecting a portion of the movement input 750k at fig. 7L, the computer system 600 displays the middle section using the custom standard style 634aa. The custom standard style 634aa of fig. 7M has a reduced amount of hue (e.g., because the current hue value 626a1b of fig. 7M is lower than the previous value of the current hue value 626a1b shown in fig. 7J). For illustrative purposes only, the lines (e.g., intersecting lines) of the custom standard style 634aa of fig. 7M are different from the lines (e.g., dashed lines) of the custom standard style 634aa of fig. 7J to show that one or more parameters of the custom standard style 634aa have been changed.
As shown in fig. 7M, in response to detecting a portion of the movement input 750k at fig. 7L, the computer system 600 updates the appearance of the media processing style indication identifier 602b based on the current value of the hue parameter in fig. 7M (e.g., "50"). Thus, using one or more techniques as discussed above with respect to fig. 6L, the line around the perimeter of the media processing style indication identifier 602b is updated to extend around approximately half of the perimeter of the media processing style indication identifier 602b. As shown in fig. 7M, the custom standard style identification 636aa includes the words "rich contrast" for reasons similar to those described above with respect to the custom standard style identification 636aa of fig. 7D. When comparing fig. 7D and fig. 7M, computer system 600 displays the custom standard style identification 636aa with the same words ("rich contrast") even though the current value of the hue parameter of fig. 7D is higher than the current value of the hue parameter of fig. 7M. In some embodiments, computer system 600 displays the custom standard style identification 636aa including the word "richer" rather than "rich" (e.g., because the current value of the hue parameter of fig. 7D is higher than the current value of the hue parameter of fig. 7M, and/or the current value of the hue parameter of fig. 7D is higher than the default value of the hue parameter by a particular amount (e.g., "75")). In some embodiments, when the current value of the hue parameter is less than the default value, computer system 600 displays the custom standard style identification 636aa with the words "soft" and/or "softer". At fig. 7M, computer system 600 detects the end of movement input 750k on the hue value range indication identifier 626a1c.
As shown in fig. 7N, in response to detecting the end of the movement input 750k, the computer system 600 shrinks the hue parameter control 626a1 and redisplays the color temperature parameter control 626a2. Here, the hue parameter control 626a1 and the color temperature parameter control 626a2 indicate the current values of the hue parameter and the color temperature parameter of the custom standard style 634aa (e.g., the current hue value 626a1b is "50" and the current color temperature value 626a2b is "0") (e.g., because the custom standard style 634aa was selected and/or the custom standard style 634aa was used to display the middle section in response to detecting the movement input 750k). At fig. 7N, computer system 600 detects a movement input 750n on the hue value range indication identifier 626a1c.
As shown in fig. 7O, in response to detecting the movement input 750n at fig. 7N (e.g., while the custom standard style 634aa is selected and/or the middle section is displayed using the custom standard style 634aa), the computer system 600 expands the hue parameter control 626a1 and stops displaying the color temperature parameter control 626a2 (e.g., using one or more techniques as described above with respect to fig. 7B). At fig. 7O, computer system 600 detects another portion of movement input 750n in the right direction on the hue value range indication identifier 626a1c.
As shown in fig. 7P, in response to detecting a portion of the movement input 750n at fig. 7O, the computer system 600 moves the tick marks of the hue value range indication identifier 626a1c to the right (e.g., based on the magnitude of the movement input 750n detected at fig. 7O) and updates the current hue value 626a1b from "50" (e.g., in fig. 7O) to "0" (e.g., in fig. 7P) (e.g., using one or more techniques as described above with respect to fig. 7C). In addition to updating the current hue value 626a1b, the computer system 600 removes the custom standard style 634aa from the set of available media processing styles. The computer system 600 removes the custom standard style 634aa from the set of available media processing styles because both the current hue value 626a1b and the current color temperature value 626a2b are set to their respective default values for the standard media processing style. In addition, the computer system 600 replaces the custom standard style identification 636aa with the standard style identification 636a and removes the custom standard paging point 638aa from the paging points 638 because the custom standard style 634aa has been removed from the set of available media processing styles. Thus, when the standard style identification 636a is displayed, the computer system 600 uses the standard style 634a to display the middle section. Accordingly, the computer system 600 can remove a custom media processing style when the custom media processing style is reset to the default values of (e.g., and/or is no longer different from) a corresponding media processing style in the set of available media processing styles. In some implementations, in response to detecting a portion of the movement input 750n at fig. 7O, the computer system 600 updates the hue parameter of the custom standard style 634aa and does not remove the custom standard style 634aa from the set of available media processing styles. At fig. 7P, computer system 600 detects the end of movement input 750n on the hue value range indication identifier 626a1c.
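The removal behavior in fig. 7P (the custom style disappears once every parameter is back at the base style's default) can be sketched as follows; the model and names are assumptions, not part of this disclosure:

```swift
// Illustrative: a custom style is removed from the set of available
// styles once its parameter values match the base style's defaults.
struct Style: Equatable {
    let name: String
    var tone: Int
    var temperature: Int
}

func pruneCustomStyleIfDefault(available: inout [Style],
                               customName: String,
                               base: Style) {
    guard let index = available.firstIndex(where: { $0.name == customName }) else { return }
    let custom = available[index]
    if custom.tone == base.tone && custom.temperature == base.temperature {
        available.remove(at: index)   // e.g., custom standard style 634aa in fig. 7P
    }
}
```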
As shown in fig. 7Q, in response to detecting the end of the movement input 750n, the computer system 600 shrinks the hue parameter control 626a1 and redisplays the color temperature parameter control 626a2 (e.g., using one or more techniques as discussed above with respect to figs. 7A and 7C). At fig. 7Q, computer system 600 detects a movement input 750q on live preview 630.
As shown in fig. 7R, in response to detecting the movement input 750q, the computer system 600 translates the set of available media processing styles to the left and uses the retro style 634d to display the middle section and the gorgeous style 634c to display the left section (e.g., using one or more techniques as described above with respect to detecting the movement input 650d in figs. 6E-6I). The computer system 600 displays the right section without using a media processing style because the retro style 634d is the last (e.g., rightmost) media processing style in the set of available media processing styles. As shown in fig. 7R, in response to detecting the movement input 750q, the computer system 600 replaces the standard style identification 636a of fig. 7Q with the retro style identification 636d. Further, in response to detecting the movement input 750q, the computer system 600 replaces the standard style control 626a with the retro style control 626d. The retro style control 626d includes a hue parameter control 626d1 and a color temperature parameter control 626d2, where the current hue value 626d1b (e.g., "10") and the current color temperature value 626d2b (e.g., "50") are the default values for each respective parameter of the retro style 634d (e.g., which are different from the default values for other media processing styles in the set of available media processing styles). At fig. 7R, computer system 600 detects tap input 750r on color temperature parameter control 626d2.
As shown in fig. 7S, in response to detecting tap input 750r, computer system 600 expands the color temperature parameter control 626d2 and stops displaying the hue parameter control 626d1 (e.g., using one or more similar techniques as discussed above with respect to fig. 7B). While displaying control 628 at fig. 7S, computer system 600 detects tap input 750s on control 628. As shown in fig. 7T, in response to detecting tap input 750s, the computer system shrinks the color temperature parameter control 626d2 and redisplays the hue parameter control 626d1 (e.g., using one or more techniques as described above with respect to fig. 7D). At fig. 7T, computer system 600 detects a portion of movement input 750t on color temperature parameter control 626d2.
As shown in fig. 7U, in response to detecting a portion of the movement input 750t at fig. 7T, the computer system 600 expands the color temperature parameter control 626d2 and stops displaying the hue parameter control 626d1 (e.g., using one or more similar techniques as discussed above with respect to fig. 7B). At fig. 7U, computer system 600 detects another portion of movement input 750t in the left direction on the color temperature value range indication identifier 626d2c.
As shown in fig. 7V, in response to detecting a portion of the movement input 750t at fig. 7U, the computer system 600 moves the tick marks of the color temperature value range indication identifier 626d2c to the left and updates the current color temperature value 626d2b from "50" (e.g., in fig. 7U) to "62" (e.g., in fig. 7V). In addition to updating the current color temperature value 626d2b, the computer system 600 replaces the retro style identification 636d with the custom retro style identification 636dd and adds the custom retro paging point 638dd to the left of the retro paging point 638d in the paging points 638. In other words, at fig. 7V, in response to updating the current color temperature value 626d2b from the default value ("50") to a modified value (e.g., "62"), the computer system 600 adds a customized version of the retro style 634d (e.g., the custom retro style 634dd) to the set of available styles. Thus, at fig. 7V, the computer system 600 does not update the default value of the color temperature parameter of the retro style 634d (e.g., as shown in fig. 7X), but rather creates a modified version of the retro style 634d with the updated value. Thus, in some embodiments, the user can access the modified version of the retro style 634d (e.g., the custom retro style 634dd) at a later time.
As shown in fig. 7V, in response to detecting a portion of the movement input 750t at fig. 7U, the computer system 600 updates the middle section of the live preview 630 such that the middle section is displayed using the custom retro style 634dd in fig. 7V instead of the retro style 634d in fig. 7U. It should be appreciated that the computer system 600 displays the custom retro style 634dd in fig. 7V with a greater amount of color temperature (e.g., "62") than the color temperature of the retro style 634d (e.g., "50") to reflect the change in the value of the color temperature parameter of the retro media processing style. For illustration purposes only, the custom retro style 634dd (e.g., in fig. 7V) is shown with the same pattern (e.g., vertical lines) as the retro style 634d (e.g., in fig. 7U) to indicate that the custom retro style 634dd is a modified version of the retro style 634d. However, the lines of the custom retro style 634dd are dashed rather than solid like the lines of the retro style 634d of fig. 7U, to show that the custom retro style 634dd is different from the retro style 634d. Notably, at fig. 7V, the custom retro style identification 636dd (e.g., and the custom retro style 634dd when an input is detected) is displayed to the left of the retro style identification 636d (e.g., and the retro style 634d) rather than to the left of the standard style identification 636a (or the standard style 634a when an input is detected) (e.g., as indicated by the paging points 638). This is because the custom retro style 634dd is a modified version of the retro style 634d and not a modified version of the standard style 634a. Thus, in some embodiments, the computer system 600 groups a customized respective media processing style together with the non-customized (and/or non-modified) version of the respective media processing style.
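Figs. 7U-7V describe creating a modified copy of a style (grouped next to its base style) rather than mutating the base style's defaults. Below is a hedged sketch of that copy-on-write behavior; the data model and names are assumptions:

```swift
// Illustrative: editing a parameter of a base style creates a custom
// copy inserted immediately before the base style; the base style's
// default values are left intact.
struct MediaStyle {
    var name: String
    var tone: Int
    var temperature: Int
}

func setTemperature(_ newValue: Int, forStyleAt index: Int,
                    in styles: inout [MediaStyle]) -> Int {
    guard newValue != styles[index].temperature else { return index }
    var custom = styles[index]
    custom.name = "Custom \(custom.name)"   // e.g., "Custom Retro" in fig. 7V
    custom.temperature = newValue
    styles.insert(custom, at: index)        // grouped to the left of its base style
    return index                            // the custom copy is now the selection
}

var styles = [MediaStyle(name: "Retro", tone: 10, temperature: 50)]
let selected = setTemperature(62, forStyleAt: 0, in: &styles)
styles[selected].name   // "Custom Retro"; Retro's defaults are unchanged
```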
Further, the custom retro style identification 636dd includes the word "warm" because the computer system 600 displays the custom retro style 634dd in fig. 7V with a greater amount of color temperature (e.g., "62") than the color temperature of the retro style 634d (e.g., "50"), and/or because the current value of the color temperature parameter in fig. 7V is greater than the default value of the color temperature parameter of the retro style 634d. In addition, computer system 600 updates the appearance of the media processing style indication identifier 602b by increasing the amount of dark gray in the color of the media processing style indication identifier 602b based on the movement characteristics of a portion of the movement input 750t (e.g., using one or more similar techniques as described above with respect to figs. 6L and 7C). At fig. 7V, computer system 600 detects the end of the movement input 750t on the color temperature value range indication identifier 626d2c.
As shown in fig. 7W, in response to detecting the end of the movement input 750t, the computer system 600 redisplays the hue parameter control 626d1 and shrinks the color temperature parameter control 626d2 (e.g., using one or more techniques as discussed above with respect to fig. 7G). In response to detecting the end of the movement input 750t, the computer system 600 also displays a reset control 722. At fig. 7W, computer system 600 detects tap input 750w on reset control 722.
As shown in fig. 7W1, in response to detecting tap input 750w, computer system 600 displays a prompt 768 that includes the words "reset to retro". Here, prompt 768 includes the words "reset to retro" to indicate that confirmation (e.g., via an input) needs to be provided before the currently displayed style can be reset. Here, the word "retro" indicates the media processing style to which the currently applied media processing style (e.g., the media processing style applied to live preview 630) will be reset in response to the computer system 600 detecting a confirmation input. At fig. 7W1, computer system 600 detects tap input 750w1 on reset control 722. In some embodiments, computer system 600 detects tap input 750w1 on prompt 768 instead of reset control 722 and, in response to detecting tap input 750w1 on prompt 768, performs the functions described below with respect to detecting tap input 750w1 on reset control 722.
As shown in fig. 7X, in response to detecting tap input 750w at fig. 7W or tap input 750w1 at fig. 7W1, computer system 600 removes the custom retro style 634dd from the set of available media processing styles. Further, the computer system 600 replaces the custom retro style identification 636dd with the retro style identification 636d and removes the custom retro paging point 638dd from the paging points 638 because the custom retro style 634dd has been removed from the set of available media processing styles. Thus, when displaying the retro style identification 636d, the computer system 600 uses the retro style 634d to display the middle section. Accordingly, the computer system 600 can remove a custom media processing style when the custom media processing style is reset to the default values of (e.g., and/or is no longer different from) a corresponding media processing style in the set of available media processing styles. In some implementations, in response to detecting tap input 750w at fig. 7W or tap input 750w1 at fig. 7W1, the computer system 600 resets the parameters of the custom retro media processing style and does not remove the custom retro media processing style from the set of available media processing styles.
As shown in figs. 7A-7X above, computer system 600 displays a custom style identification (e.g., the custom standard style identification 636aa and/or the custom retro style identification 636dd) using words (and/or symbols and/or numbers) based on one or more current values of one or more parameters. In some embodiments, the custom style identification can include a first word, such as "rich", when the current value of the hue parameter is above a median value and/or a default value (e.g., "0") (e.g., and/or a range of values including the median value and/or the default value). In some embodiments, the custom style identification can include a second word, such as "soft", that is different from (e.g., an opposite and/or antonym of) the first word when the current value of the hue parameter is below the median value. In some embodiments, when the current value of the color temperature parameter is above the median value (e.g., "0"), the custom style identification can include a third word (e.g., different from the first word and the second word), such as the word "warm". In some embodiments, when the current value of the color temperature parameter is below the median value (e.g., "0"), the custom style identification can include a fourth word (e.g., different from the first, second, and third words), such as the word "cool". In some embodiments, the third word is an opposite and/or antonym of the fourth word. Thus, in some embodiments, the custom style identification can be a combination of words (such as "rich warm", "rich cool", "soft warm", or "soft cool") that indicates the current values of multiple parameters. In some embodiments, the custom style identification can include a word such as "standard" when the current values of both parameters are set to the median value. In some embodiments, when the current value of one of the parameters is set to the median value and the current value of the other parameter is not set to the median value, the custom style identification includes a word indicating the parameter that is not currently set to the median value but does not include a word indicating the parameter that is currently set to the median value: for example, "rich" or "soft" if the hue parameter is not currently set to the median value and the color temperature parameter is set to the median value, or "warm" or "cool" if the color temperature parameter is not currently set to the median value and the hue parameter is set to the median value. In some implementations, the custom style identification can include one or more words for one or more other parameters (e.g., a third parameter, a fourth parameter, a fifth parameter, etc.). Thus, when there is a third parameter (or a fourth parameter or a fifth parameter) for the media processing style, the custom style identification can include different words based on the current value of the third parameter (along with the words for the first parameter and/or the second parameter), based on whether the third parameter is above/below the median value, such as "bright" (e.g., above the median value) or "dark" (e.g., below the median value) for a brightness parameter.
In some embodiments, the custom style identification can include words that identify a media processing style having particular values of the hue parameter and particular values of the color temperature parameter, such as "vivid" for a style having a default hue value of "80" and a default color temperature value of "0" (e.g., as discussed above with respect to the vivid style 634b of fig. 6H) (e.g., and similarly for the particular default parameter values discussed above for "gorgeous" and/or "retro"). In some implementations, the custom style identification can include one or more additional words based on whether the current value of a parameter is above/below the default value for the particular media processing style, such as "vivid cool" when the current value of the color temperature parameter is below "0", "vivid warm" when the current value of the color temperature parameter is above "0", "vivid soft" when the current value of the hue parameter is below "80", "vivid rich" when the current value of the hue parameter is above "80", or a combination thereof (e.g., "vivid soft cool", "vivid soft warm", "vivid rich warm", or "vivid rich cool"). In some embodiments, custom style identifications for other media processing styles (e.g., the gorgeous style 634c of fig. 6M and/or the retro style 634d of fig. 7U) can use the same paradigm as discussed above with respect to the vivid style 634b. Furthermore, the words above for describing the current values of particular parameters are merely exemplary, and one or more other words can be used instead of the words discussed above (e.g., "warm", "cool", "soft", "rich", "dark", "bright", "standard", "gorgeous", "vivid", and/or "retro").
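The naming scheme described in the two preceding paragraphs amounts to comparing each parameter's current value against a reference value (the median for the standard style, or the base style's default otherwise) and concatenating the corresponding words. A sketch under those assumptions; the wording and threshold values are taken from the examples above, and everything else (names, signature) is hypothetical:

```swift
// Illustrative: build a custom style identification from parameter
// values relative to the base style's default values.
func customStyleName(base: String, tone: Int, toneDefault: Int,
                     temperature: Int, temperatureDefault: Int) -> String {
    var words = [base]
    if tone > toneDefault { words.append("Rich") }
    else if tone < toneDefault { words.append("Soft") }
    if temperature > temperatureDefault { words.append("Warm") }
    else if temperature < temperatureDefault { words.append("Cool") }
    return words.joined(separator: " ")
}

customStyleName(base: "Vivid", tone: 80, toneDefault: 80,
                temperature: -20, temperatureDefault: 0)   // "Vivid Cool"
customStyleName(base: "Standard", tone: 0, toneDefault: 0,
                temperature: 0, temperatureDefault: 0)     // "Standard"
```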
Figs. 8A-8C illustrate exemplary user interfaces for selecting a media processing style using a computer system, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 9 and figs. 10A-10B.
FIG. 8A illustrates computer system 600 displaying a settings user interface that includes settings 844. Settings 844 include media processing style setting 844a. At fig. 8A, computer system 600 detects tap input 850a on media processing style setting 844a. As shown in fig. 8B, in response to detecting tap input 850a, computer system 600 displays media processing style user interface 810, which includes a representation of the standard style 878a and a representation of the vivid style 878b. The representation of the standard style 878a is a sample image, and the representation of the vivid style 878b is a sample image. Each respective sample image for a respective style has the respective media processing style applied. As shown in fig. 8B, the paging points 638 indicate that there are four available styles in the available style set: the standard style 634a (e.g., corresponding to the standard paging point 638a), the vivid style 634b (e.g., corresponding to the vivid paging point 638b), the gorgeous style 634c (e.g., corresponding to the gorgeous paging point 638c), and the retro style 634d (e.g., corresponding to the retro paging point 638d), which have been discussed above. In some implementations, when the user interface 810 is displayed while a custom media processing style has been added to the set of available media processing styles, the computer system 600 displays a paging point and/or a representation of the custom media processing style.
As shown in fig. 8B, the user interface 810 includes a region 814a that includes default values for the parameters of the media processing style corresponding to the representation of the standard style 878a (e.g., the standard style 634a as discussed above) and a selection control 816a. In some implementations, in response to detecting an input on the selection control 816a, the computer system 600 sets the standard style 634a as the currently selected media processing style (e.g., using one or more techniques discussed below with respect to fig. 8C). At fig. 8B, computer system 600 detects movement input 850b.
As shown in fig. 8C, in response to detecting the movement input 850b, the computer system 600 moves the representations of the media processing styles to the left and displays the representation of the vivid style 878b between a portion of the representation of the standard style 878a and a portion of the representation of the gorgeous style 878c. Because the representation of the vivid style 878b is at a predetermined location on the display, the computer system 600 replaces the region 814a with a region 814b that includes default values for the parameters of the vivid style 634b (e.g., as described above). At fig. 8C, the computer system 600 detects tap input 850c on selection control 816b and, in response to detecting tap input 850c, sets the vivid style 634b as the currently selected media processing style. In some implementations, when the vivid style 634b is the currently selected media processing style, the computer system 600 will display representations of previously captured media, display a representation of the FOV (e.g., live preview 630), and/or capture media in the future using the vivid style 634b as the default media processing style. In some implementations, when the vivid style 634b is the currently selected media processing style, the computer system 600 detects a request to redisplay the camera user interface and, in response to detecting the request, the computer system 600 displays the live preview 630 in the camera user interface using the currently selected media processing style (e.g., the vivid style 634b). In some implementations, when the live preview 630 is displayed using the currently selected media processing style, the computer system 600 captures media and displays the media using the currently selected media processing style selected via the user interface 810 (e.g., with an input such as tap input 850c).
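The settings flow in figs. 8A-8C reduces to persisting a selected default style that the camera user interface reads back when it is redisplayed. A minimal, hypothetical sketch (none of these names come from this disclosure):

```swift
// Illustrative: the style chosen in the settings user interface becomes
// the default applied to the live preview and to future captures.
struct StyleSettings {
    private(set) var currentlySelectedStyle: String = "Standard"

    mutating func select(_ style: String) {   // e.g., tap input 850c selects "Vivid"
        currentlySelectedStyle = style
    }
}

var settings = StyleSettings()
settings.select("Vivid")
// On the next request to display the camera user interface, the live
// preview would be displayed using settings.currentlySelectedStyle.
```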
Fig. 9 is a flowchart illustrating a method for selecting a media processing style using a computer system, according to some embodiments. The method 900 is performed at a computer system (e.g., 100, 300, 500, 600) (e.g., a smartphone, desktop computer, laptop computer, and/or tablet computer) that is in communication with a display generation component (e.g., a display controller and/or a touch-sensitive display system), a first camera (e.g., a front-facing camera and/or a rear-facing camera) of one or more cameras (e.g., a dual camera, a triple camera, and/or a quad camera, on the same side or different sides of the computer system), and one or more input devices (e.g., a touch-sensitive surface and/or one or more cameras). Some operations in method 900 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 900 provides an intuitive way for selecting media processing styles using a computer system. The method reduces the cognitive burden on a user to select media processing styles using a computer system, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to select a media processing style faster and more efficiently using a computer system saves power and increases the time between battery charges.
The computer system displays (902), via the display generation component, a representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) of media (e.g., photo media and/or video media) (e.g., live media and/or a live preview (e.g., a representation corresponding to the field of view (e.g., a current field of view) of the one or more cameras that has not been stored/captured), media corresponding to a request to capture media (e.g., detection of selection of a shutter affordance (e.g., a user interface object)), and/or previously captured media (e.g., media corresponding to a representation of the field of view (e.g., a previous field of view) of the one or more cameras that has been captured, and/or a media item that has been saved and can be accessed by a user at a later time) (e.g., a thumbnail)), where a first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) is used (e.g., while operating in a camera mode) to display a first portion (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) of the representation and a second portion (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) of the representation. In some implementations, the first media processing style is one of a plurality of media processing styles (e.g., including a second media processing style and a third media processing style). In some embodiments, each of the plurality of styles has the same set of parameters (e.g., the same types of parameters), but one or more parameters have different values. In some implementations, the set of parameters is a set of media processing parameters that is applied to the visual content of the media and determines an appearance (e.g., color characteristics (e.g., color temperature, hue, brightness, saturation, chroma, and/or harmony) and/or depth parameters) of the media (e.g., affects the display of (e.g., portions of) the representation of the media) (e.g., changes one or more characteristics (e.g., color characteristics and/or depth characteristics) of the displayed representation of the media) (e.g., and the media is displayed without a second style being applied to the visual content of the media). In some embodiments, the first portion and the second portion do not overlap, the first portion does not surround a subset of the second portion, and/or the second portion does not surround a subset of the first portion. In some embodiments, the first portion and the second portion are different. In some implementations, the media processing style also affects the capture of media captured in a media capture (e.g., camera) application.
While displaying the first portion (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) of the representation and the second portion (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) of the representation using the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd), the computer system detects (904), via the one or more input devices, an input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) (e.g., a movement input/gesture (e.g., a swipe input/gesture), which, in some embodiments, is based on the rate at the end of the input/gesture and/or on movement during the input/gesture, and/or a tap input/gesture (e.g., a single-tap input/gesture and/or a multi-tap input/gesture)).
In response to detecting the input directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) (and, in some embodiments, while continuing to detect the input (and while continuing to display the representation of the media)), and in accordance with a determination that the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) is in a first direction (e.g., right, left, up, down, and/or diagonal) (and, in some embodiments, in accordance with a determination that the computer system is in a selection mode (e.g., a mode that enables the computer system to apply one or more media processing styles to media captured by the computer system after the user selects the one or more media processing styles)), the computer system displays (906), via the display generation component, the first portion (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) of the representation using a second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., a style that is applied to the visual content of the media) (e.g., without using the first media processing style and/or a third media processing style to display the first portion), while continuing to display a portion of the second portion (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) of the representation using the first media processing style (e.g., the second portion of the representation is not displayed using the second media processing style and/or the third media processing style) (e.g., without displaying a visual element corresponding to the third media processing style). In some implementations, the input includes a movement component in the first direction. In some implementations, the input is not detected at a location on the style selection user interface corresponding to and/or having the second media processing style, and/or is not detected at a location on the style selection user interface corresponding to and/or having the first media processing style. In some implementations, the input is not detected at a location on the style selection user interface corresponding to an edge and/or boundary of the second media processing style and/or the first media processing style. In some implementations, the input is detected at a location on the style selection user interface that corresponds to a center (e.g., non-boundary/edge) location of the first media processing style and/or the second media processing style.
As part of displaying, via the display generation component, the first portion of the representation using the second media processing style while continuing to display the second portion of the representation using the first media processing style, the computer system, in response to detecting a first portion of the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, where the first portion of the input has a first input magnitude (e.g., a first amount of movement in the first direction from the beginning of the input), displays (908) the first portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., without using the first media processing style) while displaying, using the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., without using the second media processing style), the second portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) and a third portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) that is between the first portion of the representation and the second portion of the representation.
As part of displaying, via the display generation component, the first portion of the representation using the second media processing style while continuing to display the second portion of the representation using the first media processing style, the computer system, after displaying the first portion of the representation using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) while displaying the second portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) and the third portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) using the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd), and in response to detecting a second portion of the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, where the second portion of the input has a second input magnitude (e.g., a second amount of movement in the first direction from the beginning of the input) that is greater than the first input magnitude, displays (910) the first portion of the representation (e.g., a portion of the middle section, left section, and/or right section) and the third portion of the representation (e.g., a portion of the middle section, left section, and/or right section) using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) while displaying the second portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) using the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd). In some embodiments, the first portion of the representation, the second portion of the representation, and the third portion of the representation do not overlap. Displaying different portions of the representation using respective media processing styles based on the magnitude of the portion of the input directed to the representation allows the user to control which portions of the representation are displayed using the respective media processing styles and provides visual feedback regarding how the respective media processing styles will affect media that may be captured, which improves visual feedback.
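The magnitude-dependent behavior of blocks (908) and (910) can be pictured as a divider whose position tracks the input's accumulated movement: positions on one side of the divider are rendered with the incoming (second) style, and positions on the other side with the current (first) style. A hedged sketch with an assumed coordinate model:

```swift
// Illustrative: which style each horizontal position of the
// representation is displayed with, given the input's magnitude.
struct StyleSplit {
    let width: Double          // width of the displayed representation
    var magnitude: Double = 0  // accumulated movement of the input

    mutating func update(inputMagnitude: Double) {
        magnitude = min(max(inputMagnitude, 0), width)
    }

    /// The first portion (x < magnitude) uses the second media processing
    /// style; the remaining portions keep the first media processing style.
    func usesSecondStyle(atX x: Double) -> Bool { x < magnitude }
}

var split = StyleSplit(width: 300)
split.update(inputMagnitude: 100)   // first input magnitude: first portion only
split.usesSecondStyle(atX: 50)      // true  — first portion, second style
split.usesSecondStyle(atX: 150)     // false — third portion, still first style
split.update(inputMagnitude: 200)   // greater magnitude: third portion as well
split.usesSecondStyle(atX: 150)     // true
```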
In some implementations, before the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) is detected, the first portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) and the second portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) are not displayed using the second media processing style.
In some embodiments, the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) is different from the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd). Displaying different portions of the representation using different respective media processing styles based on the magnitude of the portion of the input directed to the representation allows the user to control which portions of the representation are displayed using the different respective media processing styles and provides visual feedback as to how the different respective media processing styles will affect media that may be captured, which improves visual feedback.
In some implementations, in response to detecting the first portion of the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d), the second portion of the representation (e.g., a portion of the middle section, left section, and/or right section) and the third portion of the representation (e.g., a portion of the middle section, left section, and/or right section) are displayed without using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd). In some implementations, in response to detecting the second portion of the input, the first portion of the representation and the third portion of the representation are displayed without using the first media processing style.
In some embodiments, the amount of the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) to which the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (and/or the first media processing style) is applied is based on (e.g., proportional to) the amount of movement (e.g., the velocity, acceleration, and/or displacement (e.g., the distance between two points (e.g., the start of the input and the end of the input))) (and, in some embodiments, the direction) of the input directed to the representation.
In some embodiments, in response to detecting the end of the input (e.g., 650d at fig. 6H) directed to the representation (e.g., while the representation is displayed using the first media processing style and the second media processing style) (and in accordance with a determination that the input is in the first direction), and in accordance with a determination that more than a predetermined portion (e.g., 25%, 30%, 40%, 50%, 60%, and/or 75%) of the representation was displayed using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., 634b at fig. 6H) when the end of the input directed to the representation was detected (e.g., at the same time as, immediately before, and/or immediately after the end of the input was detected), the computer system displays the representation (e.g., 630) (e.g., the first portion, the second portion, and/or other portions of the representation) using the second media processing style. In some implementations, the predetermined portion of the representation is the larger portion of the representation that is displayed (e.g., currently displayed) using one respective media processing style compared to any other portion of the representation that is displayed using another media processing style. In some embodiments, in response to detecting the end of the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 650d at fig. 6H) (e.g., while the representation is displayed using the first media processing style and the second media processing style) (and in accordance with a determination that the input is in the first direction), and in accordance with a determination that less than the predetermined portion of the representation was displayed using the second media processing style when the end of the input directed to the representation was detected (e.g., at the same time as, immediately before, and/or immediately after the end of the input was detected, as discussed above), the computer system displays the representation (e.g., 630) using the first media processing style (e.g., without using the second media processing style to display a particular portion of the representation).
Automatically displaying the first portion of the representation and the second portion of the representation using the particular media processing style when the specified condition is met allows the computer system to automatically select one or more media processing styles to be applied to the representation of the media and provide visual feedback to the user regarding which media processing style was selected to be applied to the representation of the media in response to detecting the end of input directed to the representation, which performs the operation without additional user input and provides improved visual feedback when a set of conditions has been met.
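The end-of-input behavior just described is a threshold commit: whichever style covers more than the predetermined portion of the representation when the input ends becomes the style for the whole representation. A sketch, with the threshold value assumed from the example percentages above:

```swift
// Illustrative: on detecting the end of the input, commit to the second
// style only if it covers more than a predetermined portion (e.g., 50%).
func styleAfterInputEnds(secondStyleCoverage: Double,   // 0.0...1.0
                         threshold: Double = 0.5) -> String {
    secondStyleCoverage > threshold ? "second" : "first"
}

styleAfterInputEnds(secondStyleCoverage: 0.7)   // "second" — whole representation snaps
styleAfterInputEnds(secondStyleCoverage: 0.3)   // "first"  — reverts to the first style
```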
In some embodiments, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) (e.g., 650k2) directed to the representation and in accordance with a determination that the input is in a second direction (e.g., right, left, up, down, and/or diagonal) that is different from (e.g., opposite to) the first direction, the computer system displays the second portion of the representation using a third media processing style (e.g., 634a) that is applied to visual content of the media and that affects the display of the representation of the media, while continuing to display the first portion of the representation using the first media processing style (e.g., without using the second media processing style and/or the third media processing style) (e.g., and, in some embodiments, without displaying a visual element that corresponds to the second media processing style (e.g., 634b)). In some implementations, the input includes a movement component in the second direction. In some implementations, the first portion and the second portion of the representation do not move in position (e.g., continue to be displayed at the same position) on the representation of the media. In some implementations, the first media processing style, the second media processing style, and the third media processing style have the same set of parameters (e.g., the same types of parameters (e.g., as described below with respect to method 1000 and fig. 7A-7X)). In some implementations, the first media processing style, the second media processing style, and the third media processing style are different because one or more values of the set of parameters for each respective media processing style are different. In some embodiments, before the input directed to the representation is detected, the first portion includes a first object displayed using the first media processing style and/or the second portion includes a second object displayed using the first media processing style. In some implementations, in response to detecting the input directed to the representation, and while continuing to detect the input: in accordance with a determination that the input is in the first direction, the first object is displayed using the second media processing style; and in accordance with a determination that the input is in the second direction, the second object is displayed using the third media processing style (and/or the first media processing style). In some implementations, a visual element corresponding to the second media processing style and a visual element corresponding to the third media processing style are displayed before the input is detected.
Displaying the second portion of the representation using a third media processing style that is different from the first media processing style and the second media processing style, in accordance with a determination that the input is in a second direction (e.g., a direction different from the first direction), allows the user to control which portions of the representation are displayed using a media processing style other than the first media processing style and the second media processing style, which provides additional control options without cluttering the user interface.
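The first-direction and second-direction behaviors can be modeled as stepping through an ordered list of styles, with opposite input directions revealing adjacent styles on opposite sides of the currently applied one. The sketch below assumes such an ordering (the embodiments do not commit to one); the clamping at the ends of the list is likewise an assumption.

```swift
enum SwipeDirection { case first, second }   // two opposite input directions

/// Hypothetical ordered set of media processing styles.
let orderedStyles = ["Standard", "Vibrant", "Rich Contrast", "Warm"]

/// Returns the index of the style that an input in the given direction
/// begins to reveal, relative to the currently applied style. Indices are
/// clamped at the ends of the list (no style available past the last one).
func revealedStyleIndex(from current: Int, direction: SwipeDirection) -> Int {
    switch direction {
    case .first:  return min(current + 1, orderedStyles.count - 1)
    case .second: return max(current - 1, 0)
    }
}

// From "Vibrant", the first direction reveals "Rich Contrast";
// the opposite direction reveals "Standard".
print(orderedStyles[revealedStyleIndex(from: 1, direction: .first)])    // Rich Contrast
print(orderedStyles[revealedStyleIndex(from: 1, direction: .second)])   // Standard
```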
In some embodiments, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) and in accordance with a determination that the input is in the first direction (e.g., and in accordance with a determination that the end (e.g., lift-off) of the input directed to the representation is detected (or in response to detecting the end of the input directed to the representation) and/or while the representation is displayed using the first media processing style and the second media processing style), the computer system displays a visual element (e.g., 660a, 660b) corresponding to a fourth media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) that was not displayed before the input directed to the representation was detected (e.g., an indication, such as text and/or a symbol) (e.g., a user interface object, such as a border, an outline of a shape, and/or an indication that a portion of the representation can be displayed using the fourth media processing style) (e.g., a visual element that represents and/or appears to be an edge of the style and/or an edge of a frame line) (e.g., without displaying a visual element corresponding to a fifth media processing style) (e.g., while displaying the first portion of the representation and the second portion of the representation using the second media processing style). In some embodiments, the visual element corresponding to the fourth media processing style is displayed at a location/region of the style selection user interface (e.g., the right and/or left edge, in a direction opposite to the first direction). In some embodiments, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) and in accordance with a determination that the input is in a third direction (e.g., right, left, up, down, and/or diagonal) that is different from (e.g., opposite to) the first direction (e.g., and in accordance with a determination that the end (e.g., lift-off) of the input directed to the representation is detected) (e.g., and/or while the representation is displayed using the first media processing style and the third media processing style (e.g., as discussed above)), the computer system displays a visual element (e.g., 660a, 660b) corresponding to a fifth media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) that is different from the fourth media processing style and that was not displayed before the input directed to the representation was detected (e.g., an indication, such as text and/or a symbol) (e.g., a user interface object, such as a border, an outline of a shape, and/or an indication that a portion of the representation can be displayed using the fifth media processing style) (e.g., a visual element that represents and/or appears to be an edge of the style and/or an edge of a frame line) (e.g., without displaying the visual element corresponding to the fourth media processing style) (e.g., while displaying the first portion of the representation and the second portion of the representation using the third media processing style (e.g., as discussed above)).
In some implementations, the visual element corresponding to the fourth media processing style is displayed at a first location on the style selection user interface (e.g., an edge of the representation of the media) and the visual element corresponding to the fifth media processing style is displayed at a second location on the style selection user interface that is different from the first location (e.g., an edge of the representation of the media opposite the first location). In some embodiments, the visual element corresponding to the fourth media processing style and/or the visual element corresponding to the fifth media processing style is displayed simultaneously with a visual element corresponding to the first media processing style. In some embodiments, the visual element corresponding to the fifth media processing style is displayed at a location/region of the style selection user interface (e.g., the right and/or left edge, in a direction opposite to the third direction) that is different from the location/region of the style selection user interface at which the visual element corresponding to the fourth media processing style is displayed. Displaying visual elements corresponding to respective styles based on the direction of the input directed to the representation provides visual feedback to the user regarding styles that can be selected via additional input directed to the representation, which provides improved visual feedback.
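The visual elements described in the two preceding paragraphs behave like edge affordances: each edge of the representation advertises the style that further input toward that edge would reveal, and no element is shown when no style is available in that direction. A sketch of that bookkeeping under the assumed ordered-list model above; the EdgeHints type is hypothetical.

```swift
/// Hypothetical description of which style hint, if any, is shown at each edge.
struct EdgeHints {
    let leading: String?    // visual element at one edge of the representation
    let trailing: String?   // visual element at the opposite edge
}

/// Each edge advertises the adjacent style that input toward that edge would
/// reveal; nil at the ends of the list means no visual element is displayed
/// for that direction.
func edgeHints(for current: Int, in styles: [String]) -> EdgeHints {
    EdgeHints(leading: current > 0 ? styles[current - 1] : nil,
              trailing: current + 1 < styles.count ? styles[current + 1] : nil)
}

let hints = edgeHints(for: 0, in: ["Standard", "Vibrant", "Rich Contrast"])
print(hints.leading as Any)    // nil — no style to select in that direction
print(hints.trailing as Any)   // Optional("Vibrant")
```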
In some embodiments, before the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) is detected (and, in some embodiments, while the first portion of the representation and the second portion of the representation are displayed using the first media processing style), the style selection user interface includes a visual element (e.g., 660a, 660b) (e.g., a visual element that represents and/or appears to be an edge of the style and/or an edge of a frame line) corresponding to the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., and/or the third media processing style, as described above with respect to method 900). In some implementations, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., in accordance with a determination that an end of the input was detected, or before or after detecting the end of the input) and in accordance with a determination that the input is in the first direction, the computer system ceases to display the visual element (e.g., 660a, 660b) corresponding to the second media processing style without displaying the representation (e.g., any portion of the representation) using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., without applying the second media processing style to the representation of the media). In some implementations, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., in accordance with a determination that an end of the input was detected, or before or after detecting the end of the input) and in accordance with a determination that the input is in a fourth direction different from the first direction, the computer system ceases to display a visual element (e.g., 660a, 660b) corresponding to a sixth media processing style without displaying the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) (e.g., any portion of the representation) using the sixth media processing style (e.g., without applying the sixth media processing style to the representation of the media). Ceasing to display the visual element corresponding to the second media processing style without displaying the representation using the second media processing style in accordance with a determination that the input is in the first direction, or ceasing to display the visual element corresponding to the sixth media processing style without displaying the representation using the sixth media processing style in accordance with a determination that the input is in a fourth direction different from the first direction, provides visual feedback informing the user that the respective media processing style corresponding to the visual element that has ceased to be displayed cannot be selected by input provided in the particular direction and/or that the user will need to change the direction of the input for the respective media processing style to be selected, which provides improved visual feedback.
In some embodiments, the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation is not detected at an indication (e.g., a portion) of the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., a portion of the representation of the media that is displayed using the second media processing style (and/or the third media processing style, as discussed above with respect to method 900), an indication (e.g., one or more text characters and/or symbols) of the second media processing style, and/or a visual element (e.g., a boundary of an object) that represents the second media processing style). Displaying different portions of the representation using respective media processing styles in response to an input that is not detected at the indication of the second media processing style allows the user to select a respective media processing style without requiring the user to direct the input at a particular user interface object, which provides additional control options without cluttering the user interface (e.g., without requiring additional displayed user interface controls and/or options).
In some implementations, the representation of the media is a representation of previously captured media (e.g., 676a, 676b, 676c, 680c, and/or 680d) (e.g., not a preview/view of the live camera field of view). In some implementations, after detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, the computer system displays an option (e.g., 816a, 816b) (e.g., a user interface object labeled "use") to use a seventh media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) for media captured in response to a future media capture request. In some implementations, while the option (e.g., 816a, 816b) to use the seventh media processing style is displayed, the computer system detects an input (e.g., 850c) (e.g., a tap input (e.g., a tap gesture) (e.g., a single-tap input, a double-tap input)) directed to the option to use the seventh media processing style (e.g., and/or to apply the seventh media processing style to visual content of media) (and/or, in some implementations, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)). In some implementations, in response to detecting the input (e.g., 850c) directed to the option to use the seventh media processing style, the computer system configures the computer system to use the seventh media processing style (e.g., for media captured in response to a future media capture request). In some embodiments, while the computer system (e.g., 600) is configured to use the seventh media processing style (e.g., for media captured in response to a future media capture request), the computer system detects a request to capture media (e.g., 650a, as discussed with respect to fig. 8A-8C). In some embodiments, while the computer system is configured to use the seventh media processing style, in response to detecting the request to capture media (e.g., 650a, as discussed with respect to fig. 8A-8C), the computer system captures respective media. In some embodiments, after capturing the respective media (e.g., and in response to a request to display the respective media), the computer system displays a first user interface (e.g., 668) (e.g., as discussed with respect to fig. 8A-8C) that includes a representation (e.g., 680c, 680d) of the respective media (e.g., previously captured media). In some implementations, the representation of the respective media is displayed in the first user interface using the seventh media processing style. In some implementations, the first user interface is displayed in response to detecting an input (e.g., a tap gesture) (e.g., a single-tap input, a double-tap input) on a media gallery user interface object and/or a thumbnail that represents the media using the seventh media processing style (and/or, in some implementations, in response to detecting a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)). In some embodiments, while the representation of the media is displayed using the seventh media processing style, the computer system detects a request to display a representation of other (e.g., previously captured) media and, in response to detecting the request to display the representation of the other media, displays the representation of the other media using the seventh media processing style.
In some implementations, the computer system displays the representations of the other media using the seventh media processing style regardless of whether the other media was captured using the seventh media processing style. In some implementations, the computer system displays the representations of the other media using the seventh media processing style only when the other media was not captured using a different media processing style. In some implementations, the other media is captured before the input directed to the option to use the seventh media processing style is detected. Configuring the computer system to use the seventh media processing style, in response to detecting the input directed to the option to use the seventh media processing style for media captured in response to a future media capture request, allows the user to control which media processing style will be applied to one or more representations of media to be captured in the future (and, in some embodiments, to representations of previously captured media), which provides additional control options without cluttering the user interface. Displaying different portions of the representation of previously captured media using respective media processing styles allows the user to select a media processing style for the previously captured media by providing input, and provides visual feedback to the user as to how a respective media processing style will affect one or more portions of the previously captured media, which provides additional control options without cluttering the user interface and provides improved visual feedback.
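The "use"-option flow above amounts to persisting a selected style in capture-configuration state so that later capture requests record it. A minimal sketch under that reading; the CaptureConfiguration type and the asset naming are illustrative assumptions.

```swift
/// Hypothetical capture-pipeline state holding the style selected for
/// future captures.
final class CaptureConfiguration {
    private(set) var activeStyle = "Standard"

    /// Invoked when the input directed at the "use" option is detected.
    func select(style: String) { activeStyle = style }

    /// Media captured after selection records the style that was active.
    func capture(assetName: String) -> (asset: String, style: String) {
        (assetName, activeStyle)
    }
}

let config = CaptureConfiguration()
config.select(style: "Rich Contrast")              // tap the option for the style
let photo = config.capture(assetName: "IMG_0001")  // later capture request
print(photo.style)                                 // "Rich Contrast"
```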
In some embodiments, the computer system is in communication with one or more cameras, including a first camera. In some implementations, the representation of the media includes a representation (e.g., 630) of at least a portion of the current field of view of at least the first camera (e.g., a live representation and/or live preview). In some implementations, the representation is updated as the portion of the current field of view of at least the first camera changes. In some embodiments, the portion of the current field of view of at least the first camera changes as the computer system is moved around, as one or more objects move into and/or out of the field of view of at least the first camera, and/or as other changes (e.g., lighting changes) occur in the field of view of at least the first camera. In some implementations, after detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, the computer system displays an option (e.g., 816a, 816b) (e.g., a user interface object labeled "use") to use an eighth media processing style for media captured in response to a future media capture request. In some embodiments, while the option to use the eighth media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) is displayed, the computer system detects an input (e.g., 850c) (e.g., a tap input (e.g., a tap gesture) (e.g., a single-tap input, a double-tap input)) directed to the option to use the eighth media processing style (e.g., and/or to apply the eighth media processing style to visual content of media) (and/or, in some embodiments, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)). In some implementations, in response to detecting the input (e.g., 850c) directed to the option to use the eighth media processing style, the computer system configures the computer system (e.g., 600) to use the eighth media processing style (e.g., for media captured in response to a future media capture request). In some embodiments, while the computer system (e.g., 600) is configured to use the eighth media processing style, the computer system detects a second request to capture media (e.g., 650a, as described with respect to fig. 8A-8C). In some embodiments, while the computer system is configured to use the eighth media processing style, in response to detecting the second request to capture media (e.g., 650a, as described with respect to fig. 8A-8C), the computer system captures second respective media. In some implementations, after capturing the second respective media, the computer system displays a second user interface (e.g., 668) (e.g., as discussed with respect to fig. 8A-8C) that includes a representation (e.g., 680c, 680d) of the second respective media. In some implementations, the representation of the second respective media (e.g., previously captured media) is displayed in the second user interface using the eighth media processing style. In some implementations, as part of detecting the second request, the computer system detects an input (e.g., a tap gesture) (e.g., a single-tap input, a double-tap input) on a camera application icon (and/or, in some implementations, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)) (e.g., to open the camera application).
In some embodiments, while the eighth media processing style is used to display a representation of the media, the computer system detects a request to close and re-open the application and, in response to detecting the request to close and re-open the application, displays a different representation of the media using the eighth media processing style. In some implementations, prior to capturing the respective media, and while the computer system is configured to use the eighth media processing style, the computer system displays, using the eighth media processing style, a third user interface that includes a representation of the media (e.g., a live preview and/or a portion of the current field of view of at least one camera). In some embodiments, while the computer system is configured to use the eighth media processing style, in response to detecting the second request to capture media, the computer system captures third respective media. In some implementations, after capturing the third respective media, the computer system does not display a user interface that includes a representation of the third respective media using the eighth media processing style. Configuring the computer system to use the eighth media processing style, in response to detecting the input directed to the option to use the eighth media processing style for media captured in response to a future media capture request, allows the user to control which media processing style will be applied to one or more representations of media to be captured in the future (and, in some embodiments, to representations of previously captured media), which provides additional control options without cluttering the user interface. Displaying different portions of the representation of at least a portion of the current field of view of at least the first camera using respective media processing styles allows the user to select, by providing input, a media processing style for media to be captured in response to a request to capture media (e.g., activation of a shutter button), and provides visual feedback to the user as to how a respective media processing style will apply to one or more portions of the current field of view after the media corresponding to the current field of view is captured, which provides additional control options without cluttering the user interface and provides improved visual feedback.
In some implementations, the computer system applies a first set of operations (e.g., media processing operations) to captured media (e.g., 680b, 680c) as part of applying a respective media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) to the captured media (e.g., as part of displaying a respective representation using the respective media processing style). In some implementations, the computer system applies a second set of operations (e.g., media processing operations) to the live preview (e.g., 630) as part of applying a respective media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) to a portion of the field of view (e.g., the current field of view) of the one or more cameras (e.g., as part of displaying a respective portion of the field of view of the one or more cameras). In some implementations, the parameters for the media processing operations in the first set of operations and the second set of operations are selected based on the respective media processing style. In some embodiments, the first set of operations takes longer or requires a greater amount of processing power to apply, while the second set of operations can be applied faster or with a lesser amount of processing power; using the first set of operations provides higher-quality results than using the second set of operations. In some embodiments, applying the second set of operations to the live preview allows the computer system to display the live preview using the respective media processing style with reduced latency and/or visual distortion compared to applying the first set of operations to the live preview.
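One way to picture the two operation sets is as two rendering paths selected per frame: a low-latency path for live preview frames and a full-quality path for finalized captures, with both paths parameterized by the same selected style. A sketch of that selection; the path names are assumptions, not disclosed implementation details.

```swift
/// Hypothetical rendering paths for applying a media processing style.
enum RenderPath {
    case fullQuality   // first set of operations: slower, higher-quality results
    case lowLatency    // second set of operations: faster, reduced latency/distortion
}

/// Chooses the operation set per frame: live preview frames take the fast
/// path; finalized captured media takes the high-quality path. Both paths
/// derive their operation parameters from the same selected style.
func renderPath(isLivePreviewFrame: Bool) -> RenderPath {
    isLivePreviewFrame ? .lowLatency : .fullQuality
}

print(renderPath(isLivePreviewFrame: true))    // lowLatency
print(renderPath(isLivePreviewFrame: false))   // fullQuality
```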
In some embodiments, while the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) is used to display the first portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) and the second portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d), an identification (e.g., 636a to 636d, 636aa, and/or 636dd) (e.g., one or more symbols and/or text (e.g., "standard", "vivid")) corresponding to the first media processing style is displayed. In some embodiments, the identification is overlaid on the representation of the media. In some implementations, the identification is positioned above, below, to the left of, to the right of, and/or overlaid on a portion of the representation of the media. Displaying an identification corresponding to the first media processing style while the first portion of the representation and the second portion of the representation are displayed using the first media processing style provides visual feedback so that the user can quickly identify which media processing style is being applied, without having to infer the type of media processing style from how it is applied to the representation of the media, which provides improved visual feedback.
In some implementations, as part of displaying the first portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) while continuing to display the second portion of the representation (e.g., a portion of the middle section, left section, and/or right section of 630) using the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd), the computer system displays a divider (e.g., 640) (e.g., a region and/or portion of the representation; a visually distinct user interface object defining the boundary between the first portion and the second portion) between the first portion and the second portion. In some embodiments, the divider is a region and/or portion of the representation to which the first media processing style, the second media processing style, and/or any other media processing style is not applied. In some embodiments, the divider is translucent. In some embodiments, the divider is not translucent. In some implementations, the computer system moves the divider across the display based on the magnitude of the input directed to the representation. In some embodiments, the computer system optionally changes the size of the first portion of the representation and the size of the second portion of the representation as the divider is moved across the display (e.g., in response to detecting the input). In some embodiments, the first portion of the representation and the second portion of the representation change relative to each other and/or in an inversely proportional manner (e.g., as the first portion of the representation increases in size, the second portion of the representation decreases in size (e.g., decreases by the same amount by which the first portion increases), or vice versa). Displaying a divider between the first portion of the representation and the second portion of the representation, as part of displaying the first portion using the second media processing style while continuing to display the second portion using the first media processing style, provides visual feedback so that the user can quickly identify which portion of the representation is being displayed using the second media processing style and/or which portion is being displayed using the first media processing style, which provides improved visual feedback.
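The divider behavior lends itself to a simple layout computation: the divider's position tracks the input's magnitude, and the two portions resize inversely so that their widths always sum to the width of the representation. A sketch under stated assumptions (a horizontal drag with a left-to-right reveal; all numbers are placeholders).

```swift
/// Hypothetical split of the representation during a style-switching drag.
/// The divider tracks the horizontal magnitude of the input, and the two
/// portions resize inversely: what one portion gains, the other loses.
struct SplitLayout {
    let incomingStyleWidth: Double
    let dividerX: Double
    let outgoingStyleWidth: Double
}

func splitLayout(totalWidth: Double, dragTranslation: Double) -> SplitLayout {
    // Clamp so the revealed region never leaves the representation's bounds.
    let revealed = min(max(dragTranslation, 0), totalWidth)
    return SplitLayout(incomingStyleWidth: revealed,
                       dividerX: revealed,
                       outgoingStyleWidth: totalWidth - revealed)
}

let layout = splitLayout(totalWidth: 390, dragTranslation: 150)
print(layout.incomingStyleWidth, layout.outgoingStyleWidth)   // 150.0 240.0
```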
In some embodiments, the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation is a movement input (e.g., a swipe input/gesture that includes a rate at which the input/gesture ends, or a drag input/gesture that causes a change based on movement during the input/gesture) (or, in some embodiments, is not a movement input (e.g., is a tap input or a press-and-hold input)).
In some embodiments, the computer system is in a first capture mode. In some implementations, while the style selection user interface is displayed and while the computer system (e.g., 600) is in the first capture mode (e.g., indicated by photo mode control 620c in fig. 6N), the computer system detects an input (e.g., a movement input) directed to the style selection user interface (and/or, in some implementations, detects a non-movement input/gesture (e.g., a press-and-hold input/gesture, a voice input, and/or a tap input)) (e.g., an input that is not directed to the representation, that is directed to one or more camera capture mode user interface objects, that is at the bottom of the user interface, and/or that is at a location different from the location at which the input directed to the representation of the media was detected). In some embodiments, the input directed to the style selection user interface is different from the input directed to the representation. In some embodiments, the computer system displays a camera control region that includes a plurality of selectable user interface objects for camera capture modes. In some implementations, each camera mode (e.g., video mode, photo/still mode, portrait mode, slow-motion mode, panoramic mode) has a plurality of settings (e.g., for a portrait capture mode: a studio lighting setting, a contour lighting setting, and a stage lighting setting), with multiple values (e.g., light levels for each setting), of the mode (e.g., the portrait capture mode) in which a camera (e.g., a camera sensor) operates to capture media (including post-processing that is performed automatically after capture). In this way, a capture mode is different from a mode that does not affect how the camera operates when capturing media or that does not include multiple settings (e.g., a flash mode having one setting with multiple values (e.g., inactive, active, automatic)). In some implementations, the capture modes allow the user to capture different types of media (e.g., photos or videos), and the settings for each mode can be optimized (e.g., via post-processing) to capture the particular type of media corresponding to the particular mode, having particular properties (e.g., shape (e.g., square, rectangular), speed (e.g., slow motion, time lapse), audio, and/or video).
For example, when the computer system is configured to operate in a still photo capture mode, the one or more cameras of the computer system, when activated, capture media of a first type (e.g., rectangular photos) with particular settings (e.g., a flash setting and/or one or more filter settings); when the computer system is configured to operate in a square capture mode, the one or more cameras of the computer system, when activated, capture media of a second type (e.g., square photos) with particular settings (e.g., a flash setting and one or more filters); when the computer system is configured to operate in a slow-motion capture mode, the one or more cameras of the computer system, when activated, capture media of a third type (e.g., slow-motion video) with particular settings (e.g., a flash setting and/or a frames-per-second capture speed); when the computer system is configured to operate in a portrait capture mode, the one or more cameras of the computer system capture media of a fifth type (e.g., portrait photos (e.g., photos with a blurred portion (e.g., a blurred background and/or foreground))) with particular settings (e.g., an amount of a particular type of light (e.g., stage light, studio light, and/or contour light), an amount of aperture, and/or an amount of blur); and/or when the computer system is configured to operate in a panoramic capture mode, the one or more cameras of the computer system capture media of a fourth type (e.g., panoramic photos (e.g., wide photos)) with particular settings (e.g., an amount of field of view to capture while zooming and/or moving). In some embodiments, the computer system generates a photo with a blurred portion by applying a synthetic depth effect to at least a portion of the field of view of the one or more cameras of the computer system, and, in some embodiments, the particular type of light is synthetic (e.g., computer-generated) (e.g., generated by the computer system using depth information for the photo and/or for at least a portion of the field of view (e.g., the current field of view) of the one or more cameras of the computer system). In some embodiments, when switching between capture modes, the display of the representation of the field of view changes to correspond to the type of media to be captured in that capture mode (e.g., the representation is rectangular while the computer system is operating in the still photo capture mode and square while the computer system is operating in the square capture mode). In some embodiments, the synthetic (e.g., computer-generated) depth effect adjusts a photo such that the photo appears to have been captured with a camera having an aperture (e.g., a physical or effective aperture) and/or focal length (e.g., a physical or effective focal length) different from the aperture and/or focal length of the one or more cameras that actually captured the photo. In some implementations, in response to detecting the input directed to the style selection user interface, the computer system transitions from being in the first capture mode to being in a different capture mode (e.g., indicated by portrait mode control 620d in fig. 6O) (e.g., while continuing to display the representation of the media using at least one media processing style and/or while continuing to apply the media processing style to the visual content of the media).
Transitioning the computer system from being in the first capture mode to being in a second capture mode different from the first capture mode in response to detecting input directed to the style selection user interface allows the user to control the capture mode in which the computer system operates, which provides additional control options without cluttering the user interface.
In some embodiments, after transitioning the computer system (e.g., 600) from being in the first capture mode (e.g., indicated by photo mode control 620c in fig. 6N) to being in the different capture mode (e.g., indicated by portrait mode control 620d in fig. 6O), the computer system detects a request to capture media. In some embodiments, in response to detecting the request to capture media, the computer system captures media in the different capture mode based on the currently selected media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., as discussed with respect to fig. 6O), including: in accordance with a determination that the currently selected media processing style is the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd), capturing media in the different capture mode using the first media processing style (e.g., as discussed with respect to fig. 6O); and in accordance with a determination that the currently selected media processing style is the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd), capturing media in the different capture mode using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd). In some implementations, the currently selected media processing style is applied to different media (e.g., media captured in different camera modes). In some implementations, when switching from a respective capture mode to a different respective capture mode, the representation continues to be displayed using the currently selected media processing style.
In some embodiments, the computer system (e.g., 600) is in a third capture mode (e.g., indicated by 620c) (e.g., before the input directed to the representation is detected) (e.g., still photo, video, slow motion, and/or portrait) (e.g., as discussed with respect to fig. 8A-8C). In some embodiments, after detecting the input directed to the representation (e.g., 630), the computer system detects a request to display a second user interface that includes a second representation of the media (e.g., as discussed with respect to fig. 6N-6O). In some implementations, in response to detecting the request to display the second user interface including the second representation of the media, the computer system displays the second user interface including the second representation of the media (e.g., 630). In some embodiments, while the second user interface is displayed, the computer system detects an input (e.g., 650n1, 650n2) (e.g., a movement input) directed to the second representation (and/or, in some embodiments, detects a non-movement input/gesture (e.g., a press-and-hold input/gesture, a voice input, and/or a tap input)) (e.g., as discussed with respect to fig. 6N-6O). In some embodiments, in response to detecting the input (e.g., 650n1, 650n2) directed to the second representation (e.g., 630): in accordance with a determination that the computer system is not in a first media processing style selection mode (e.g., a mode in which the computer system applies one or more media processing styles to the second representation in response to such an input) (e.g., as indicated by 602b), the computer system transitions from being in the third capture mode (e.g., as indicated by 620c) to being in a fourth capture mode (e.g., as indicated by 620d) (e.g., as discussed with respect to fig. 6N-6O); and in accordance with a determination that the computer system is in the first media processing style selection mode (e.g., as indicated by 602b), the computer system remains in the third capture mode (e.g., as indicated by 620c) (e.g., and changes the media processing style used for capturing media and displays an indication that the media processing style has changed) (e.g., still photo, video, slow motion, and/or portrait) (e.g., forgoes transitioning from being in the third capture mode to being in the fourth capture mode) (e.g., as discussed with respect to fig. 6N-6O). Selecting, when the prescribed condition is met, whether to transition the computer system from being in the third capture mode to being in the fourth capture mode or to keep the computer system in the third capture mode, based on whether the computer system is in the media processing style selection mode, allows the computer system to intelligently perform different operations depending on whether the computer system is in the media processing style selection mode, which performs an operation when a set of conditions has been met without requiring additional user input.
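The mode/style gating described above routes one gesture to two different outcomes depending on a single flag. A sketch of that routing; the enum cases and the closure-based style advance are hypothetical structure, not the disclosed implementation.

```swift
enum CaptureMode { case photo, video, portrait, slowMotion, panorama }

/// Hypothetical routing of a swipe on the camera preview: the same gesture
/// either previews/selects a media processing style (style selection mode)
/// or transitions the capture mode (otherwise).
func handlePreviewSwipe(inStyleSelectionMode: Bool,
                        currentMode: CaptureMode,
                        nextMode: CaptureMode,
                        advanceStyle: () -> Void) -> CaptureMode {
    if inStyleSelectionMode {
        advanceStyle()        // swipe changes the media processing style...
        return currentMode    // ...and the capture mode is maintained
    }
    return nextMode           // otherwise the swipe switches capture modes
}

// In style selection mode the capture mode is kept and the style advances.
var styleChanges = 0
let mode = handlePreviewSwipe(inStyleSelectionMode: true,
                              currentMode: .photo,
                              nextMode: .portrait,
                              advanceStyle: { styleChanges += 1 })
print(mode, styleChanges)   // photo 1
```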
In some embodiments, prior to detecting the input directed to the representation, the style selection user interface includes a plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, 626d2) for the first media processing style (e.g., for editing/modifying parameters of the first media processing style (e.g., visual characteristics (e.g., color temperature, hue, brightness, saturation, chroma, colorfulness, coolness, and/or harmony) and/or depth parameters)).
In some implementations, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, in accordance with a determination that the input directed to the representation is in the first direction and in accordance with a determination that the second media processing style is being applied to a fourth portion (e.g., a middle portion) of the representation (e.g., 630) of the media (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) (and/or in accordance with a determination that more than a predetermined portion (e.g., 25%, 30%, 40%, 50%, 60%, 75%) of the representation is displayed using the second media processing style when an end of the input directed to the representation is detected (e.g., at the same time as, immediately before, and/or immediately after the end of the input is detected)), the computer system displays a plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, 626d2) for the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd), together with a representation of a current value for the second media processing style (e.g., a representation that is different from a representation of a current value for the first media processing style), and ceases to display the plurality of selectable user interface objects for the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., replaces the display of the plurality of selectable user interface objects for the first media processing style with the display of the plurality of selectable user interface objects for the second media processing style) (e.g., as described with respect to fig. 6A-6D and method 900). In some implementations, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, in accordance with a determination that the input directed to the representation is in the first direction and in accordance with a determination that the second media processing style is not being applied to the fourth portion (e.g., a middle portion) of the representation (e.g., 630) of the media (e.g., a portion of the middle section, left section, and/or right section of 630, 676a, 676b, 676c, 680c, and/or 680d) (and/or in accordance with a determination that less than the predetermined portion (e.g., 25%, 30%, 40%, 50%, 60%, 75%) of the representation is displayed using the second media processing style when an end of the input directed to the representation is detected (e.g., at the same time as, immediately before, and/or immediately after the end of the input is detected)), the computer system continues to display the plurality of selectable user interface objects for the first media processing style (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, 626d2) without displaying the plurality of selectable user interface objects for the second media processing style (e.g., forgoes replacing the display of the plurality of selectable user interface objects for the first media processing style with the display of the plurality of selectable user interface objects for the second media processing style) (e.g., as described with respect to fig. 6A-6D and method 900). Selecting, when the prescribed condition is met, whether to display the plurality of selectable user interface objects for the second media processing style or to continue displaying the plurality of selectable user interface objects for the first media processing style allows the computer system to provide the user with the selectable options for the media processing style determined to be likely relevant to the user, which performs an operation when a set of conditions has been met without requiring further user input.
In some implementations, the plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, 626d2) for the first media processing style is displayed at one or more locations on (e.g., overlaying) the representation of the media (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) (e.g., a live preview and/or previously captured media) (e.g., as described with respect to fig. 6A-6D and method 900). In some implementations, the one or more locations are in a bottom portion of the representation of the media, in a bottom portion of the representation of the media displayed in a camera display region (e.g., 604), near (e.g., above) a user interface object (e.g., 610) for capturing media and/or a camera capture mode user interface object, and/or at one or more locations between an indication region (e.g., 602) and a control region (e.g., 606). In some embodiments, as part of displaying the plurality of selectable user interface objects for the first media processing style, the computer system ceases to display one or more other selectable user interface objects (e.g., one or more selectable objects for controlling a zoom level of the representation of the media and/or one or more selectable objects for controlling a synthetic lighting effect applicable to the representation of the media). Displaying the plurality of selectable objects for the first media processing style at one or more locations on the representation of the media provides feedback to the user regarding the selectable user interface objects available for (e.g., for editing, and corresponding to) the first media processing style while effectively using the limited space available for displaying user interface elements (e.g., on a display or in a predetermined display area), and provides feedback to the user regarding the representation of the media, which provides additional control options without cluttering the user interface.
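The control-swapping behavior in the preceding paragraphs can be thought of as keying the displayed parameter controls off whichever style currently covers the middle (majority) portion of the representation, with values stored per style. A sketch under that reading; the parameter names "tone" and "warmth" are stand-ins for the visual characteristics listed above.

```swift
/// Hypothetical per-style editable parameters; "tone" and "warmth" stand in
/// for the visual characteristics listed above.
struct StyleParameters { var tone: Int; var warmth: Int }

/// Values are kept per style, so switching styles swaps in that style's
/// controls together with its current values.
var parameterStore: [String: StyleParameters] = [
    "Standard": StyleParameters(tone: 0, warmth: 0),
    "Vibrant":  StyleParameters(tone: 40, warmth: 10),
]

/// Returns the controls/values to display for whichever style currently
/// covers the middle (majority) portion of the representation.
func controlsToDisplay(forMajorityStyle style: String) -> StyleParameters {
    parameterStore[style, default: StyleParameters(tone: 0, warmth: 0)]
}

// After a swipe leaves "Vibrant" covering the middle portion, its controls
// and current values replace those shown for "Standard".
print(controlsToDisplay(forMajorityStyle: "Vibrant").tone)   // 40
```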
In some embodiments, while the first media processing style is selected for use (e.g., while the first portion of the representation and the second portion of the representation are displayed using the first media processing style), the computer system (e.g., 600) detects a first request (e.g., 650a, 650c, 650j) to capture media (e.g., detects an input (e.g., a tap gesture) (e.g., a single-tap input, a double-tap input) on a user interface object for capturing media (and/or, in some embodiments, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input))). In some embodiments, in response to detecting the first request to capture media, the computer system captures media to which the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) is applied (e.g., and to which the second media processing style is not applied). In some embodiments, after capturing the media to which the first media processing style is applied, and while the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) is selected for use (e.g., while the first portion of the representation and the second portion of the representation are displayed using the second media processing style) (e.g., as discussed with respect to fig. 6O-6U), the computer system detects a second request (e.g., 650a, 650c, 650j) to capture media (e.g., detects an input (e.g., a tap gesture) (e.g., a single-tap input, a double-tap input) on a user interface object for capturing media (and/or, in some embodiments, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input))) (e.g., corresponding to a different activation and/or the same activation of a user interface object for capturing media). In some embodiments, in response to detecting the second request to capture media, the computer system captures media to which the second media processing style is applied (e.g., as discussed with respect to fig. 6O-6U). In some embodiments, the representation of the media that includes the first portion of the representation and the second portion of the representation to which the second media processing style is applied is different from the representation of the media that includes the first portion of the representation and the second portion of the representation to which the first media processing style is applied. In some embodiments, in response to detecting the second request to capture media, the computer system captures a plurality of photos and/or videos and applies the second media processing style to the plurality of photos and/or videos. In some implementations, the second request to capture media includes a plurality of requests to capture media (e.g., includes detecting a plurality of inputs/gestures). In some implementations, the first request to capture media includes a single request (e.g., includes detecting a single input/gesture).
Capturing media that includes the one or more portions of the representation to which the respective media processing style is applied, in response to detecting the request to capture media while those one or more portions are displayed, allows the computer system to intelligently capture media that represents what is displayed when the request to capture media is detected, which performs an operation when a set of conditions has been met without requiring additional user input.
In some embodiments, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d), in accordance with a determination that an end (e.g., lift-off) of the input has been detected, and in accordance with a determination that the input directed to the representation meets one or more movement criteria (e.g., the input has been detected for longer than a certain duration, has been detected to have a rate (e.g., an average rate and/or a highest rate) above a threshold (e.g., a non-zero threshold), has been detected to end at a particular location on the style selection user interface, and/or has been detected to move more than a threshold (e.g., non-zero) distance (e.g., from a starting location to an ending location)), the computer system displays (e.g., quickly, abruptly, and/or immediately after detecting the end of the input) the first portion of the representation and the second portion of the representation using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., without using the first media processing style) (e.g., to indicate that the second media processing style has been selected for use in capturing media in response to a future media capture input). In some implementations, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d), in accordance with a determination that an end (e.g., lift-off) of the input has been detected, and in accordance with a determination that the input directed to the representation does not meet the one or more movement criteria, the computer system displays (e.g., quickly, abruptly, and/or immediately after detecting the end of the input) the first portion of the representation and the second portion of the representation using the first media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., without using the second media processing style) (e.g., to indicate that the first media processing style has been selected for use in capturing media in response to a future media capture input). Selecting, based on the movement of the input, whether to display the first portion of the representation and the second portion of the representation using the second media processing style or using the first media processing style allows the computer system to intelligently provide feedback to the user regarding which media processing style has been selected and will affect the display and/or capture of media going forward, which performs an operation when a set of conditions has been met without requiring additional user input and provides improved visual feedback.
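The movement criteria above combine several signals (duration, rate, end location, distance). Because the description joins them with "and/or", the disjunctive combination sketched below is one permissible reading, and every threshold value is a placeholder (the description only says the rate and distance thresholds are non-zero).

```swift
import Foundation

/// Hypothetical movement criteria; all thresholds are placeholders (the
/// description only requires the rate and distance thresholds to be non-zero).
struct MovementCriteria {
    var minimumDuration: TimeInterval = 0.05   // seconds
    var minimumPeakRate: Double = 300          // points per second
    var minimumDistance: Double = 80           // points, start to end
}

/// Disjunctive combination ("and/or" in the description): meeting any one
/// signal commits the incoming style at lift-off; otherwise the
/// representation snaps back to the outgoing style.
func meetsMovementCriteria(duration: TimeInterval,
                           peakRate: Double,
                           distance: Double,
                           criteria: MovementCriteria = MovementCriteria()) -> Bool {
    duration > criteria.minimumDuration
        || peakRate > criteria.minimumPeakRate
        || distance > criteria.minimumDistance
}

// A quick flick: short duration and distance, but the rate alone commits.
print(meetsMovementCriteria(duration: 0.04, peakRate: 650, distance: 30))   // true
```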
In some embodiments, after detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, the computer system displays the first portion of the representation and the second portion of the representation using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd). In some embodiments, while the first portion of the representation and the second portion of the representation are displayed using the second media processing style, the computer system detects a second input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (and, in some embodiments, the second input includes movement in the same direction as the direction of movement of the input directed to the representation). In some implementations, in response to detecting the second input directed to the representation, in accordance with a determination that the second input directed to the representation is in the first direction, the computer system displays the first portion of the representation using a ninth media processing style (e.g., 634a to 634d, 634aa, and/or 634dd) (e.g., different from the first media processing style, the second media processing style, and the third media processing style (e.g., as described above with respect to method 900)) while continuing to display the second portion of the representation using the second media processing style. In some embodiments, as part of displaying the first portion of the representation using the ninth media processing style while continuing to display the second portion of the representation using the second media processing style (e.g., 634a to 634d, 634aa, and/or 634dd), the computer system, in response to detecting a first portion of the second input directed to the representation (and, in some embodiments, the first portion of the second input has a third input magnitude), displays the first portion of the representation using the ninth media processing style while simultaneously displaying the second portion of the representation and a third portion of the representation using the second media processing style. In some implementations, after the first portion of the representation is displayed using the ninth media processing style while the second portion of the representation and the third portion of the representation are simultaneously displayed using the second media processing style, and in response to detecting a second portion of the second input directed to the representation (in some implementations, the second portion of the second input has a fourth input magnitude that is greater than the third input magnitude), the computer system displays the first portion of the representation and the third portion of the representation using the ninth media processing style while displaying the second portion of the representation using the second media processing style.
In some embodiments, in response to detecting the second input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, and in accordance with a determination that an end of the second input has been detected, the computer system: in accordance with a determination that the second input directed to the representation meets one or more movement criteria (e.g., the input has been detected for longer than a certain duration, has been detected to have a rate (e.g., an average rate and/or a highest rate) above a threshold (e.g., a non-zero threshold), has been detected to end at a particular location on the style selection user interface, and/or has been detected to travel (e.g., from a starting location to an ending location) more than a threshold (e.g., non-zero) distance), displays (e.g., promptly, abruptly, and/or immediately after detecting the end of the input) the first portion of the representation and the second portion of the representation using the ninth media processing style (e.g., 634a-634d, 634aa, and/or 634dd) (e.g., without using the first media processing style or the second media processing style); and in accordance with a determination that the second input directed to the representation does not meet the one or more movement criteria, displays (e.g., promptly, abruptly, and/or immediately after detecting the end of the input) the first portion of the representation and the second portion of the representation using the second media processing style (e.g., without using the ninth media processing style or the first media processing style). Selecting whether to display the first portion of the representation and the second portion of the representation using the ninth media processing style or using the second media processing style based on the movement of the input allows the computer system to intelligently provide feedback to the user about which media processing style has been selected and will affect the display and/or capture of media going forward, which provides additional control options without cluttering the user interface and provides improved visual feedback. Displaying the first portion of the representation using the ninth media processing style while displaying the second portion of the representation and the third portion of the representation using the second media processing style in response to detecting the first portion of the second input directed to the representation provides the user with visual feedback about how different media processing styles affect the visual content of the representation of the media differently, and about at least some media processing styles that can be selected based on the second input directed to the representation, which provides improved visual feedback.
Displaying, in response to detecting the second input directed to the representation after detecting the input directed to the representation, and in accordance with a determination that the second input directed to the representation is in the first direction, the first portion of the representation using the ninth media processing style while continuing to display the second portion of the representation using the second media processing style allows the user to control which portions of the representation are displayed using media processing styles other than the first media processing style and the second media processing style, which provides additional control options without cluttering the user interface.
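One way to picture the divided-preview behavior described in the preceding paragraphs is the following Swift sketch, in which the boundary between the incoming style and the currently selected style tracks the input magnitude. The type and function names are assumptions, not taken from the patent.

```swift
import CoreGraphics

/// Illustrative sketch (not the patent's implementation): as the second
/// input moves, the boundary between the incoming style and the currently
/// selected style tracks the input magnitude, so a larger drag shows more
/// of the representation with the incoming style.
struct DividedPreviewLayout {
    let previewWidth: CGFloat

    /// Width of the region drawn with the incoming style for a given
    /// input magnitude; the remainder keeps the current style.
    func incomingStyleWidth(forInputMagnitude magnitude: CGFloat) -> CGFloat {
        min(max(magnitude, 0), previewWidth)
    }
}

let layout = DividedPreviewLayout(previewWidth: 390)
print(layout.incomingStyleWidth(forInputMagnitude: 120))  // 120.0 (smaller, third input magnitude)
print(layout.incomingStyleWidth(forInputMagnitude: 300))  // 300.0 (larger, fourth input magnitude)
```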
In some embodiments, prior to displaying the style selection user interface that includes the representation of media displayed using the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd), the computer system displays a user interface that includes a fourth representation of the media (e.g., displayed without using the first media processing style or any other media processing style (e.g., any other user-selected/predefined media processing style (e.g., applied to the representation in response to detecting an input such as the input directed to the representation), or the second or third media processing style as discussed above)) and a user interface object (e.g., 602b) for displaying the style selection user interface, where the user interface object is displayed at a first respective location in the user interface that includes the fourth representation of the media (e.g., in a mode that causes the computer system to apply one or more media processing styles to a representation) (e.g., as described with respect to method 1000). In some embodiments, while displaying the user interface object (e.g., 602b) for displaying the style selection user interface, the computer system detects an input (e.g., 650b) (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) directed to the user interface object (e.g., and/or directed to the first respective location) for displaying the style selection user interface (and/or, in some embodiments, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)). In some implementations, in response to detecting the input directed to the user interface object for displaying the style selection user interface, the computer system displays the style selection user interface (e.g., an interface including one or more of 634a-634d, 634aa, and/or 634dd) (e.g., and/or ceases to display the representation of the media that was not displayed using the first media processing style or any other media processing style (e.g., any other user-selected/predefined media processing style)). In some embodiments, in response to detecting the first input directed to the user interface object for displaying the style selection user interface, the computer system is configured to operate in a style selection mode. In some implementations, as part of displaying the style selection user interface, the computer system displays (and/or continues to display) the representation of the media using the currently selected media processing style. Displaying the representation of media using the first media processing style in response to detecting the input directed to the user interface object for displaying the style selection user interface allows the user to control, via the computer system, whether the style selection user interface is displayed, in which the user can set a new media processing style to be applied to the representation of media, which provides additional control options without cluttering the user interface.
In some implementations, the style selection user interface (e.g., an interface including one or more of 634a-634d, 634aa, and/or 634dd) includes a user interface object (e.g., 602b) for controlling a setting (e.g., an aperture setting (e.g., for controlling a depth parameter) and/or a setting for turning off a photo capture mode in which multiple photos are captured in response to a single request to capture media) at a second respective location in the style selection user interface. In some embodiments, while displaying the style selection user interface and the user interface object (e.g., 602b) for controlling the setting at the second respective location, the computer system detects an input (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) (and/or, in some embodiments, a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)) directed to the second respective location in the style selection user interface (e.g., the location at which the user interface object for displaying the style selection user interface was previously displayed). In some embodiments, in response to detecting the input directed to the second respective location in the style selection user interface, the computer system ceases to display the style selection user interface (e.g., as discussed above with respect to the inputs detected at 602c and 602d) (e.g., and/or ceases to display the representation of the media in which a portion of the representation is displayed using a media processing style (e.g., the first media processing style, the second media processing style, and/or any other user-selected/predefined media processing style as discussed above)). In some embodiments, in response to detecting the input directed to the second respective location in the user interface, the computer system displays one or more user interface objects (e.g., sliders) for controlling the setting. In some embodiments, in response to detecting the input directed to the second respective location in the style selection user interface, the computer system is no longer configured to operate in the media processing style selection mode. In some embodiments, as part of ceasing to display the style selection user interface, the computer system continues to display the representation using the currently selected media processing style. Ceasing to display the style selection user interface in response to detecting an input directed to the user interface object for controlling the setting (e.g., an input detected while displaying a user interface that includes the representation of media displayed using the first media processing style) allows the user to control, via the computer system, whether the style selection user interface is displayed, in which the user can select a new media processing style to be applied to the representation of media, which provides additional control options without cluttering the user interface.
In some implementations, after displaying the style selection user interface, the computer system receives a request to display a camera user interface. In some embodiments, in response to receiving the request to display the camera user interface, the computer system displays the camera user interface (e.g., a user interface including 602, 604, and/or 606), including concurrently displaying in the camera user interface: a representation of the field of view of the one or more cameras (e.g., 630); and a respective user interface object (e.g., 602b) that, when selected, causes the style selection user interface to be displayed (e.g., a user interface object for displaying the style selection user interface), including: in accordance with a determination that the first media processing style is currently selected as the media processing style, displaying the respective user interface object (e.g., 602b) with a first appearance (e.g., without displaying the affordance with the second appearance). In some implementations, in accordance with a determination that the second media processing style is currently selected as the media processing style, the computer system displays the respective user interface object (e.g., 602b) with a second appearance that is different from the first appearance (e.g., as described above with respect to 602b at fig. 6A-6D) (e.g., without displaying the affordance with the first appearance). In some embodiments, the camera user interface further includes a user interface object (e.g., 610) for capturing media, which is displayed concurrently with the representation of the field of view of the one or more cameras and the affordance and which, when selected, causes the device to capture media with the one or more cameras of the device. In some embodiments, the computer system displays the respective user interface object with the first appearance when the default style is the currently selected media processing style and with the second appearance when one of one or more (and/or a predetermined number of) different non-default styles is selected. Displaying the user interface object that, when selected, causes the style selection user interface to be displayed with a different visual appearance based on whether the currently selected media processing style is the first media processing style or the second media processing style provides the user with visual feedback about the media processing style currently being applied to, and/or currently configured for, the visual content of the media, which provides improved visual feedback.
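A hedged SwiftUI sketch of the respective user interface object whose appearance depends on whether the first (e.g., default) or second (e.g., non-default) media processing style is currently selected might look as follows; the view name, SF Symbol, and colors are illustrative assumptions.

```swift
import SwiftUI

/// Sketch of a button whose "first appearance" (default style selected)
/// differs from its "second appearance" (non-default style selected).
struct StyleModeButton: View {
    let isDefaultStyleSelected: Bool
    let action: () -> Void

    var body: some View {
        Button(action: action) {
            Image(systemName: "camera.filters")
                // First appearance for the default style; second appearance
                // (here, a highlighted tint) when a non-default style is set.
                .foregroundColor(isDefaultStyleSelected ? .gray : .yellow)
        }
        .accessibilityLabel("Show style selection")
    }
}
```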
In some implementations, the user interface includes a first user interface object that is displayed concurrently with the first portion of the representation and the second portion of the representation displayed using the first media processing style (e.g., as discussed with respect to fig. 8A-8C). In some embodiments, while displaying the first user interface object concurrently with the first portion of the representation and the second portion of the representation displayed using the first media processing style, the computer system detects, via the one or more input devices, an input (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) directed to the first user interface object (and/or, in some embodiments, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)) (e.g., as discussed with respect to fig. 8A-8C). In some embodiments, in response to detecting the input directed to the first user interface object, the computer system displays the first portion of the representation and the second portion of the representation without using the first media processing style (e.g., as discussed with respect to fig. 8A-8C) (e.g., using a style that is displayed when the computer system is not operating in a media processing style selection mode and/or that cannot be changed by an input directed to the representation). In some implementations, in response to detecting the input directed to the first user interface object, the first portion of the representation and the second portion of the representation are displayed using a media processing style different from the first media processing style. In some implementations, in response to detecting the input directed to the first user interface object, the first portion of the representation and the second portion of the representation are displayed using a neutral style (and/or a default style). Displaying the first portion of the representation and the second portion of the representation without using the first media processing style in response to detecting the input directed to the first user interface object (e.g., an input detected while the first user interface object is displayed concurrently with the first portion of the representation and the second portion of the representation displayed using the first media processing style) allows the user to control whether the representation will be displayed using the first media processing style, which provides additional control options without cluttering the user interface.
In some implementations, the style selection user interface includes a selectable user interface object (e.g., 610) for capturing media (e.g., a shutter button). In some embodiments, while displaying the representation of the media (e.g., 630) using the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd) and displaying the selectable user interface object for capturing media (e.g., and the style selection user interface), the computer system detects an input (e.g., 650a, 650c, and/or 650j) (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) directed to the selectable user interface object for capturing media (e.g., at a position in the style selection user interface) (and/or, in some embodiments, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)). In some implementations, in response to detecting the input (e.g., 650a, 650c, and/or 650j) directed to the selectable user interface object for capturing media, the computer system captures media to which the first media processing style is applied (e.g., based on the current values of the parameters of the first media processing style). In some embodiments, in response to detecting the input directed to the selectable user interface object for capturing media while an input directed to the representation (e.g., a request to switch media processing styles) is detected (e.g., immediately before/after), the computer system initiates capture of media to which a media processing style is applied, where the applied media processing style is the style applied to a predetermined portion (e.g., 25%, 30%, 40%, 50%, 60%, and/or 75%) of the representation and/or to a portion of the representation of the media that is larger than (or equal to) the other displayed portions of the representation of the media. Capturing media to which the first media processing style is applied in response to detecting the input directed to the selectable user interface object for capturing media provides additional control options without cluttering the user interface.
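The last behavior above (capturing with the style that covers the largest portion of the representation when capture is requested mid-switch) could be sketched as follows in Swift; the types and the example style names are hypothetical.

```swift
/// Hypothetical bookkeeping of how much of the preview each style covers.
struct StyleCoverage {
    let styleID: String
    let fraction: Double   // share of the preview currently drawn with this style, 0...1
}

/// Picks the style applied to the largest (or equal-largest) portion of
/// the representation at the moment capture is requested.
func styleForCapture(_ coverages: [StyleCoverage]) -> String? {
    coverages.max(by: { $0.fraction < $1.fraction })?.styleID
}

let midSwipe = [StyleCoverage(styleID: "Standard", fraction: 0.4),
                StyleCoverage(styleID: "Vibrant",  fraction: 0.6)]
print(styleForCapture(midSwipe) ?? "none")   // "Vibrant"
```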
In some embodiments, as part of displaying the first portion of the representation using the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd), the computer system applies the first media processing style differently to one or more objects (e.g., people in live preview 630) (e.g., people and/or faces of people) (e.g., identifiable objects) than to a second portion of the representation that does not include the one or more objects (e.g., for one type of identified object (e.g., a person), using a different set of visual parameters (e.g., color characteristics (e.g., color temperature, hue, brightness, saturation, chroma, coldness, and/or harmony) and/or depth parameters) than for a different type of identified object and/or for a subset of the first portion). In some implementations, the first media processing style is applied differently to different portions of the representation in an attempt to preserve the appearance of certain portions of the scene (e.g., portions of the scene including the sky, skin tones, a user's face, etc.) included in the representation of the media.
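As a rough sketch of applying a style "differently" to recognized subjects than to the rest of the frame, the following Swift snippet damps the style's adjustments over recognized subjects; the parameter names and the damping factor are assumptions for illustration only.

```swift
/// Hypothetical parameter set; names are assumptions.
struct StyleParameters {
    var warmth: Double
    var tone: Double
}

/// Returns the parameters to use for a region, damping the style's
/// adjustments over recognized subjects (e.g., faces) so that skin
/// tones keep a more natural appearance.
func regionParameters(base: StyleParameters, isRecognizedSubject: Bool) -> StyleParameters {
    guard isRecognizedSubject else { return base }
    return StyleParameters(warmth: base.warmth * 0.5, tone: base.tone * 0.5)
}
```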
In some implementations, the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd) is applied to the representation of the media based on one or more parameters selected from the group consisting of contrast, sharpness, color temperature, and combinations thereof (e.g., as described with respect to fig. 6A-6C and method 700).
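For illustration, one plausible way to apply a style defined by contrast, sharpness, and color-temperature parameters is with Core Image's built-in filters, as sketched below. This is one possible rendering pipeline under stated assumptions, not the patent's method.

```swift
import CoreImage

/// One plausible pipeline (an assumption, not the patent's method) for
/// applying a style defined by contrast, sharpness, and color temperature.
func apply(contrast: Double, sharpness: Double, temperature: Double,
           to image: CIImage) -> CIImage {
    var output = image

    let color = CIFilter(name: "CIColorControls")!
    color.setValue(output, forKey: kCIInputImageKey)
    color.setValue(contrast, forKey: kCIInputContrastKey)
    output = color.outputImage ?? output

    let sharpen = CIFilter(name: "CISharpenLuminance")!
    sharpen.setValue(output, forKey: kCIInputImageKey)
    sharpen.setValue(sharpness, forKey: kCIInputSharpnessKey)
    output = sharpen.outputImage ?? output

    // CITemperatureAndTint shifts the neutral point; 6500 K means "no change".
    let temp = CIFilter(name: "CITemperatureAndTint")!
    temp.setValue(output, forKey: kCIInputImageKey)
    temp.setValue(CIVector(x: temperature, y: 0), forKey: "inputNeutral")
    output = temp.outputImage ?? output

    return output
}
```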
In some implementations, while displaying the first portion of the representation (e.g., a middle section and/or portions of the left-side and/or right-side sections of 630, 676a, 676b, 676c, 680c, and/or 680d) and the third portion of the representation using the second media processing style (e.g., 634a-634d, 634aa, and/or 634dd) while the second portion of the representation is displayed using the first media processing style, the computer system detects an end of the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation. In some implementations, in response to detecting the end of the input directed to the representation, the computer system ceases to display the second portion of the representation (e.g., at least a part of the second portion and/or an edge of the representation) using the first media processing style (e.g., fades out the second portion of the representation that was displayed using the first media processing style) and reduces the visual salience (e.g., darkens, fades, de-emphasizes, and/or increases the opacity over) of a subset of the second portion of the representation (e.g., 660a, 660b, and/or portions of the middle, left-side, and/or right-side sections of 630, 676a, 676b, 676c, and/or 680d) (e.g., a portion of the representation that is included in and smaller than the second portion of the representation) while displaying the first portion of the representation, the second portion of the representation, and/or the third portion of the representation using the second media processing style. In some embodiments, while the first portion of the representation is displayed using the second media processing style and the second portion of the representation and the third portion of the representation are displayed using the first media processing style, the computer system detects an end of the input directed to the representation; in response to detecting the end of the input, the computer system ceases to display the first portion of the representation (e.g., at least a part of the first portion and/or an edge of the representation) using the second media processing style and reduces the visual salience of a subset of the first portion of the representation (e.g., a portion of the representation that is included in and smaller than the first portion of the representation). In some implementations, while displaying the subset of the second portion of the representation (e.g., portions of the middle, left-side, and/or right-side sections of 630, 676a, 676b, 676c, 680c, and/or 680d) with reduced visual salience, the computer system detects a third input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d). In some embodiments, in response to detecting the third input directed to the representation, the computer system increases the visual salience (e.g., does not darken, and/or lightens, highlights, and/or reduces the opacity over) of the subset of the second portion of the representation.
Increasing the visual salience of the subset of the second portion of the representation in response to detecting the third input directed to the representation provides the user with visual feedback that an end of the input directed to the representation has not yet been detected and, in some embodiments, provides the user with visual feedback about how a media processing style can affect the subset of the second portion of the representation, which provides improved visual feedback. Reducing the visual salience of the subset of the second portion of the representation in response to detecting the end of the input directed to the representation provides the user with visual feedback that a media processing style has been selected via the input and/or that an input is not currently being detected, which gives the user confidence that no unintended change to the selected media processing style will occur without further user input, which provides improved visual feedback.
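The visual-salience behavior above (dimming the edge subsets on liftoff and restoring them when a new input begins) might be rendered with a simple SwiftUI overlay like the following sketch; the view structure, widths, and opacity values are assumptions.

```swift
import SwiftUI

struct StyleEdgeOverlay: View {
    let isInputActive: Bool   // true while a touch directed to the representation is detected

    var body: some View {
        HStack {
            Rectangle().frame(width: 24)   // edge hinting at the previous style
            Spacer()
            Rectangle().frame(width: 24)   // edge hinting at the next style
        }
        .foregroundColor(.white)
        // Reduced visual salience (dimming) once the input ends; restored
        // when a new input directed to the representation begins.
        .opacity(isInputActive ? 1.0 : 0.35)
        .animation(.easeInOut(duration: 0.2), value: isInputActive)
    }
}
```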
In some implementations, displaying the representation of the media includes: in accordance with a determination that the representation (e.g., and/or a portion of the representation of the media) would be displayed using a tenth media processing style (e.g., 634a-634d, 634aa, and/or 634dd) (and/or any media processing style) in response to detecting a fourth input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, displaying a fifth portion of the representation (e.g., 660a, 660b, and/or the right-side and/or left-side sections) (e.g., an edge (e.g., left and/or right edge) and/or a visual element) with a first visual appearance (e.g., a first color and/or without graying); and in accordance with a determination that the representation of the media (e.g., and/or a portion of the representation of the media) would not be displayed using the tenth media processing style (and/or any media processing style) in response to detecting the fourth input directed to the representation, displaying the fifth portion of the representation (e.g., 660a, 660b, and/or the right-side and/or left-side sections) with a second visual appearance that is different from the first visual appearance. Displaying the fifth portion of the representation differently based on a determination of whether the representation of the media would be displayed using the tenth media processing style provides the user with visual feedback about whether the user can select the respective media processing style via an input and/or whether the respective media processing style is accessible via an input in a particular direction, which provides improved visual feedback.
In some implementations, before the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation is detected, and while the first portion of the representation and the second portion of the representation are displayed using the first media processing style, a sixth portion of the representation of the media (e.g., 660a, 660b, and/or the right-side and/or left-side sections) (e.g., an edge (e.g., left edge and/or right edge) and/or a portion) (e.g., a region/edge of the representation of the media to which the media processing style is applied) is displayed using the first media processing style. Displaying the sixth portion of the representation of the media using the first media processing style provides the user with feedback about how the first media processing style can affect a second region of the representation, which provides improved visual feedback.
In some implementations, before the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation is detected, and while the first portion of the representation and the second portion of the representation are displayed using the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd), a seventh portion of the representation of the media (e.g., 660a, 660b, and/or the right-side and/or left-side sections) (e.g., an edge (e.g., left edge and/or right edge) and/or a portion) is displayed without using the first media processing style (e.g., without any media processing style (e.g., the first media processing style, the second media processing style, the third media processing style, etc.) being applied to that region/edge of the representation of the media). In some implementations, in response to detecting the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, the computer system displays an animation of the seventh portion of the representation of the media (e.g., 660a, 660b, and/or the right-side and/or left-side sections) transitioning from being displayed without using the first media processing style to being displayed using the first media processing style (e.g., a fade-in of the first media processing style being applied to the representation of the media). Displaying the second region of the representation of the media transitioning from being displayed without using the first media processing style to being displayed using the first media processing style in response to detecting the input directed to the representation provides the user with feedback about how the first media processing style can affect the second region of the representation (e.g., at a time when the user is more likely to be looking at how the first media processing style can affect the second region of the representation), which provides improved visual feedback.
In some implementations, prior to displaying the style selection user interface in which the first portion of the representation and the second portion of the representation are displayed using the first media processing style applied to the visual content of the media, the computer system displays a user interface object (e.g., 844a) for enabling a second media processing style selection mode. In some implementations, while displaying the user interface object (e.g., 844a) for enabling the second media processing style selection mode, the computer system detects an input (e.g., 850a) (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) directed to the user interface object for enabling the second media processing style selection mode (and/or, in some implementations, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)). In some embodiments, in response to detecting the input directed to the user interface object for enabling the second media processing style selection mode, the computer system displays a respective user interface that includes concurrently displaying a representation (e.g., 878a) of previously captured media (e.g., sample media, media not captured by the computer system, and/or templates) to which the first media processing style (e.g., 634a) is applied and a representation (e.g., 878b) of previously captured media (e.g., sample media, media not captured by the computer system, and/or templates) to which the second media processing style (e.g., 634b) is applied. In some implementations, while displaying the respective user interface, the computer system detects an input (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) directed to the respective user interface (and/or, in some implementations, detects a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)). In some implementations, in response to detecting the input directed to the respective user interface, and in accordance with a determination that the input directed to the respective user interface corresponds to a selection of an option to use the first media processing style, the computer system displays, in response to detecting a request to display the media user interface, a user interface that includes a representation of media displayed using the first media processing style (e.g., without using the second media processing style). In some implementations, in response to detecting the input directed to the respective user interface, and in accordance with a determination that the input directed to the respective user interface corresponds to a selection of an option to use the second media processing style, the computer system displays, in response to detecting a request to display the media user interface, a user interface that includes a representation of media displayed using the second media processing style (e.g., without using the first media processing style).
In some embodiments, the style selection user interface includes a first style mode user interface object (e.g., 602b and/or 688b) that, when selected, causes the computer system to toggle (e.g., switch between) displaying and ceasing to display the style selection user interface (e.g., a user interface object for displaying the style selection user interface). In some implementations, the first style mode user interface object is displayed concurrently with one or more camera settings user interface objects (e.g., 688) (e.g., user interface objects for controlling camera settings, which are displayed based on the camera capture mode (e.g., the settings for each camera capture mode) in which the one or more cameras are configured to capture media). In some embodiments, prior to displaying the user interface object for displaying the style selection user interface, the computer system detects an input (e.g., 650w) (e.g., a swipe input, tap input, and/or drag input) directed to the respective user interface, and in response to detecting the input directed to the respective user interface, the computer system displays the user interface object for displaying the style selection user interface (e.g., which was not previously displayed) and the one or more camera settings affordances (e.g., which were not previously displayed). In some embodiments, in response to detecting selection of a respective camera settings user interface object of the one or more camera settings user interface objects, the computer system displays one or more controls for adjusting the camera settings (e.g., a control that, when selected, causes the computer system to turn on a mode (e.g., a flash mode, night mode, animated-image capture mode, and/or timer mode), a control that, when selected, causes the computer system to turn off a mode, a control that, when selected, causes a capture setting to be adjusted in value (e.g., an exposure value and/or a time value for a timer mode), and/or a control for changing one or more filters and/or zoom levels for capturing and/or displaying media). Displaying the style selection user interface such that the first style mode user interface object is displayed concurrently with the one or more camera settings user interface objects allows the user to access a control that can cause the style selection user interface to be displayed or cease to be displayed while also allowing the user to access controls for controlling one or more camera settings, which reduces the number of inputs needed to access the respective controls relative to when they are not displayed concurrently.
In some embodiments, the style selection user interface includes a second style mode user interface object (e.g., 602b and/or 688b) that, when selected, causes the computer system to toggle (e.g., switch between) displaying and ceasing to display the style selection user interface (e.g., a user interface object for displaying the style selection user interface). In some embodiments, while the first portion of the representation (e.g., 630) is displayed using the second media processing style while the second portion of the representation and the third portion of the representation are displayed using the first media processing style, the computer system displays the second style mode user interface object (e.g., 602b and/or 688b) with a third appearance (e.g., a color, a size, and/or a first boundary (e.g., a line (e.g., drawn in a clockwise and/or counterclockwise direction) that surrounds and/or encircles a portion (e.g., 0% to 100%) of the second style mode user interface object)) (e.g., as discussed above with respect to fig. 6L). In some embodiments, after displaying the first portion of the representation using the second media processing style while displaying the second portion of the representation and the third portion of the representation using the first media processing style, and in response to detecting the second portion of the input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation, the computer system changes the second style mode user interface object from being displayed with the third appearance to being displayed with a fourth appearance that is different from the third appearance (e.g., a different color, size, and/or boundary (e.g., a line (e.g., drawn in a clockwise and/or counterclockwise direction) that surrounds and/or encircles a portion (e.g., 0% to 100%) of the second style mode user interface object)) (e.g., and/or displays an animation of the second style mode user interface object changing) (e.g., while displaying the first portion of the representation and the third portion of the representation using the second media processing style) (e.g., as discussed above with respect to fig. 6L). Changing the second style mode user interface object from being displayed with the third appearance to being displayed with the fourth appearance provides the user with visual feedback about how switching the media processing style being applied to the representation affects the display of the representation, which provides improved visual feedback.
In some embodiments, as part of changing the second style mode user interface object (e.g., 602b) from being displayed with the third appearance to being displayed with the fourth appearance, and in accordance with a determination that the value of a first parameter (e.g., 626a and/or 626b) of the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd) is different from the value of the first parameter of the second media processing style, the computer system changes the display of a first visual aspect (e.g., the color, chroma, and/or hue) of at least a portion of the second style mode user interface object (e.g., 602b) (and, in some embodiments, the portion includes a boundary of the second style mode user interface object (e.g., a line surrounding the second style mode user interface object)). In some implementations, in accordance with a determination that the value of the first parameter of the first media processing style is not different from the value of the first parameter of the second media processing style, the computer system does not change the first visual aspect of the second style mode user interface object. In some embodiments, while the first portion of the representation is displayed using the second media processing style while the second portion of the representation and the third portion of the representation are displayed using the first media processing style, the first visual aspect corresponds to (e.g., is and/or is represented by) a first color, and after the first portion of the representation is displayed using the second media processing style while the second portion of the representation is displayed using the first media processing style, and in response to detecting the second portion of the input directed to the representation, the first visual aspect corresponds to (e.g., is and/or is represented by) a second color that is different from the first color (e.g., as described above with respect to method 1000). In some embodiments, as part of changing the second style mode user interface object (e.g., 602b) from being displayed with the third appearance to being displayed with the fourth appearance, and in accordance with a determination that the value of a second parameter (e.g., 626a and/or 626b) of the first media processing style is different from the value of the second parameter of the second media processing style, the computer system changes the display of a second visual aspect (e.g., the size, length, and/or fill of a line (e.g., a boundary line surrounding, adjacent to, and/or encircling the second style mode user interface object) (e.g., the line of 602b as discussed above with respect to fig. 6L)) of the second style mode user interface object. In some implementations, the second visual aspect is different from the first visual aspect (e.g., the first visual aspect of the second style mode user interface object is not changed based on a determination that the value of the second parameter of the first media processing style is different from the value of the second parameter of the second media processing style). In some implementations, in accordance with a determination that the value of the second parameter of the first media processing style is not different from the value of the second parameter of the second media processing style, the computer system does not change the second visual aspect of the second style mode user interface object.
In some embodiments, while the first portion of the representation is displayed using the second media processing style while the second portion of the representation and the third portion of the representation are displayed using the first media processing style, the second visual aspect corresponds to (e.g., is and/or is represented by) a first length, and after the first portion of the representation is displayed using the second media processing style while the second portion of the representation and the third portion of the representation are displayed using the first media processing style, and in response to detecting the second portion of the input directed to the representation, the second visual aspect corresponds to (e.g., is and/or is represented by) a second length that is different from the first length (e.g., as described with respect to method 1000). Changing the display of a particular visual aspect of the second style mode user interface object based on whether the value of a particular parameter has changed provides the user with visual feedback about which parameters of the media processing style have been changed, which provides improved visual feedback.
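A hedged SwiftUI sketch of a style-mode glyph whose color encodes the first parameter's deviation and whose surrounding line length encodes the second parameter's deviation is shown below; the parameter names and visual mappings are illustrative assumptions.

```swift
import SwiftUI

struct StyleModeGlyph: View {
    let toneDelta: Double    // first parameter's deviation from default, -1...1 (assumed name)
    let warmthDelta: Double  // second parameter's deviation from default, -1...1 (assumed name)

    var body: some View {
        ZStack {
            Circle()
                // The arc length (first visual aspect here: a line around
                // the object) tracks the magnitude of the second parameter.
                .trim(from: 0, to: CGFloat(abs(warmthDelta)))
                .stroke(lineWidth: 2)
            Image(systemName: "camera.filters")
                // The glyph's hue tracks the first parameter's deviation.
                .foregroundColor(toneDelta == 0 ? .primary : (toneDelta > 0 ? .orange : .blue))
        }
        .frame(width: 28, height: 28)
    }
}
```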
In some embodiments, the computer system is configured to store (e.g., and/or capture) media in a first file format (e.g., a compressed format such as JPEG and/or HEIC) (e.g., when a raw-capture indicator such as 602b is displayed in an inactive state). In some embodiments, while the computer system is configured to capture and store media in the first file format, and while the second style mode user interface object (e.g., 602b) is displayed in an active state (e.g., an enabled state (e.g., a state in which the computer system performs an action in response to detecting one or more inputs directed to the user interface object)), the computer system detects a request (e.g., 650v) to configure the computer system to capture and store media in a second file format (e.g., a raw format) that is different from the first file format. In some embodiments, in response to detecting the request to configure the computer system to capture and store media in the second file format, the computer system ceases to display the second style mode user interface object in the active state (e.g., as discussed above with respect to fig. 6V-6Y) (e.g., ceases to display the second style mode user interface object and/or displays the second style mode user interface object in an inactive state (e.g., a disabled state (e.g., a state in which the computer system does not perform an action in response to detecting one or more inputs directed to the user interface object))). In some embodiments, in response to detecting the request, the computer system configures itself to capture and store media in the second file format. In some embodiments, as part of detecting the request to configure the computer system to capture and store media in the second file format, the computer system detects an input (e.g., a tap input, a press-and-hold input, and/or a swipe input) directed to a first selectable user interface object that controls the file format used for capturing media with the one or more cameras. In some embodiments, in response to detecting the request, the computer system changes the first selectable user interface object that controls the file format used for capturing media with the one or more cameras from being displayed in an inactive state to being displayed in an active state. Ceasing to display the second style mode user interface object in the active state in response to detecting the request to configure the computer system to capture and store media in the second file format provides the user with visual feedback that no media processing style is applied, and that media processing styles are not available, when the computer system is configured to capture and store media in the second file format, which provides improved visual feedback.
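The file-format interaction above could be modeled as follows in Swift, where enabling the second (e.g., raw) format deactivates the style-mode control; the property names are hypothetical.

```swift
struct CaptureConfiguration {
    enum FileFormat { case compressed, raw }   // e.g., JPEG/HEIC vs. a raw format

    var isStyleModeButtonActive = true

    var fileFormat: FileFormat = .compressed {
        didSet {
            // Media processing styles are unavailable in the second file
            // format, so the style-mode control leaves its active state.
            if fileFormat == .raw { isStyleModeButtonActive = false }
        }
    }
}
```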
It is noted that the details of the process described above with respect to method 900 (e.g., fig. 9) also apply in a similar manner to the methods described herein. For example, method 900 optionally includes one or more of the features of the various methods described above with reference to method 1000. For example, method 900 may be used to select one or more media processing styles and method 1000 may be used to edit media selected using method 900. For the sake of brevity, these details are not repeated hereinafter.
Fig. 10A-10B are flowcharts illustrating methods of editing media processing styles using a computer system, according to some embodiments. The method 1000 is performed at a computer system (e.g., 100, 300, 500, and/or 600) (e.g., a smartphone, desktop computer, laptop computer, and/or tablet computer) that is in communication with a display generation component (e.g., a display controller and/or a touch-sensitive display system), one or more cameras (e.g., a dual-camera, triple-camera, and/or quad-camera system) that include a first camera (e.g., a front-facing camera and/or a rear-facing camera) (e.g., on the same side or different sides of the computer system), and one or more input devices (e.g., a touch-sensitive surface and/or one or more cameras). Some operations in method 1000 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1000 provides an intuitive way to edit media processing styles using a computer system. The method reduces the cognitive burden on a user when editing media processing styles using a computer system, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to edit media processing styles faster and more efficiently conserves power and increases the time between battery charges.
The computer system displays (1002), via the display generation component, a representation of media (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) (e.g., photo media and/or video media) (e.g., live media and/or a live preview (e.g., media corresponding to a representation of the field of view (e.g., a current field of view) of one or more cameras that has not yet been captured/stored, e.g., in response to detecting a request to capture media (e.g., a selection of a shutter affordance (e.g., a user interface object)))) (e.g., previously captured media (e.g., media corresponding to a representation of the field of view (e.g., a previous field of view) of the one or more cameras that has been captured), media that has been saved and can be accessed by a user at a later time, and/or a thumbnail representation of media in a media viewer user interface), where the representation of the media is displayed using a first media processing style (e.g., 634a-634d, 634aa, and/or 634dd) applied to the visual content of the media (e.g., in a style selection user interface and/or a user interface in which a media processing style can be viewed and edited). In some implementations, the first media processing style is one of a plurality of media processing styles. In some embodiments, each of the plurality of styles has the same set of parameters. In some implementations, the set of parameters is a set of visual characteristics (e.g., color characteristics (e.g., color temperature, hue, brightness, saturation, chroma, coldness, and/or harmony) and/or depth parameters) (e.g., where no second style is applied to the media).
While displaying the representation of the media using the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd), the computer system concurrently displays (1004), via the display generation component, a plurality of selectable user interface objects for the first media processing style (e.g., for editing its parameters (e.g., visual characteristics (e.g., color temperature, hue, brightness, saturation, chroma, coldness, and/or harmony) and/or depth parameters)), including: a first selectable user interface object (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for editing a first parameter (e.g., as indicated by 626a1a, 626a2a, 626b1a, 626b2a, 626c1a, 626c2a, 626d1a, and/or 626d2a) of the first media processing style (e.g., a visual characteristic (e.g., a color characteristic (e.g., color temperature, hue, brightness, saturation, chroma, coldness, and/or harmony) and/or a depth parameter)) (1006), where the first selectable user interface object is displayed together with (e.g., concurrently with and/or including) a representation of the current value (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) of the first parameter of the first media processing style (e.g., a number (e.g., 0 to 100), a percentage (e.g., 0% to 100%), an indication of a number on a control (e.g., a slider and/or rotatable knob) (e.g., a slider bar displayed at a particular location on the slider), one or more characters indicating the value, a compressed control, and/or a portion of a control); and a second selectable user interface object (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for editing a second parameter (e.g., as indicated by 626a1a, 626a2a, 626b1a, 626b2a, 626c1a, 626c2a, 626d1a, and/or 626d2a) of the first media processing style (e.g., a visual characteristic (e.g., a color characteristic (e.g., color temperature, hue, brightness, saturation, chroma, coldness, and/or harmony) and/or a depth parameter)), where the second selectable user interface object is displayed together with (e.g., concurrently with and/or including) a representation of the current value (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) of the second parameter of the first media processing style (e.g., a number, a percentage, an indication of a number on a control, one or more characters indicating the value, a compressed control, and/or a portion of a control), where the first parameter is different from the second parameter. In some embodiments, in accordance with a determination that the first value corresponds to a first amount of the parameter, the first value is displayed to indicate the first amount of the first parameter. In some embodiments, in accordance with a determination that the first value corresponds to a second amount of the parameter, the first value is displayed to indicate the second amount of the first parameter, where the first amount is different from the second amount.
In some embodiments, the first value of the second parameter is different from the first value of the first parameter. In some embodiments, the first selectable user interface object is different from the second selectable user interface object. In some embodiments, the plurality of selectable user interface objects is not displayed until a request to edit how the first media processing style is applied to the visual content is detected. In some implementations, the plurality of selectable user interface objects for editing parameters of the first media processing style is displayed adjacent to one another (e.g., next to each other on a line) (e.g., aligned with each other, in a row).
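A minimal SwiftUI sketch of the row of selectable objects, each displayed together with its parameter's current value and expandable into a control when tapped, might look like this; the parameter names ("Tone", "Warmth") and ranges are assumptions for illustration.

```swift
import SwiftUI

struct StyleParameterRow: View {
    @Binding var tone: Double       // first parameter's current value, e.g. -100...100
    @Binding var warmth: Double     // second parameter's current value, e.g. -100...100
    @Binding var expanded: String?  // which parameter's control is expanded, if any

    var body: some View {
        HStack(spacing: 24) {
            parameterButton(name: "Tone", value: tone)
            parameterButton(name: "Warmth", value: warmth)
        }
    }

    private func parameterButton(name: String, value: Double) -> some View {
        Button {
            expanded = name   // tapping expands this parameter's control
        } label: {
            VStack {
                Text(name).font(.caption)
                Text("\(Int(value))").font(.caption.bold())   // current value shown with the object
            }
        }
    }
}
```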
While displaying the plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for the first media processing style (e.g., and while displaying the representation of the media using the first media processing style) (e.g., and while operating in a particular style mode (e.g., a media processing style selection mode)), the computer system detects (1010), via the one or more input devices, an input (e.g., 750a, 750d, 750g, 750k, 750n, 750r, and/or 750t) (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) directed to the plurality of selectable user interface objects for the first media processing style (and/or, in some embodiments, detects a non-tap input/gesture (e.g., a movement input/gesture (e.g., one that causes a change based on a rate at the end of the input/gesture and/or based on movement during the input/gesture, such as a drag input/gesture) and/or a press-and-hold input/gesture)).
In response to (1012) detecting the input (e.g., 750a, 750d, 750g, 750k, 750n, 750r, and/or 750t) (e.g., a tap input (e.g., a tap gesture, such as a single-tap or double-tap input)) directed to the plurality of selectable user interface objects for the first media processing style (and/or, in some embodiments, in response to detecting a non-tap input/gesture (e.g., a movement input/gesture (e.g., one that causes a change based on a rate at the end of the input/gesture or based on movement during the input/gesture, such as a drag input/gesture) and/or a press-and-hold input/gesture) directed to the plurality of selectable user interface objects for editing parameters of the first media processing style) (e.g., and while displaying the representation of the media using the first media processing style, while continuing to operate in a particular camera mode, and/or while operating in a particular style mode (e.g., a media processing style selection mode)), and in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, the computer system displays (1014), via the display generation component, a first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) (e.g., a slider and/or rotatable knob) (e.g., an expanded version of a previously displayed compressed control) for adjusting the current value of the first parameter (e.g., as indicated by 626a1a, 626a2a, 626b1a, 626b2a, 626c1a, 626c2a, 626d1a, and/or 626d2a) (e.g., without displaying a control for adjusting the current value of the second parameter) (e.g., concurrently with the representation of the media to which the first media processing style is applied) (e.g., while the input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected). In some embodiments, in response to detecting an input (e.g., a movement input) directed to the first control for adjusting the current value of the first parameter (and/or, in some embodiments, in response to detecting a non-movement input (e.g., a tap input, a rotational drag gesture, a press-and-hold gesture, and/or a voice input) directed to the first control), the computer system updates the representation of the media and/or a portion of the representation of the media (e.g., to reflect the current value of the first parameter) and/or updates the current value of the first parameter.
In response to (1012) detecting the input (e.g., 750a, 750d, 750g, 750k, 750n, 750r, and/or 750t) (e.g., a tap input (e.g., a single-tap or double-tap input)) directed to the plurality of selectable user interface objects for the first media processing style (and/or, in some embodiments, in response to detecting a non-tap input/gesture (e.g., a movement input/gesture (e.g., one that causes a change based on a rate at the end of the input/gesture or based on movement during the input/gesture, such as a drag input/gesture) and/or a press-and-hold input/gesture) directed to the plurality of selectable user interface objects for editing parameters of the first media processing style) (e.g., and while displaying the representation of the media using the first media processing style, while continuing to operate in a particular camera mode, and/or while operating in a particular style mode (e.g., a media processing style selection mode)), and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, the computer system displays, via the display generation component, a second control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) (e.g., a slider and/or rotatable knob) (e.g., an expanded version of a previously displayed compressed control) for adjusting the current value of the second parameter (e.g., as indicated by 626a1a, 626a2a, 626b1a, 626b2a, 626c1a, 626c2a, 626d1a, and/or 626d2a) (e.g., without displaying a control for adjusting the current value of the first parameter) (e.g., concurrently with the representation of the media to which the first media processing style is applied) (e.g., while the input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected).
In some embodiments, in response to detecting an input (e.g., a movement input) directed to the second control for adjusting the current value of the second parameter (and/or, in some embodiments, in response to detecting a non-movement input (e.g., a tap input, a rotational drag gesture, and/or a press-and-hold gesture) directed to the second control), the computer system updates the representation of the media and/or a portion of the representation of the media (e.g., to reflect the current value of the second parameter) and/or updates the current value of the second parameter. In some embodiments, in response to detecting an input directed to the second control for adjusting the current value of the second parameter, the computer system does not update the representation of the media and/or a portion of the representation of the media to reflect the current value of the first parameter and/or does not update the current value of the first parameter. Displaying a respective control for adjusting the current value of a corresponding parameter in accordance with a determination that the input is directed to a respective user interface object for editing that parameter of the first media processing style allows the user to access the control that is relevant to the input, without cluttering the user interface with controls for other parameters, while the representation to which the first media processing style is applied remains concurrently displayed, which provides additional control options without cluttering the user interface and provides improved visual feedback.
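For illustration, this dispatch can be sketched in Swift. The following is a minimal model only; the patent does not specify an implementation, and all type and member names (MediaStyle, StyleParameter, StyleEditor, and so on) are hypothetical. A tap directed at one selectable object determines which control is displayed, and subsequent movement input adjusts only that parameter:

```swift
import Foundation

// Hypothetical model of a media processing style and its editable parameters.
struct StyleParameter {
    let name: String        // e.g., "Tone" or "Warmth"
    var currentValue: Int   // e.g., within -100...100
    let defaultValue: Int
}

struct MediaStyle {
    var name: String        // e.g., "Standard" or "Vivid"
    var parameters: [StyleParameter]
}

// Which expanded control, if any, is currently displayed.
enum ActiveControl {
    case none
    case slider(parameterIndex: Int)
}

final class StyleEditor {
    var style: MediaStyle
    var activeControl: ActiveControl = .none

    init(style: MediaStyle) { self.style = style }

    // In accordance with a determination of which selectable object the
    // input is directed to, display the control for that parameter only.
    func handleTap(onParameterAt index: Int) {
        guard style.parameters.indices.contains(index) else { return }
        activeControl = .slider(parameterIndex: index)
    }

    // Movement input directed to the displayed control updates the current
    // value of that parameter only; the other parameter is left unchanged.
    func handleDrag(toValue newValue: Int) {
        guard case let .slider(index) = activeControl else { return }
        style.parameters[index].currentValue = max(-100, min(100, newValue))
    }
}
```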
In some embodiments, as part of displaying the first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2), the computer system displays (e.g., concurrently with the first control and/or as part of the first control) a representation (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) of the current value of the first parameter of the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd). In some implementations, as part of displaying the second control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2), the computer system displays (e.g., concurrently with the second control and/or as part of the second control) a representation (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) of the current value of the second parameter of the first media processing style. Displaying, in accordance with a determination that the input is directed to a respective user interface object for editing a respective parameter, a respective control with a representation of the current value of the respective parameter provides visual feedback to the user regarding the current value of the respective parameter and how the user can adjust it to change how the media processing style is applied to the visual content, which provides improved visual feedback.
In some implementations, while a representation of the media (e.g., 630, 676a, 676b, 676c, 680c, and/or 680d) is displayed using the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd) and the plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for the first media processing style is displayed (e.g., and while operating in a particular style mode (e.g., a media processing style selection mode)), the computer system detects a request (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) to display the representation of the media using a second media processing style (e.g., 634a-634d, 634aa, and/or 634dd) applied to the visual content of the media (e.g., a style that differs from the first style in one or more visual characteristics). In some implementations, in response to detecting the request (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) to display the representation of the media using the second media processing style applied to the visual content of the media (e.g., and while operating in a particular style mode (e.g., a media processing style mode)), the computer system ceases to display the plurality of selectable user interface objects for the first media processing style. In some implementations, in response to detecting the request, the computer system displays the representation of the media using the second media processing style applied to the visual content of the media. In some implementations, in response to detecting the request, the computer system displays the representation of the media using the second media processing style applied to the visual content of the media while displaying a plurality of selectable user interface objects for the second media processing style. In some implementations, the plurality of selectable user interface objects for the second media processing style includes a first selectable user interface object for editing a third parameter of the second media processing style, displayed with a representation of a current value of the third parameter of the second media processing style, and a second selectable user interface object for editing a fourth parameter of the second media processing style, displayed with a representation of a current value of the fourth parameter of the second media processing style. In some embodiments, the third parameter is different from the fourth parameter (e.g., is a different type of parameter). In some embodiments, the first parameter is the same as the third parameter (e.g., is the same type of parameter). In some embodiments, the second parameter is the same as the fourth parameter (e.g., is the same type of parameter). In some embodiments, in response to detecting input directed to the plurality of selectable user interface objects for the second media processing style, and in accordance with a determination that the input is directed to the first selectable user interface object for editing the third parameter of the second media processing style, the computer system displays, via the display generating component, a control for adjusting the current value of the third parameter.
In some embodiments, in response to detecting input directed to the plurality of selectable user interface objects for the second media processing style, the computer system displays, via the display generating component, a control for adjusting a current value of a fourth parameter in accordance with determining that the input is directed to the second selectable user interface object for editing the fourth parameter of the second media processing style. In response to detecting a request to display a representation of the media using a second media processing style applied to the visual content of the media, ceasing to display the plurality of selectable user interface objects for the first media processing style allows the computer system to provide related user interface objects related to the media processing style being applied to the representation of the media without providing user interface objects unrelated to the media processing style being applied to the representation of the media, which performs the operation when a set of conditions has been met without additional user input and provides improved visual feedback.
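The style switch described above (the outgoing style's parameter objects cease to be displayed, and the incoming style's objects are shown with their own current values) can be sketched as follows; this reuses the hypothetical MediaStyle and StyleParameter types from the earlier sketch and is illustrative only:

```swift
// When the selected style changes, the outgoing style's parameter objects
// cease to be displayed and the incoming style's objects (with their own
// current values) are shown instead.
final class StyleCarousel {
    private(set) var styles: [MediaStyle]
    private(set) var selectedIndex = 0

    // The parameter objects that should currently be displayed.
    var visibleParameterObjects: [StyleParameter] { styles[selectedIndex].parameters }

    init(styles: [MediaStyle]) {
        precondition(!styles.isEmpty)
        self.styles = styles
    }

    func selectStyle(at index: Int) {
        guard styles.indices.contains(index), index != selectedIndex else { return }
        selectedIndex = index
        // The representation of the media would now be re-rendered with
        // styles[selectedIndex] applied, alongside its parameter objects.
    }
}
```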
In some embodiments, as part of displaying the first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for adjusting the current value of the first parameter (e.g., in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style and in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style), the computer system expands (and/or enlarges) the first selectable user interface object (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for editing the first parameter of the first media processing style (e.g., to display the first control for adjusting the current value of the first parameter) (e.g., expands in line, such that the first control for adjusting the current value of the first parameter occupies the same area (and/or a portion of the same area) occupied by the first selectable user interface object for editing the first parameter of the first media processing style before the input directed to the plurality of selectable user interface objects for the first media processing style was detected). In some embodiments, as part of displaying the second control for adjusting the current value of the second parameter (e.g., in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style), the computer system expands the second selectable user interface object for editing the second parameter of the first media processing style (e.g., to display the second control for adjusting the current value of the second parameter) (e.g., expands in line, such that the second control for adjusting the current value of the second parameter occupies the same area (and/or a portion of the same area) occupied by the second selectable user interface object for editing the second parameter of the first media processing style before the input directed to the plurality of selectable user interface objects for the first media processing style was detected). In some implementations, the first control for adjusting the current value of the first parameter is associated with (e.g., is a larger version of, is larger than, includes a portion of, and/or includes one or more characteristics of) the first selectable user interface object for editing the first parameter of the first media processing style. In some implementations, the second control for adjusting the current value of the second parameter is associated with (e.g., is a larger version of, is larger than, includes a portion of, and/or includes one or more characteristics of) the second selectable user interface object for editing the second parameter of the first media processing style.
Expanding the first selectable user interface object for editing the first parameter of the first media processing style as part of displaying, via the display generation component, the first control for adjusting the current value of the first parameter in response to the input provides visual feedback to the user that the first selectable user interface object corresponds to the first control, which reduces confusion for the user while also providing an uncluttered user interface and improved visual feedback.
In some embodiments, while the first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for adjusting the current value of the first parameter is displayed via the display generation component (e.g., and while input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected), the computer system detects an end of the input directed to the plurality of selectable user interface objects for the first media processing style. In some implementations, in response to detecting the end (e.g., lift-off) of the input (e.g., 750a, 750d, 750g, 750k, 750n, and/or 750t) directed to the plurality of selectable user interface objects for the first media processing style, the computer system reduces (e.g., shrinks) a size of the first control for adjusting the current value of the first parameter (e.g., to display the first selectable user interface object for editing the first parameter of the first media processing style, displayed with a representation of the current value of the first parameter of the first media processing style) (e.g., and displays a shrinking animation). In some embodiments, after collapsing the first control for adjusting the current value of the first parameter, and/or in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system redisplays the first selectable user interface object for editing the first parameter and the second selectable user interface object for editing the second parameter. In some embodiments, after collapsing the first control for adjusting the current value of the first parameter, and/or in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system redisplays the first selectable user interface object for editing the first parameter and displays the representation of the current value of the first parameter at a different location on the first selectable user interface object for editing the first parameter than the location at which it was displayed prior to detecting the input. In some embodiments, while displaying, via the display generating component, the second control for adjusting the current value of the second parameter (e.g., and while input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected), the computer system detects an end of the input directed to the plurality of selectable user interface objects for the first media processing style; and in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system reduces a size of the second control for adjusting the current value of the second parameter (e.g., to display the second selectable user interface object for editing the second parameter of the first media processing style, displayed with a representation of the current value of the second parameter of the first media processing style).
In some embodiments, after reducing the size of the second control for adjusting the current value of the second parameter, and/or in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system redisplays the first selectable user interface object for editing the first parameter and the second selectable user interface object for editing the second parameter. In some embodiments, after reducing the size of the second control for adjusting the current value of the second parameter, and/or in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system redisplays the second selectable user interface object for editing the second parameter and displays the representation of the current value of the second parameter at a different location on the second selectable user interface object than the location at which it was displayed prior to detecting the input. Reducing the size of the first control for adjusting the current value of the first parameter in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style provides visual feedback to the user that the first selectable user interface object for editing the first parameter of the first media processing style corresponds to the first control for adjusting the current value of the first parameter, which reduces confusion for the user while also providing an uncluttered user interface and improved visual feedback.
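A rough SwiftUI sketch of this expand-on-press and collapse-on-lift behavior follows. It is illustrative only and makes several assumptions (widths, a plain Slider, a drag gesture as the press detector); @GestureState resets automatically when the touch ends, which models the size reduction at the end of the input:

```swift
import SwiftUI

// A parameter "chip" that expands in line into a slider while a touch is
// held, and shrinks back on lift-off.
struct ExpandingParameterControl: View {
    let title: String              // e.g., "Tone"
    @Binding var value: Double     // e.g., -100...100
    @GestureState private var isPressed = false

    var body: some View {
        VStack(spacing: 4) {
            Text("\(title): \(Int(value))").font(.caption)
            Slider(value: $value, in: -100...100)
                // The expanded control occupies the same region as the chip,
                // just wider while the input continues to be detected.
                .frame(width: isPressed ? 280 : 120)
                .animation(.easeInOut(duration: 0.2), value: isPressed)
        }
        .gesture(
            DragGesture(minimumDistance: 0)
                .updating($isPressed) { _, state, _ in state = true }
        )
    }
}
```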
In some implementations, prior to detecting the input (e.g., 750a, 750d, 750g, 750k, 750n, and/or 750t) directed to the plurality of selectable user interface objects for the first media processing style, the current value of the first parameter is a first value (e.g., as represented by 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b). In some embodiments, while the first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for adjusting the current value of the first parameter is displayed via the display generation component (e.g., and while input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected), the computer system detects an end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style. In some implementations, in response to detecting the end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system displays a representation (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) of the current value of the first parameter. In some embodiments, the current value is a second value different from the first value. In some embodiments, the second value is the same as the first value. In some embodiments, while the first control for adjusting the current value of the first parameter is displayed via the display generating component, and in response to detecting the end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system displays a representation of the current value of the second parameter, where the current value of the second parameter after detecting the input directed to the plurality of selectable user interface objects for the first media processing style is the same as the current value before detecting the input (e.g., the current value of the second parameter does not change). In some embodiments, prior to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, the current value of the second parameter is a third value. In some embodiments, while displaying, via the display generating component, the second control for adjusting the current value of the second parameter (e.g., and while input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected), the computer system detects an end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style; and in response to detecting the end (e.g., lift-off) of the input, the computer system displays a representation of the current value of the second parameter. In some implementations, the current value is a fourth value that is different from the third value (and/or the representation of the current value of the first parameter is displayed with the same value that the current value of the first parameter had before the input directed to the plurality of selectable user interface objects for the first media processing style was detected).
Displaying, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, a representation of the current value of the first parameter (where the current value is a second value different from the first value) provides visual feedback to the user that the current value of the first parameter has been adjusted by the input, which provides improved visual feedback.
In some embodiments, the first selectable user interface object (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for editing the first parameter is displayed with a first representation (e.g., 626a1c, 626a2c, 626b1c, 626b2c, 626c1c, 626d1c, and/or 626d2c) of a first range of values (e.g., -100 to 100) of the first parameter (e.g., and, in some embodiments, the representation of the current value of the first parameter of the first media processing style is displayed on, adjacent to, and/or included in the representation of the first range of values), the first representation of the first range of values having a first distance between a first point in the first representation of the first range of values and a second point in the first representation of the first range of values, the first point representing a first value and the second point representing a second value (e.g., as discussed above with respect to fig. 7A-7B). In some embodiments, as part of displaying the first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2), the computer system displays a second representation (e.g., 626a1c, 626a2c, 626b1c, 626b2c, 626c1c, 626c2c, 626d1c, and/or 626d2c) of the range of values having a second distance (e.g., on the display generation component) between a first point in the second representation of the range of values and a second point in the second representation of the range of values, the first point representing the first value and the second point representing the second value, the second distance being greater than the first distance (e.g., on the display generation component) (e.g., as discussed above with respect to fig. 7A-7B). In some embodiments, the second selectable user interface object for editing the second parameter is displayed with a third representation of a first range of values (e.g., -100 to 100) of the second parameter (e.g., and, in some embodiments, a representation of the current value of the second parameter of the first media processing style is displayed on, adjacent to, and/or included in the representation of the first range of values), the first range of values of the second parameter having a third distance between a first point in the third representation of the first range of values of the second parameter and a second point in the third representation of the first range of values of the second parameter, the first point representing a third value and the second point representing a fourth value. In some embodiments, as part of displaying the second control, the computer system displays a fourth representation of a second range of values of the second parameter, the second range of values having a fourth distance between a first point in the fourth representation of the second range of values of the second parameter and a second point in the fourth representation of the second range of values of the second parameter, the first point representing the third value and the second point representing the fourth value, the fourth distance being greater than the third distance. In some embodiments, the second selectable user interface object for editing the second parameter includes a representation of a second range of values (e.g., -100 to 100) of the second parameter (e.g., and, in some embodiments, a representation of the current value of the second parameter of the first media processing style is displayed on, adjacent to, and/or included in the representation of the second range of values).
In some embodiments, as part of displaying the second control, the computer system displays a representation of a range of values (e.g., 30 to 60) that is a subset of the first range of values of the second parameter (e.g., and ceases to display the representation of the first range of values of the second parameter). In some embodiments, while the first control for adjusting the current value of the first parameter is displayed via the display generating component and input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected (and/or while movement of the input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected), the computer system increases a first size of the first control for adjusting the current value of the first parameter (e.g., enlarges the first control so that one or more portions of the respective control for adjusting the current value of the respective parameter are displayed at an enlarged size, larger than the size at which the control was previously displayed) (e.g., on the user interface). In some implementations, while the second control for adjusting the current value of the second parameter is displayed via the display generating component and input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected (and/or while movement of the input directed to the plurality of selectable user interface objects for the first media processing style continues to be detected), the computer system increases a size of the second control for adjusting the current value of the second parameter (e.g., on the user interface). Displaying, as part of displaying the first control, a representation of a range of values having a second distance, greater than the first distance, between a first point representing the first value and a second point representing the second value provides visual feedback to the user that the first control for adjusting the current value of the first parameter can be manipulated via input to change the current value of the first parameter, and gives the user the ability to focus on and/or more easily select a value between the point representing the first value and the point representing the second value, which provides additional control options without cluttering the user interface and provides improved visual feedback.
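The geometry behind this can be illustrated with a short Swift sketch (names and widths are assumptions): on the same value range, the distance between the points representing any two values grows with the track width, so the expanded control makes fine selection easier; showing a subset of the range (e.g., 30 to 60) would magnify it further.

```swift
import CoreGraphics

// Maps a value to an x-position on a track of a given width.
struct RangeRepresentation {
    var range: ClosedRange<Double>   // e.g., -100...100
    var trackWidth: CGFloat          // points available to draw the range

    // x-position of the point representing `value` on the track.
    func position(of value: Double) -> CGFloat {
        let fraction = (value - range.lowerBound) / (range.upperBound - range.lowerBound)
        return CGFloat(fraction) * trackWidth
    }
}

let collapsed = RangeRepresentation(range: -100...100, trackWidth: 120)
let expanded  = RangeRepresentation(range: -100...100, trackWidth: 280)

// The same pair of values is farther apart in the expanded control:
let d1 = collapsed.position(of: 50) - collapsed.position(of: 0)  // 30 points
let d2 = expanded.position(of: 50) - expanded.position(of: 0)    // 70 points
```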
In some embodiments, the first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) is displayed with a third representation (e.g., 626a1c, 626a2c, 626b1c, 626b2c, 626c1c, 626c2c, 626d1c, and/or 626d2c) of a third range of values of the first parameter, the third representation having a third distance between a first point in the third representation of the third range of values and a second point in the third representation of the third range of values, the first point representing a third value and the second point representing a fourth value. In some implementations, while the first control and the third representation of the third range of values of the first parameter are displayed via the display generating component, the computer system detects an end (e.g., lift-off) of the input (e.g., 750a, 750d, 750g, 750k, 750n, and/or 750t) directed to the plurality of selectable user interface objects for the first media processing style. In some embodiments, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system displays the first selectable user interface object (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626d1, and/or 626d2) for editing the first parameter and a fifth representation (e.g., 626a1c, 626a2c, 626b1c, 626b2c, 626c1c, 626c2c, 626d1c, and/or 626d2c) of the range of values having a fourth distance (e.g., on the display generating component) between a first point in the fifth representation of the range of values and a second point in the fifth representation of the range of values, the first point representing the third value and the second point representing the fourth value, the fourth distance being less than the third distance. In some embodiments, the second control is displayed with a fifth representation of a fifth range of values of the second parameter, the fifth range of values having a fifth distance between a first point in the fifth representation of the range of values of the second parameter and a second point in the fifth representation of the fifth range of values, the first point representing a fifth value and the second point representing a sixth value. In some embodiments, while the second control and the fifth representation of the fifth range of values of the second parameter are displayed via the display generating component, the computer system detects an end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style. In some embodiments, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system displays the second selectable user interface object for editing the second parameter and a sixth representation of the range of values, the sixth representation having a sixth distance between a first point in the sixth representation of the range of values of the second parameter and a second point in the sixth representation of the range of values of the second parameter, the first point representing the fifth value and the second point representing the sixth value, the sixth distance being less than the fifth distance.
In some embodiments, the first control is displayed with a representation of a range of values that is a subset (e.g., 30 to 60) of a second range of values (e.g., -100 to 100) of the first parameter (e.g., having a minimum value and a maximum value within the second range of values). In some embodiments, while the first control and the representation of the subset of the second range of values of the first parameter are displayed via the display generating component, the computer system detects an end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style. In some implementations, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system displays a representation of the second range of values of the first parameter. In some implementations, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system ceases to display the representation of the subset of the second range of values of the first parameter. In some implementations, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system displays a representation of a second range of values of the second parameter (e.g., not previously displayed while the representation of the subset of the second range of values of the first parameter was displayed) concurrently with the representation of the second range of values of the first parameter. In some embodiments, the second control is displayed with a representation of a range of values that is a subset (e.g., 30 to 60) of the second range of values of the second parameter. In some embodiments, while the second control and the representation of the subset of the second range of values of the second parameter are displayed via the display generating component, the computer system detects an end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style and, in response to detecting the end of the input, displays the representation of the second range of values of the second parameter. In some implementations, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system ceases to display the representation of the subset of the second range of values of the second parameter. In some implementations, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system reduces a second size of the first control for adjusting the current value of the first parameter (e.g., the same as the first size in the paragraph above) (e.g., shrinks the first control so that one or more portions of the respective control for adjusting the current value of the respective parameter are displayed at a reduced size, smaller than the size at which the control was previously displayed) (e.g., on the user interface).
In some embodiments, while the second control for adjusting the current value of the second parameter is displayed via the display generating component, the computer system detects an end (e.g., lift-off) of the input directed to the plurality of selectable user interface objects for the first media processing style. In some implementations, in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the computer system reduces a size of the first control for adjusting the current value of the first parameter. Displaying the first selectable user interface object for editing the first parameter together with a representation of a range of values having a fourth distance, smaller than the third distance, between a first point representing the third value and a second point representing the fourth value provides visual feedback to the user that the first control for adjusting the current value of the first parameter can no longer be manipulated via input to change the current value of the first parameter, which provides improved visual feedback.
In some embodiments, in response to detecting an input (e.g., 750a, 750d, 750g, 750k, 750n, and/or 750t) directed to the plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for the first media processing style, and in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, the computer system moves a second control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for adjusting the current value of the second parameter (e.g., and/or one or more of the other plurality of selectable user interface objects for the first media processing style) from a first location on the user interface to a second location on the user interface (e.g., different from the first location). In some embodiments, in response to detecting input directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, the computer system moves a first control for adjusting the current value of the first parameter (e.g., and/or one or more of the other plurality of selectable user interface objects for the first media processing style) from a third location on the user interface to a fourth location on the user interface (e.g., different from the third location). Moving the second control for adjusting the current value of the second parameter from a first location to a second location on the user interface, in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style and in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, provides visual feedback to the user that the input is not directed to the second selectable user interface object for editing the second parameter of the first media processing style, thereby allowing the user to correct potential errors when needed, which provides improved visual feedback.
In some embodiments, in response to detecting an input (e.g., 750a, 750d, 750g, 750k, 750n, and/or 750t) directed to the plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for the first media processing style, and in accordance with a determination that the input is directed to the first selectable user interface object (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for editing the first parameter of the first media processing style, the computer system ceases to display a second control (e.g., 626a1, 626a2, 626b1, 626c2, 626d1, and/or 626d2) for adjusting the current value of the second parameter (e.g., and/or one or more of the other plurality of selectable user interface objects for the first media processing style). In some embodiments, in response to detecting input directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, the computer system ceases to display the first control for adjusting the current value of the first parameter (e.g., and/or one or more of the other plurality of selectable user interface objects for the first media processing style). Ceasing to display the second control for adjusting the current value of the second parameter, in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style and in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, provides visual feedback to the user that the input is not directed to the second selectable user interface object for editing the second parameter of the first media processing style, thereby allowing the user to correct potential errors when needed, which provides improved visual feedback.
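The two behaviors in the preceding paragraphs (moving the peer object aside, or ceasing to display it) can be sketched together; the index-based model and all names are assumptions carried over from the earlier sketches:

```swift
import CoreGraphics

// How the peer parameter's object is presented while another is edited.
enum PeerPresentation {
    case shownAt(offset: CGFloat)  // moved from its first location to a second
    case hidden                    // ceased to be displayed
}

func presentationForPeer(whileEditing editedIndex: Int,
                         peerIndex: Int,
                         hidePeers: Bool) -> PeerPresentation {
    guard peerIndex != editedIndex else { return .shownAt(offset: 0) }
    return hidePeers ? .hidden : .shownAt(offset: 44)  // nudge out of the way
}
```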
In some implementations, while the representation of the media is displayed using the first media processing style, and prior to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, a first identification (e.g., 636a-636d) corresponding to the first media processing style (e.g., 634a-634d) (e.g., one or more symbols and/or text (e.g., "standard" and/or "vivid")) is displayed. In some embodiments, in response to detecting the input (e.g., 750a and/or 750g) directed to the plurality of selectable user interface objects for the first media processing style (e.g., in response to detecting movement of the input directed to the plurality of selectable user interface objects for the first media processing style), and in accordance with a determination that the current value of the first parameter has changed to a value (e.g., a numerical value (e.g., -100 to 100) and/or a percentage) that is different from a default value (e.g., a predetermined value) (e.g., 0) of the first parameter of the first media processing style (and/or in accordance with a determination that the current value of the second parameter has changed to a value that is different from a default value of the second parameter of the first media processing style), the computer system displays a second identification (e.g., 636aa and/or 636dd) corresponding to a third media processing style (e.g., 634aa and/or 634dd) (e.g., one or more symbols and/or text (e.g., "custom", "custom standard", and/or "custom vivid")). In some embodiments, the second identification is different from the first identification (e.g., and the display of the first identification ceases). In some embodiments, the second identification comprises a portion (e.g., one or more words) of the first identification. In some embodiments, while the second identification is displayed, the computer system detects other inputs directed to the plurality of selectable user interface objects for the first media processing style. In some embodiments, in response to detecting the other inputs directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that the current value of the first parameter is the default value of the first parameter of the first media processing style and the current value of the second parameter is the default value of the second parameter of the first media processing style, the computer system displays (e.g., redisplays) the first identification and ceases to display the second identification. In some implementations, the first media processing style is different from the third media processing style. In some implementations, the first media processing style is a predefined media processing style (e.g., a style that is not created in response to detecting input directed to the computer system), and the third media processing style is not a predefined media processing style.
Displaying, in accordance with a determination that the current value of the first parameter has changed to a value different from the default value of the first parameter, a second identification corresponding to the third media processing style provides visual feedback to the user that the first media processing style has been edited such that at least one of its parameters is no longer at its default value and/or that a custom media processing style customized by the user has been created, which provides improved visual feedback.
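A short sketch of this identification logic, reusing the hypothetical MediaStyle model from the first sketch (the naming scheme is illustrative, not taken from the patent):

```swift
// While every parameter sits at its default value, the predefined style's
// own name is shown; once any parameter is changed, a "Custom ..."
// identification for the derived style is shown instead.
func identification(for style: MediaStyle) -> String {
    let edited = style.parameters.contains { $0.currentValue != $0.defaultValue }
    return edited ? "Custom \(style.name)" : style.name
}
// e.g., "Vivid" until a value is moved off its default, then "Custom Vivid";
// moving every value back to its default restores "Vivid".
```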
In some implementations, the user interface includes a selectable user interface object (e.g., 722) for resetting one or more parameters of the first media processing style. In some embodiments, while displaying the selectable user interface object (e.g., 722) for resetting the one or more parameters of the first media processing style, the computer system detects an input (e.g., 750w) (e.g., a tap input (e.g., a tap gesture, such as a single tap input or a double tap input)) (and/or, in some embodiments, a non-tap input/gesture (e.g., a movement input, a press-and-hold input/gesture, and/or a voice input)) directed to the selectable user interface object for resetting the one or more parameters of the first media processing style. In some implementations, in response to detecting the input (e.g., 750w) directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, the computer system displays the representation of the current value of the first parameter of the first media processing style (e.g., as represented by 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) as a second default value (e.g., a numerical value (e.g., -100 to 100) and/or a percentage) of the first parameter of the first media processing style (e.g., and/or sets the current value of the first parameter of the first media processing style to the default value of the first parameter of the first media processing style), and displays the representation of the current value of the second parameter of the first media processing style (e.g., as represented by 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) as a second default value (e.g., a numerical value (e.g., -100 to 100) and/or a percentage) of the second parameter of the first media processing style (e.g., and/or sets the current value of the second parameter of the first media processing style to the default value of the second parameter of the first media processing style). In some implementations, in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, the computer system sets the current value of the first parameter of the first media processing style to the second default value of the first parameter of the first media processing style and sets the current value of the second parameter of the first media processing style to the second default value of the second parameter of the first media processing style (e.g., without displaying a representation of the current value of the first parameter and/or the second parameter in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style). In some embodiments, the default value of the first parameter is different from the default value of the second parameter. In some embodiments, the selectable user interface object for resetting the one or more parameters of the first media processing style is displayed only in accordance with a determination that the current value of the first parameter of the first media processing style is not the default value of the first parameter and/or that the current value of the second parameter of the first media processing style is not the default value of the second parameter of the first media processing style.
In some embodiments, in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style (e.g., and while the first control for adjusting the current value of the first parameter and/or the second control for adjusting the current value of the second parameter is displayed via the display generating component), the computer system displays the selectable user interface object for resetting the one or more parameters of the first media processing style. In some embodiments, the first control for adjusting the current value of the first parameter (or the second control for adjusting the current value of the second parameter) is displayed concurrently with the selectable user interface object for resetting the one or more parameters of the first media processing style. In some implementations, the plurality of selectable user interface objects for the first media processing style is displayed concurrently with the selectable user interface object for resetting the one or more parameters of the first media processing style. In some implementations, in response to detecting the input (e.g., 750w) directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, the computer system displays an animation of the current value of the first parameter of the first media processing style changing (e.g., gradually changing over time) to the second default value of the first parameter of the first media processing style (e.g., as discussed above with respect to fig. 7W-7X). In some implementations, in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, the computer system displays an animation of the current value of the second parameter of the first media processing style changing to the second default value of the second parameter of the first media processing style. In some implementations, the animation of the current value of the first parameter changing is displayed concurrently with the animation of the current value of the second parameter changing. In some embodiments, in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, the computer system displays an animation of the first control changing into the first selectable user interface object for editing the first parameter. Displaying an animation of the current value of the first parameter of the first media processing style changing to the second default value of the first parameter of the first media processing style provides feedback to the user that the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style has caused the current value of the first parameter of the first media processing style to change, which provides improved visual feedback.
In response to detecting an input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, displaying (and/or setting) the representations of the current values of the first parameter and the second parameter of the first media processing style as their default values provides the user with the ability to reset the media processing style via one input instead of multiple inputs, which reduces the number of inputs needed to perform an operation.
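A sketch of this reset affordance, reusing the hypothetical MediaStyle model (the description above animates the change over time; here the values are simply set):

```swift
extension MediaStyle {
    // True while any parameter is away from its default value.
    var isEdited: Bool { parameters.contains { $0.currentValue != $0.defaultValue } }

    // One input returns every parameter of the style to its default value.
    mutating func resetToDefaults() {
        for index in parameters.indices {
            parameters[index].currentValue = parameters[index].defaultValue
        }
    }
}
// A reset object (e.g., 722) would only need to be displayed while `isEdited`
// is true, matching the conditional display described above.
```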
In some implementations, a prompt (e.g., 768) is displayed with an indication of how at least one of the one or more parameters of the first media processing style will be reset (e.g., "reset to warm", "reset to cool", "reset to neutral", "reset to rich", and/or "reset to soft") (e.g., an indication that includes a characteristic (e.g., a word indicating a characteristic) of the parameter, such as "warm" and/or "cool" being characteristics of a "color temperature" parameter and/or "soft" being a characteristic of a "hue" parameter). Displaying a prompt that includes an indication of how at least one of the one or more parameters of the first media processing style will be reset provides visual feedback to the user that, if one or more additional inputs are received from the user, one or more parameters of the first media processing style will be reset in a particular manner and/or reset to a particular style, which provides improved visual feedback and reduces the performance of unintended operations.
In some embodiments, prior to detecting an input (e.g., 750w and/or 750w1) directed to the selectable user interface object (e.g., 722) for resetting the one or more parameters of the first media processing style, the computer system displays a first style mode user interface object (e.g., 602b) that, when selected, causes a representation to which the currently selected media processing style is not applied to be displayed (e.g., causes the computer system to switch between displaying the representation with and without the selected media processing style applied) (e.g., a user interface object for displaying a style selection user interface and/or a user interface object that, when selected, causes a portion of a style user interface to be displayed). In some implementations, the first style mode user interface object is displayed with a first appearance (e.g., 602b) based on the current value of the first parameter of the first media processing style (e.g., using one or more of the techniques described above with respect to the style mode user interface object and the second style mode user interface object described above with respect to method 900 and/or fig. 6L and 7C). In some embodiments, in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, the computer system displays an animation of the first style mode user interface object transitioning from being displayed with the first appearance (e.g., a visual appearance having a first visual aspect and a second visual aspect, as described above with respect to method 900 and/or fig. 6L and 7C) to being displayed with a second appearance (e.g., a visual appearance having a first visual aspect and a second visual aspect, as described above with respect to method 900 and/or fig. 6L and 7C), the first appearance being based on the current value of the first parameter of the first media processing style and the second appearance being based on the second default value of the first parameter of the first media processing style. In some embodiments, the animation is a gradual transition that occurs over a period of time (e.g., 0.01 to 10 seconds). Displaying an animation of the first style mode user interface object transitioning from being displayed with a first appearance based on the current value of the first parameter of the first media processing style to being displayed with a second appearance based on the second default value of the first parameter of the first media processing style provides visual feedback to the user about how resetting the media processing style will change it and that the media processing style has changed, which provides improved visual feedback.
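A rough SwiftUI illustration of a value-derived appearance that animates when the value is reset; the mapping from value to color is an assumption, not taken from the patent:

```swift
import SwiftUI

// A style-mode button whose appearance is derived from a parameter's
// current value, so resetting the value animates the button from its
// "edited" appearance back toward its default appearance.
struct StyleModeButton: View {
    var warmth: Double   // hypothetical first parameter, -100...100

    private var tint: Color {
        warmth >= 0 ? Color.orange.opacity(0.5 + warmth / 200)
                    : Color.blue.opacity(0.5 - warmth / 200)
    }

    var body: some View {
        Image(systemName: "camera.filters")
            .foregroundStyle(tint)
            // Changing `warmth` (e.g., resetting it to 0) animates the tint.
            .animation(.easeInOut(duration: 0.5), value: warmth)
    }
}
```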
In some embodiments, while the first control (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for adjusting the current value of the first parameter is displayed, and in response to detecting movement of an input (e.g., 750a, 750d, 750g, 750k, 750n, and/or 750t) directed to the first control (e.g., a tap input (e.g., a tap gesture, such as a single tap input or a double tap input)) (and/or, in some embodiments, a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)) (e.g., and/or in response to detecting movement of the input directed to the plurality of selectable user interface objects for the first media processing style), the computer system changes the current value of the first parameter from a third value of the first parameter to a fourth value of the first parameter (e.g., replaces the display of a representation (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) of the third value of the first parameter with the display of a representation of the fourth value of the first parameter). In some embodiments, the input directed to the first control is the same as the input directed to the plurality of selectable user interface objects for the first media processing style. In some embodiments, the third value is different from the fourth value. In some embodiments, the computer system detects the movement of the input directed to the first control while the first control for adjusting the current value of the first parameter is displayed. In some embodiments, while the second control for adjusting the current value of the second parameter is displayed, and in response to detecting movement of an input directed to the second control (e.g., and/or in response to detecting movement of the input directed to the plurality of selectable user interface objects for the first media processing style), the computer system changes the current value of the second parameter from a third value of the second parameter to a fourth value of the second parameter (e.g., without changing the current value of the first parameter) (e.g., replaces the display of a representation of the third value of the second parameter with the display of a representation of the fourth value of the second parameter). In some embodiments, the input directed to the second control is the same as the input directed to the plurality of selectable user interface objects for the first media processing style. In some embodiments, the computer system detects the movement of the input directed to the second control while the second control for adjusting the current value of the second parameter is displayed. In some embodiments, the third value is different from the fourth value. Changing the current value of the first parameter from the third value of the first parameter to the fourth value of the first parameter in response to detecting the movement of the input directed to the first control allows the user to set the current value of the first parameter based on the movement of the input, which provides additional control options without cluttering the user interface.
In some implementations, while the first control for adjusting the current value of the first parameter is displayed, and in response to detecting movement of the input (e.g., 750a, 750d, 750g, 750k, 750n, and/or 750t) directed to the first control, the computer system displays (e.g., before and/or after detecting the end of the input directed to the first control) a second representation (e.g., 630) of the media using the modified first media processing style (e.g., 634a-634d). In some implementations, the second representation of the media using the modified first media processing style (e.g., 634aa and/or 634dd) is different from the representation of the media using the first media processing style. In some implementations, the second representation of the media using the first media processing style is displayed based on the changed value (e.g., the fourth value) of the first parameter, and the representation of the media using the first media processing style is displayed based on the value (e.g., the third value) in effect before the input directed to the first control was detected. In some embodiments, while the second control for adjusting the current value of the second parameter is displayed, and in response to detecting the movement of the input directed to the second control, the computer system displays (e.g., before and/or after detecting the end of the input directed to the second control) a third representation of the media using the first media processing style, where the third representation of the media using the first media processing style is different from the representation of the media using the first media processing style and from the second representation of the media using the first media processing style. Displaying, in response to detecting the movement of the input directed to the first control, a second representation of the media using the first media processing style that is different from the representation of the media using the first media processing style provides feedback to the user about how the input affects how the first media processing style is applied to the representation of the media, which provides improved visual feedback.
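A sketch of mapping a drag's horizontal translation onto the parameter value; each change would re-render the representation of the media so the user sees the modified style applied live. `pointsPerUnit` and the call sites in the trailing comments are assumptions:

```swift
import CoreGraphics

// Converts a horizontal drag translation into a clamped parameter value.
func valueAfterDrag(startValue: Double,
                    translationX: CGFloat,
                    pointsPerUnit: CGFloat,
                    range: ClosedRange<Double> = -100...100) -> Double {
    let delta = Double(translationX / pointsPerUnit)
    return min(max(startValue + delta, range.lowerBound), range.upperBound)
}

// During the drag (hypothetical call sites):
//   style.parameters[i].currentValue = Int(valueAfterDrag(startValue: start,
//                                                         translationX: dx,
//                                                         pointsPerUnit: 2))
//   previewImage = render(visualContent, applying: style)  // stand-in renderer
```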
In some embodiments, when the representation of the media (e.g., 630) is displayed using the first media processing style, the computer system detects a first request (e.g., 650a, 650c, and/or 650j) to capture media. In some embodiments, in response to detecting the first request to capture media, the computer system captures first media. In some embodiments, when a second representation of the media (e.g., 630) is displayed using the modified first media processing style, the computer system detects a second request to capture media. In some embodiments, in response to detecting the second request to capture media, the computer system captures second media. In some embodiments, after capturing the first media and the second media, the computer system: displays a representation (e.g., 680c) of the first media with the first media processing style (e.g., as discussed above with respect to fig. 7A-7X); and displays a representation (e.g., 680d) of the second media with the modified first media processing style (e.g., as discussed above with respect to fig. 7A-7X). In some embodiments, the computer system transitions from displaying the representation of the first media to displaying the representation of the second media (or vice versa) in response to detecting an input (e.g., a movement input) (and/or, in some embodiments, a non-movement input (e.g., a tap input, a rotational drag gesture, and/or a press-and-hold gesture)) directed to the representation of the first media and/or to the representation of the second media. In some embodiments, the representation of the first media and the representation of the second media are displayed sequentially in a media viewer interface (e.g., fig. 6A-6U). In some embodiments, the representation of the first media and the representation of the second media are displayed concurrently in a media viewer interface and/or a media grid (e.g., among a plurality of other representations of media).
In some embodiments, the user interface includes a second selectable user interface object (e.g., 610) for capturing media. In some embodiments, when the representation of the media (e.g., 630) is displayed using the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd) and the second selectable user interface object (e.g., 610) for capturing media is displayed, the computer system detects an input (e.g., a tap gesture (e.g., a single tap input and/or a double tap input)) (and/or, in some embodiments, a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)) directed to the second selectable user interface object for capturing media. In some embodiments, in response to detecting the input (e.g., 650a, 650c, and/or 650j) directed to the second selectable user interface object for capturing media, the computer system captures third media to which the first media processing style is applied (e.g., based on the current values of the parameters of the first media processing style). In some embodiments, in response to detecting the input directed to the selectable user interface object for capturing media while an input directed to the plurality of selectable user interface objects for the first media processing style and/or an input directed to the representation (e.g., as a request to switch media processing styles, as described above with respect to method 900 and fig. 6A-6P) is detected (e.g., immediately before/after), the computer system initiates capture of media to which the media processing style is applied that covers more than a predetermined portion (e.g., 25%, 30%, 40%, 50%, 60%, or 75%) of the representation of the media and/or a larger portion of the representation of the media than any other media processing style (e.g., the portions described above with respect to method 900 and fig. 6A-6P). Capturing media to which the first media processing style is applied in response to detecting the input directed to the second selectable user interface object for capturing media allows the user to capture media to which the currently selected media processing style will be applied, which provides additional control options without cluttering the user interface.
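One way to read the predetermined-portion rule above is as a simple majority test at capture time. The following Swift sketch is a hypothetical illustration only; the 50% threshold and every name in it are assumptions, not disclosed values:

```swift
import Foundation

/// Hypothetical viewfinder state while the user is mid-swipe between styles.
struct ViewfinderState {
    var incomingStyle: String
    var outgoingStyle: String
    var incomingCoverage: Double   // fraction of the representation, 0.0...1.0
}

/// A capture that lands mid-swipe bakes in the style applied to more than a
/// predetermined portion of the representation (assumed here to be 50%).
func styleForCapture(_ state: ViewfinderState, threshold: Double = 0.5) -> String {
    state.incomingCoverage > threshold ? state.incomingStyle : state.outgoingStyle
}

let midSwipe = ViewfinderState(incomingStyle: "Vibrant",
                               outgoingStyle: "Standard",
                               incomingCoverage: 0.62)
print(styleForCapture(midSwipe))   // "Vibrant"
```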
In some embodiments, as part of displaying the representation using the first media processing style, the computer system applies the first media processing style differently to one or more objects (e.g., the people shown in 630) (e.g., people and/or faces of people) (e.g., identifiable objects) in the representation than to a portion of the representation that does not include the one or more objects (e.g., using different visual parameters (e.g., color characteristics (e.g., color temperature, tone, hue, brightness, saturation, chroma, colorfulness, and/or harmony) and/or depth parameters) for one type of identified object than for a different type of identified object (e.g., a subject (e.g., a person) as compared to a non-subject)) (e.g., a first portion of the representation that includes an object is displayed with a different visual appearance than a second portion of the representation that does not include the object).
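A hedged sketch of the subject-aware application described above: the same style yields different visual-parameter values for an identified subject than for the remainder of the frame. The attenuation factor and all names below are invented for illustration:

```swift
import Foundation

enum Region { case subject, background }

/// Hypothetical single-parameter style.
struct Style { var warmth: Double }

/// The same style contributes a different visual-parameter value to an
/// identified subject (e.g., a person) than to portions of the representation
/// that do not include the subject. The 0.5 attenuation is an invented
/// placeholder for whatever per-object treatment an implementation chooses.
func effectiveWarmth(of style: Style, for region: Region) -> Double {
    switch region {
    case .subject:    return style.warmth * 0.5
    case .background: return style.warmth
    }
}

let vivid = Style(warmth: 60)
print(effectiveWarmth(of: vivid, for: .subject))     // 30.0
print(effectiveWarmth(of: vivid, for: .background))  // 60.0
```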
In some embodiments, when the plurality of selectable user interface objects for the first media processing style are displayed, the computer system detects, via the one or more input devices, a first input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) (e.g., a movement input) (and/or, in some embodiments, a non-movement input (e.g., a tap input, a rotational drag gesture, and/or a press-and-hold gesture)) directed to the representation of the media (e.g., 630). In some embodiments, in response to detecting the first input directed to the representation of the media, the computer system displays a representation of the current value of the first parameter of a fourth media processing style (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) and ceases to display the representation of the current value of the first parameter of the first media processing style (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b). In some embodiments, in response to detecting the first input directed to the representation of the media, the computer system displays a portion of the representation of the media using the fourth media processing style (e.g., a portion of the representation of the media that was displayed using the first media processing style before the input directed to the representation of the media was detected). In some embodiments, in response to detecting the first input directed to the representation of the media, the computer system displays a representation of the current value of the second parameter of the fourth media processing style and ceases to display the representation of the current value of the second parameter of the first media processing style. In some embodiments, as part of displaying the representation of the current value of the first parameter of the fourth media processing style, the computer system displays an animation (e.g., a sliding animation, a dissolving animation, and/or a fade-in/fade-out animation) that changes the representation of the current value of the first parameter of the first media processing style (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) into the representation of the current value of the first parameter of the fourth media processing style (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b). In some embodiments, as part of displaying the representation of the current value of the second parameter of the fourth media processing style, the computer system displays an animation (e.g., an animation that fades over time) that changes the representation of the current value of the second parameter of the first media processing style into the representation of the current value of the second parameter of the fourth media processing style. Displaying an animation that changes the representation of the current value of the first parameter of the first media processing style into the representation of the current value of the first parameter of the fourth media processing style provides visual feedback to the user that the user interface object for the first media processing style is changing into the user interface object for the fourth media processing style, which may reduce potential errors and provides improved visual feedback.
Displaying, in response to detecting the first input directed to the representation of the media, a representation of the current value of the first parameter of the fourth media processing style, and ceasing to display the representation of the current value of the first parameter of the first media processing style, allows the computer system to provide user interface objects that are relevant to the media processing style being applied to the representation of the media without providing user interface objects that are irrelevant to it, which performs an operation when a set of conditions has been met without requiring additional user input and provides improved visual feedback.
In some embodiments, when the plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for the first media processing style are displayed, the computer system detects, via the one or more input devices, a second input (e.g., 650d, 650k1, 650k2, 750j, and/or 750q) directed to the representation of the media. In some embodiments, in response to detecting the second input directed to the representation of the media, the computer system displays a portion of the representation of the media using a fifth media processing style (e.g., 634a-634d, 634aa, and/or 634dd) (e.g., a portion of the representation of the media that was displayed using the first media processing style before the input directed to the representation of the media was detected). In some embodiments, when the portion of the representation of the media is displayed using the fifth media processing style, and in accordance with a determination that the portion of the representation (e.g., 630) of the media displayed using the fifth media processing style is greater than a threshold amount (e.g., 25%, 30%, 40%, 50%, 51%, 60%, or 75%) of the representation (and/or is at a particular portion (e.g., the middle) of the representation and/or the display generating component), the computer system displays a representation of the current value of the first parameter of the fifth media processing style (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) and ceases to display the representation of the current value of the first parameter of the first media processing style (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b). In some embodiments, when the portion of the representation of the media is displayed using the fifth media processing style, and in accordance with a determination that the portion of the representation of the media displayed using the fifth media processing style is not greater than the threshold amount (e.g., 25%, 30%, 40%, 50%, 51%, 60%, or 75%) of the representation (and/or is not at a particular portion (e.g., the middle) of the representation and/or the display generating component), the computer system continues to display the representation of the current value of the first parameter of the first media processing style and forgoes displaying the representation of the current value of the first parameter of the fifth media processing style. Displaying, when the prescribed condition is met, a representation of the current value of the first parameter of the fifth media processing style and ceasing to display the representation of the current value of the first parameter of the first media processing style allows the computer system to provide user interface objects that are relevant to the media processing style being applied to the representation of the media without providing user interface objects that are irrelevant to it, which performs an operation when a set of conditions has been met without requiring additional user input and provides improved visual feedback.
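The threshold test above can be pictured as deciding which style "owns" the parameter readouts mid-swipe. A hypothetical Swift sketch, with the threshold and names assumed:

```swift
import Foundation

/// Mid-swipe, the parameter readouts switch to the incoming style only once it
/// occupies more than a threshold amount of the representation; otherwise the
/// readouts for the current style stay put.
func styleOwningParameterReadouts(incoming: String, current: String,
                                  incomingCoverage: Double,
                                  threshold: Double = 0.5) -> String {
    incomingCoverage > threshold ? incoming : current
}

print(styleOwningParameterReadouts(incoming: "Rich Contrast", current: "Standard",
                                   incomingCoverage: 0.3))   // "Standard"
print(styleOwningParameterReadouts(incoming: "Rich Contrast", current: "Standard",
                                   incomingCoverage: 0.7))   // "Rich Contrast"
```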
In some embodiments, in response to detecting an input (e.g., 750a, 750d, 750g, 750k, and/or 750t) directed to the plurality of selectable user interface objects (e.g., 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) for the first media processing style (e.g., and in accordance with a determination that at least one current value of one or more parameters of the first media processing style (e.g., represented by 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) is different (e.g., and/or substantially different) from one or more default values of the one or more parameters of the first media processing style (e.g., 634a and/or 634d)), the computer system adds, to a set of available media processing styles, a first customized media processing style (e.g., 634aa and/or 634dd) that is different from the first media processing style (e.g., a media processing style that is based on the first media processing style but has a set of parameter values different from those of the first media processing style). In some embodiments, the user interface includes one or more indications corresponding to one or more media processing styles. In some embodiments, the one or more indications include a first indication corresponding to the first media processing style. In some embodiments, in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that at least one current value of the one or more parameters of the first media processing style is different from the one or more default values of the one or more parameters of the first media processing style, the computer system displays a plurality of selectable user interface objects corresponding to the first customized media processing style (e.g., a customized media processing style corresponding to the first media processing style) and adds a second indication corresponding to the first customized media processing style to the one or more indications (e.g., displays the second indication as part of (e.g., between and/or in line with) the one or more indications). In some embodiments, adding the first customized media processing style includes configuring the first customized media processing style to be available for future use (e.g., in other user interfaces, after exiting/closing the application, and/or after a period of time after which it would not have been available had it not been so configured). Adding the first customized media processing style to the set of available media processing styles when the prescribed condition is met allows the user to reuse the customized style without providing input to recreate it and prevents the user from editing non-customized media processing styles, which reduces the number of inputs required to perform an operation.
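A minimal sketch of the add-a-custom-style behavior above, assuming a style is just a named dictionary of parameter values; none of these types or names come from the disclosure:

```swift
import Foundation

/// Hypothetical style record: a named set of parameter values.
struct MediaStyle: Equatable {
    var name: String
    var parameters: [String: Double]
}

/// Once any current value departs from the base style's defaults, a derived
/// custom style is appended to the set of available styles; the base style
/// itself is never edited in place.
func addCustomStyleIfNeeded(base: MediaStyle,
                            currentValues: [String: Double],
                            to availableStyles: inout [MediaStyle]) {
    guard currentValues != base.parameters else { return }   // still at defaults
    let custom = MediaStyle(name: base.name, parameters: currentValues)
    if !availableStyles.contains(custom) {
        availableStyles.append(custom)   // configured as available for future use
    }
}

var styles = [MediaStyle(name: "Standard", parameters: ["tone": 0, "warmth": 0])]
addCustomStyleIfNeeded(base: styles[0],
                       currentValues: ["tone": 30, "warmth": -10],
                       to: &styles)
print(styles.count)   // 2
```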
In some embodiments, when the set of available media processing styles (e.g., 634a-634d, 634aa, and/or 634dd) includes the first customized media processing style, the computer system detects a first request (e.g., 750n and/or 750w) to change one or more parameters of the first customized media processing style. In some embodiments, in response to detecting the first request to change the one or more parameters of the first customized media processing style (e.g., and in accordance with a determination that the first customized media processing style will be the same (or substantially the same) as one or more other available media processing styles in the set of available media processing styles after the first request is fulfilled), the computer system removes the first customized media processing style (e.g., 634aa and/or 634dd) from the set of available media processing styles (e.g., ceases to display the second indication and/or ceases to display the second indication as part of (e.g., between and/or in line with) the one or more indications). In some embodiments, when the one or more indications include the second indication corresponding to the first customized media processing style that is different from the first media processing style, and when the plurality of selectable user interface objects for the first customized media processing style are displayed, the computer system detects, via the one or more input devices, a first input directed to the plurality of selectable user interface objects for the first customized media processing style. In some embodiments, in response to detecting the first input, and in accordance with a determination that the first customized media processing style is the same (or substantially the same) as one or more other available media processing styles in the set of available media processing styles, the computer system removes the second indication corresponding to the first customized media processing style. In some embodiments, removing the first customized media processing style includes configuring the first customized media processing style to be unavailable for future use (e.g., in other user interfaces, after exiting/closing the application, and/or after a period of time after which it would have remained available had it not been so configured). Removing the first customized media processing style from the set of available media processing styles when the prescribed condition is met allows the computer system to automatically remove styles that may be duplicative and/or undesirable, which reduces the number of inputs required to perform an operation.
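Continuing the same hypothetical model, the removal rule above can be sketched as a de-duplication check applied when a change request is fulfilled:

```swift
import Foundation

struct MediaStyle: Equatable {   // same hypothetical record as in the previous sketch
    var name: String
    var parameters: [String: Double]
}

/// If fulfilling a change request would leave the custom style identical to
/// another style already in the set, the custom style is removed instead of
/// being kept as a duplicate; otherwise the change is applied in place.
func applyChange(_ newParameters: [String: Double],
                 toCustomAt index: Int,
                 in availableStyles: inout [MediaStyle]) {
    let duplicatesAnother = availableStyles.enumerated().contains {
        $0.offset != index && $0.element.parameters == newParameters
    }
    if duplicatesAnother {
        availableStyles.remove(at: index)   // configured as unavailable for future use
    } else {
        availableStyles[index].parameters = newParameters
    }
}
```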
In some embodiments, after adding the first customized media processing style (e.g., 634aa and/or 634dd) to the set of available media processing styles (e.g., 634a-634d, 634aa, and/or 634dd), the computer system displays a respective user interface that includes a respective representation (e.g., 630) of the media displayed using a respective media processing style (e.g., 634a and/or 634d).
In some embodiments, when displaying the respective user interface that includes the respective representation (e.g., 630) of the media displayed using the respective media processing style (e.g., 634a and/or 634d), and when the set of available media processing styles includes the first customized media processing style (e.g., 634aa and/or 634dd), the computer system detects a request to display the respective representation of the media using the next (or previous) available media processing style from the set of available media processing styles. In some embodiments, as part of detecting the request to display the respective representation of the media using the next available media processing style, the computer system detects an input on the respective user interface (e.g., as described above with respect to method steps XX-LINK TO CS 1). In some embodiments, in response to detecting the request (e.g., 750o) to display the respective representation of the media using the next (or previous) available media processing style while displaying the respective representation of the media using the respective media processing style, the computer system: in accordance with a determination that the respective media processing style is the first media processing style, displays at least a portion of the respective representation of the media using the first customized media processing style (e.g., 634aa and/or 634dd); and in accordance with a determination that the respective media processing style is not the first media processing style, forgoes using the first customized media processing style (e.g., 634aa and/or 634dd) to display at least a portion of the respective representation of the media (e.g., as discussed above with respect to fig. 7O-7P). In some embodiments, after removing the first customized media processing style, the computer system displays a respective user interface that includes a respective representation of the media displayed using the first media processing style. In some embodiments, when displaying a user interface that includes a respective representation of the media displayed using the first media processing style, and when the set of available media processing styles does not include the first customized media processing style, the computer system detects a request to display a representation of the media using the next available media processing style. In some embodiments, in response to detecting the request to display a representation of the media using the next available media processing style, the computer system displays at least a portion of the representation of the media using the first customized media processing style. In some embodiments, the one or more indications include a third indication corresponding to a sixth media processing style. In some embodiments, the one or more indications are displayed such that the second indication is adjacent to (e.g., immediately adjacent to, closer to, to the right of, to the left of, above, and/or below) the first indication and is not adjacent to the third indication. Displaying at least a portion of the representation of the media using the first customized media processing style (e.g., when a specified condition is met) provides feedback to the user that the first customized media processing style is a customized style of the first media processing style and not of other media processing styles, which provides improved visual feedback.
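The adjacency behavior above suggests a cycle in which a custom style is reachable only from its base style. A speculative Swift sketch; the ordering and the skip rule are inferences from the passage, not disclosed logic:

```swift
import Foundation

/// Cycles to the "next" style in the ordered set; a custom style is visited
/// only when coming from its base style and is skipped elsewhere, so it sits
/// adjacent to its base in the cycle.
func nextStyle(after current: String,
               order: [String],            // e.g., ["Standard", "Standard Custom", "Vibrant"]
               customs: [String: String]   // custom name -> base style name
) -> String {
    guard let i = order.firstIndex(of: current) else { return current }
    var j = (i + 1) % order.count
    // Forgo a custom style unless we are coming from its base style.
    while let base = customs[order[j]], base != current {
        j = (j + 1) % order.count
    }
    return order[j]
}

let order = ["Standard", "Standard Custom", "Vibrant"]
let customs = ["Standard Custom": "Standard"]
print(nextStyle(after: "Standard", order: order, customs: customs))  // "Standard Custom"
print(nextStyle(after: "Vibrant", order: order, customs: customs))   // "Standard"
```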
In some embodiments, when the set of available media processing styles includes the first customized media processing style (e.g., 634aa and/or 634dd), the computer system detects a second request (e.g., 750a, 750d, 750g, 750k, and/or 750t) to change one or more parameters of the first customized media processing style. In some embodiments, in response to detecting the second request to change the one or more parameters of the first customized media processing style (e.g., and in accordance with a determination that the first customized media processing style will not be the same (or substantially the same) as one or more other available media processing styles after the second request is fulfilled), the computer system updates the one or more parameters of the first customized media processing style (e.g., represented by 626a1, 626a2, 626b1, 626b2, 626c1, 626c2, 626d1, and/or 626d2) (e.g., as discussed above with respect to input 750a and/or input 750t) (and continues to include the first customized media processing style in the set of available media processing styles) (e.g., without adding an additional customized media processing style to the set of available media processing styles). In some embodiments, when the one or more indications include the second indication corresponding to the first customized media processing style that is different from the first media processing style, and when the plurality of selectable user interface objects for the first customized media processing style are displayed, the computer system detects, via the one or more input devices, a second input directed to the plurality of selectable user interface objects for the first customized media processing style. In some embodiments, in response to detecting the second input directed to the plurality of selectable user interface objects for the first customized media processing style, and in accordance with a determination that a current value of a first parameter of the first customized media processing style has changed (e.g., and that at least one current value of the one or more parameters of the first customized media processing style is different from one or more default values of the one or more parameters of the first media processing style) (e.g., has been changed by the second input and/or one or more inputs directed to the first control), the computer system does not add a fourth indication to the one or more indications (e.g., continues to display the same number of indications as were displayed before the second input directed to the plurality of selectable user interface objects for the first customized media processing style was detected) and updates the current value of the first parameter of the first customized media processing style based on the second input directed to the plurality of selectable user interface objects for the first customized media processing style. In some embodiments, in response to detecting the second input directed to the plurality of selectable user interface objects for the first customized media processing style, and in accordance with a determination that the current value of the first parameter of the first customized media processing style has changed, the computer system continues to include the second indication as part of the one or more indications corresponding to the first customized media processing style (e.g., continues to display the second indication).
Updating the one or more parameters of the first customized media processing style in response to detecting the second request to change them reduces the number of inputs required to navigate through the set of available media processing styles and the number of inputs required to reconfigure the first customized media processing style after the one or more parameters have been updated, which reduces the number of inputs required to perform one or more operations.
In some embodiments, when the set of available media processing styles includes the first customized media processing style, the computer system detects a third request (e.g., 750a and/or 750t) to change one or more parameters of the first customized media processing style (e.g., as discussed above with respect to input 750a and/or input 750t). In some embodiments, in response to detecting the third request to change the one or more parameters of the first customized media processing style (e.g., and in accordance with a determination that the first customized media processing style will not be identical (or substantially identical) to one or more other available media processing styles after the third request is fulfilled), the computer system adds a second customized media processing style (e.g., for/corresponding to the first media processing style) to the set of available media processing styles without updating the one or more parameters of the first customized media processing style (e.g., as discussed above with respect to input 750a and/or input 750t) (and/or the one or more parameters of the first media processing style). In some embodiments, when the one or more indications include the second indication corresponding to the first customized media processing style that is different from the first media processing style, and when the plurality of selectable user interface objects for the first customized media processing style are displayed, the computer system detects, via the one or more input devices, a second input (e.g., a tap gesture (e.g., a single tap input and/or a double tap input)) (and/or, in some embodiments, a non-tap input/gesture (e.g., a movement input/gesture, a press-and-hold input/gesture, and/or a voice input)) directed to the plurality of selectable user interface objects for the first customized media processing style. In some embodiments, in response to detecting the second input directed to the plurality of selectable user interface objects for the first customized media processing style, and in accordance with a determination that a current value of a first parameter of the first customized media processing style has changed (e.g., and that at least one current value of the one or more parameters of the first customized media processing style is different from one or more default values of the one or more parameters of the first media processing style) (e.g., has been changed by the second input and/or one or more inputs directed to the first control), the computer system: adds a fifth indication, corresponding to a second customized media processing style that is different from the first customized media processing style, to the one or more indications (e.g., while continuing to display the second indication); and updates the current value of the first parameter of the second customized media processing style based on the second input directed to the plurality of selectable user interface objects for the first customized media processing style (e.g., while forgoing updating the current value of the first parameter of the first customized media processing style based on that input).
In some embodiments, in response to detecting the second input directed to the plurality of selectable user interface objects for the first customized media processing style, and in accordance with a determination that the current value of the first parameter of the first customized media processing style has not changed, the computer system continues to include the second indication as part of the one or more indications corresponding to the first customized media processing style (e.g., continues to display the second indication). In some embodiments, the first customized media processing style and the second customized media processing style are both between the first media processing style and the second media processing style in the set of available media processing styles. Adding the second customized media processing style to the set of available media processing styles, without updating the one or more parameters of the first customized media processing style, in response to detecting the third request reduces the number of inputs required to reconfigure the first customized media processing style, reduces the number of inputs required to perform one or more operations, and provides the user with additional options that reduce the need to repeatedly reconfigure media processing styles.
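The fork-instead-of-update embodiment above can be sketched as inserting the second customized style immediately after the first, leaving the first untouched; this again uses the hypothetical `MediaStyle` record:

```swift
import Foundation

struct MediaStyle: Equatable {   // same hypothetical record as in the earlier sketches
    var name: String
    var parameters: [String: Double]
}

/// Rather than mutating the existing custom style, a further edit forks a
/// second custom style; both then sit between the base style and the next
/// style in the available set, as the passage describes.
func forkCustomStyle(from custom: MediaStyle,
                     withParameters newParameters: [String: Double],
                     in availableStyles: inout [MediaStyle]) {
    guard let index = availableStyles.firstIndex(of: custom) else { return }
    let second = MediaStyle(name: custom.name, parameters: newParameters)
    availableStyles.insert(second, at: index + 1)   // first custom is left untouched
}

var styles = [
    MediaStyle(name: "Standard", parameters: ["tone": 0]),
    MediaStyle(name: "Standard Custom", parameters: ["tone": 30]),
    MediaStyle(name: "Vibrant", parameters: ["tone": 80]),
]
forkCustomStyle(from: styles[1], withParameters: ["tone": 55], in: &styles)
print(styles.map { $0.parameters["tone"] ?? 0 })   // [0.0, 30.0, 55.0, 80.0]
```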
In some embodiments, the first customized media processing style (e.g., 634aa and/or 634dd) and the second customized media processing style (e.g., 634aa and/or 634dd) have the same respective text identification (e.g., 636aa and/or 636dd) (e.g., a description or name, such as vivid warm, vivid cool, warm, cool, neutral, soft warm, soft cool, vivid, and/or soft). In some embodiments, when a user interface that includes a representation of the media (e.g., 630) is displayed, and in accordance with a determination that the first customized media processing style (e.g., 634aa and/or 634dd) is being applied to the representation of the media, the computer system concurrently displays the same respective text identification and an indication (e.g., 626a and/or 626d) of a parameter of the first customized media processing style. In some embodiments, when a user interface that includes a representation of the media (e.g., 630) is displayed, and in accordance with a determination that the second customized media processing style (e.g., 634aa and/or 634dd) is being applied to the representation of the media, the computer system concurrently displays the same respective text identification (e.g., 636aa and/or 636dd) and an indication (e.g., 626a and/or 626b) of a parameter of the second customized media processing style. In some embodiments, the indication of the parameter of the first customized media processing style is different (e.g., shows a different value) from the indication of the parameter of the second customized media processing style (e.g., as discussed above with respect to fig. 7C, 7D, and 7M). Concurrently displaying the same respective text identification and an indication of a parameter of a particular media processing style (e.g., a media processing style that has the same identification as another media processing style) provides visual feedback that enables the user to identify which media processing style is being applied (e.g., and/or how media processing styles that have the same identification differ in how they are applied), which provides improved visual feedback.
In some embodiments, in response to detecting an input (e.g., 750a, 750d, 750g, 750k, and/or 750t) directed to the plurality of selectable user interface objects for the first media processing style (e.g., and in accordance with a determination that at least one current value of the one or more parameters of the first media processing style is different (e.g., and/or substantially different) from one or more default values of the one or more parameters of the first media processing style): in accordance with a determination that the difference between the at least one current value of the one or more parameters of the first media processing style and the one or more default values of the one or more parameters of the first media processing style is a first difference (e.g., a positive and/or negative delta (e.g., a value between -100 and 100)) (e.g., as described above with respect to fig. 7C and 7F), the computer system displays a first text identification (e.g., a description or name, such as vivid warm, vivid cool, warm, cool, neutral, soft warm, soft cool, vivid, and/or soft) of the first customized media processing style (e.g., 634aa and/or 634dd); and in accordance with a determination that the difference between the at least one current value of the one or more parameters of the first media processing style and the one or more default values of the one or more parameters of the first media processing style is a second difference (e.g., a positive and/or negative delta (e.g., a value between -100 and 100)) that is different (e.g., a different value) from the first difference (e.g., as described above with respect to fig. 7C and 7F), the computer system displays a second text identification (e.g., a description or name, such as vivid warm, vivid cool, warm, cool, neutral, soft warm, soft cool, vivid, and/or soft) of the first customized media processing style (e.g., 634aa and/or 634dd). In some embodiments, the second text identification (e.g., 636aa and/or 636dd) is different from the first text identification (e.g., 636aa and/or 636dd) (e.g., as described above with respect to fig. 7C and 7F). Displaying an identification based on the difference between the at least one current value of the one or more parameters of the first media processing style and the one or more default values of those parameters provides visual feedback informing the user of how the changed version of the media processing style differs from the default version, which provides improved visual feedback.
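The delta-dependent naming above might look like the following Swift sketch; the specific parameter names ("tone", "warmth") and the sign-to-word mapping are assumptions chosen to echo the example names (vivid warm, soft cool, and so on), since the disclosure only requires that different deltas can yield different names:

```swift
import Foundation

/// Derives a displayed name from the signed deltas between current values and
/// the base style's defaults.
func textIdentification(base: String, toneDelta: Double, warmthDelta: Double) -> String {
    var words: [String] = []
    if toneDelta > 0 { words.append("Vibrant") }
    if toneDelta < 0 { words.append("Soft") }
    if warmthDelta > 0 { words.append("Warm") }
    if warmthDelta < 0 { words.append("Cool") }
    return words.isEmpty ? base : words.joined(separator: " ")
}

print(textIdentification(base: "Standard", toneDelta: 40, warmthDelta: -25))  // "Vibrant Cool"
print(textIdentification(base: "Standard", toneDelta: 0, warmthDelta: 0))     // "Standard" (reset case)
```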
In some embodiments, the first media processing style (e.g., 634a-634d, 634aa, and/or 634dd) has a third text identification (e.g., as described above with respect to fig. 7W, 7W1, and/or 7X) that is different from the first text identification and the second text identification. In some embodiments, when the set of available media processing styles includes the first customized media processing style, the computer system detects a third request (e.g., 750n, 750w, and/or 750w1) to change one or more parameters of the first customized media processing style. In some embodiments, in response to detecting the third request to change the one or more parameters of the first customized media processing style, and in accordance with a determination that the first customized media processing style will be the same (or substantially the same) as one or more other available media processing styles (e.g., the first media processing style) in the set of available media processing styles after the third request is fulfilled (e.g., the one or more parameters of the first customized media processing style are the same as one or more parameters of one or more of the other available media processing styles) (e.g., and/or in response to detecting a request to reset the first customized media processing style (e.g., in response to detecting an input directed to a selectable user interface object for resetting the one or more parameters of the first media processing style)), the computer system displays the third text identification (e.g., 636a-636d, 636aa, and/or 636dd) (e.g., without displaying the first text identification and the second text identification) (e.g., as described above with respect to fig. 7W, 7W1, and/or 7X). In some embodiments, when a customized media processing style is reset, the computer system changes the style's name back to its original name. Displaying the third identification (e.g., the identification of the media processing style from which the customized media processing style was changed) provides visual feedback informing the user of the media processing style from which the customized media processing style was created, which provides improved visual feedback.
In some embodiments, the plurality of selectable user interface objects (e.g., 626a1b, 626a2b, 626b1b, 626b2b, 626c1b, 626c2b, 626d1b, and/or 626d2b) for the first media processing style are displayed in response to detecting a request (e.g., 650b) to edit the first media processing style (e.g., and while a representation of media displayed using the first media processing style is displayed). In some embodiments, before displaying the plurality of selectable user interface objects, the computer system detects, via the one or more input devices, a request to edit the first media processing style (e.g., a request to edit how the first media processing style is being applied to the visual content of the media). In some embodiments, as part of detecting the request to edit the first media processing style, the computer system detects a movement input (e.g., and/or, in some embodiments, a non-movement input, such as a press-and-hold input and/or a pinch input) on the representation of the media (e.g., as described above with respect to method 900 and fig. 6A-6C). In some embodiments, as part of detecting the request to edit the first media processing style, the computer system detects a tap input (and/or a non-tap input, such as a press-and-hold input and/or a pinch input) on a user interface object for displaying a representation of the first media processing style (e.g., as described above with respect to method 900 and fig. 6A-6C). In some embodiments, the computer system concurrently displays the plurality of selectable user interface objects in response to detecting the request to edit the first media processing style (e.g., and while displaying a representation of media displayed using the first media processing style). In some embodiments, a respective customized style for one media processing style is displayed and/or included in the set of available media processing styles even if the parameters of the respective customized style match another media processing style in the set of available media processing styles.
In some embodiments, the user interface that includes the representation of the media (e.g., 630) includes a second style mode user interface object (e.g., 602b) that, when selected, causes the computer system to display (e.g., switch to displaying) a representation to which a second selected media processing style is applied (e.g., a user interface object that, when selected, causes display of the style selection user interface and/or of a style portion of the user interface) (e.g., or causes display of a representation to which the second selected media processing style is not applied). In some embodiments, the computer system detects a respective input (e.g., 750a, 750d, 750g, 750k, 750n, 750r, 750t, 750w, and/or 750w1) (e.g., a movement input (e.g., a swipe gesture and/or a drag gesture)) (and/or, in some embodiments, a non-movement input (e.g., a tap input, a press-and-hold gesture, and/or a voice input)) (e.g., while the first control for adjusting the current value of the first parameter is displayed or while the second control for adjusting the current value of the second parameter is displayed). In some embodiments, in response to detecting the respective input, and in accordance with a determination that the respective input is directed to the first control for adjusting the current value of the first parameter, the computer system changes a first appearance of the second style mode user interface object (e.g., its color, its size, and/or a first boundary (e.g., a line (e.g., drawn in a clockwise and/or counterclockwise direction) that surrounds a portion (e.g., 0% to 100%) of the second style mode user interface object)) (e.g., as described above with respect to method 900) (e.g., displays the second style mode user interface object with an appearance with which it was not displayed before the respective input was detected) (e.g., as described above with respect to fig. 6L, 7C, and/or 7M). Changing the first appearance of the second style mode user interface object in response to detecting the respective input, and in accordance with a determination that the respective input is directed to the first control for adjusting the current value of the first parameter, provides visual feedback informing the user that the input has caused a change in how the media processing style is being applied to the representation of the media, which provides improved visual feedback. In some embodiments, the first appearance of the second style mode user interface object (e.g., 602b) changes gradually as the current value of the first parameter is modified (e.g., as described above with respect to fig. 6L, 7C, and/or 7M). In some embodiments, the respective input has a first magnitude (e.g., velocity and/or acceleration). In some embodiments, the appearance of the second style mode user interface object changes with a second magnitude that is based on the first magnitude. In some embodiments, the appearance of the second style mode user interface object changes and/or accelerates faster (or, alternatively, slower) as the respective input moves and/or accelerates faster (or, alternatively, slower).
Gradually changing the first appearance of the second style mode user interface object in response to detecting the respective input, and in accordance with a determination that the respective input is directed to the first control for adjusting the current value of the first parameter, provides visual feedback informing the user that the input has caused a change in how the media processing style is being applied to the representation of the media, while reducing the visual disturbance that can be caused when a user interface element changes abruptly, which provides improved visual feedback.
In some embodiments, as part of changing the first appearance of the second style mode user interface object, in accordance with a determination that the respective input is in a first direction (e.g., an up/down/right/left direction), the computer system updates a first visual aspect of the second style mode user interface object (e.g., a line around the perimeter of 602b and/or the color of 602b (e.g., the color, chroma, and/or tint of the boundary (e.g., a line around the boundary))) in a first manner (e.g., as discussed with respect to fig. 6F and 6L). In some embodiments, as part of changing the first appearance of the second style mode user interface object, in accordance with a determination that the respective input is in a second direction (e.g., an up/down/right/left direction) different from the first direction, the computer system updates the first visual aspect in a second manner different from the first manner (e.g., as discussed with respect to fig. 6F and 6L). In some embodiments, the first manner is the opposite of the second manner. In some embodiments, as part of updating the first visual aspect in the first manner, the computer system increases (or, alternatively, decreases) the length and/or size of the first visual aspect (e.g., in a clockwise direction), and, as part of updating the first visual aspect in the second manner, decreases (or, alternatively, increases) the length and/or size of the first visual aspect (e.g., in a counterclockwise direction). In some embodiments, as part of updating the first visual aspect in the first manner, the computer system adds more of a first color (e.g., red and/or black) to and/or removes more of a second color (e.g., blue and/or white) from the second style mode user interface object, where the second color is different from the first color. In some embodiments, as part of updating the first visual aspect in the second manner, the computer system adds more of the second color to and/or removes more of the first color from the second style mode user interface object. Updating the first visual aspect in a manner that is based on the direction of the respective input provides visual feedback informing the user of how the input is changing a parameter of the media processing style, which provides improved visual feedback.
In some embodiments, as part of changing the first appearance of the second style mode user interface object (e.g., 602b), the computer system displays the second style mode user interface object with a visual element that is an open shape with an opening (e.g., as shown at 602b in fig. 7C-7F). In some embodiments, in accordance with a determination that the respective input has set the second current value of the first parameter (e.g., 626a1 and/or 626a2) to the maximum value of the first parameter (e.g., 100, 150, 200, 256, and/or 300), the opening (e.g., gap) is on a first side of the open shape (e.g., a side relative to a midpoint, center, and/or origin of the shape) (e.g., with no opening on a second side of the open shape) (e.g., as described above with respect to fig. 7C-7F). In some embodiments, in accordance with a determination that the respective input has set the second current value of the first parameter to a minimum value (e.g., 100, 150, 200, 256, and/or 300) of the first parameter that is different from the maximum value of the first parameter, the opening is on a second side (e.g., a side relative to a midpoint, center, and/or origin of the shape) of the open shape that is different from the first side (e.g., the right side) (e.g., with no opening on the first side of the open shape) (e.g., as shown at 602b in fig. 7C-7F). Displaying the visual element as an open shape whose opening appears on different sides depending on whether the second current value is at the minimum or the maximum value of the first parameter provides visual feedback to the user regarding the direction in which the visual element will advance (e.g., clockwise and/or counterclockwise) before reaching a position that indicates the minimum or maximum value of the first parameter, which provides improved visual feedback.
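The open-shape behavior above can be modeled as a ring stroke whose sweep tracks the normalized parameter value, leaving a gap whose side depends on the sweep direction. A geometry-only Swift sketch; the symmetric range, fixed gap size, and angle convention are all assumptions:

```swift
import Foundation

/// Angles in radians, measured clockwise from the top of the ring.
struct RingStroke { var startAngle: Double; var endAngle: Double }

/// The stroke sweeps clockwise for values above the assumed zero default and
/// counterclockwise for values below it, always leaving a small opening. At
/// the maximum value the opening ends up just to one side of the top; at the
/// minimum it ends up on the opposite side.
func ringStroke(for value: Double,
                in range: ClosedRange<Double>,
                gap: Double = 0.35) -> RingStroke {
    let half = (range.upperBound - range.lowerBound) / 2
    let fraction = min(abs(value) / half, 1)
    let sweep = (2 * Double.pi - gap) * fraction
    return value >= 0
        ? RingStroke(startAngle: 0, endAngle: sweep)     // opening left of top at max
        : RingStroke(startAngle: -sweep, endAngle: 0)    // opening right of top at min
}

let atMax = ringStroke(for: 100, in: -100...100)   // gap on one side of the top
let atMin = ringStroke(for: -100, in: -100...100)  // gap on the other side
print(atMax, atMin)
```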
In some embodiments, as part of changing the appearance of the second style mode user interface object (e.g., 602b), in accordance with a determination that the value of the first parameter of the first media processing style is different from the value of the first parameter of the second media processing style, the computer system changes the display of a third visual aspect of the second style mode user interface object (e.g., the color of 602b and/or a line of 602b (e.g., the color, chroma, and/or tint of the boundary (e.g., a line around the boundary)) (e.g., as discussed above with respect to fig. 6F, 6L, 7C, 7D, and/or 7M)) without changing a second visual aspect of the second style mode user interface object.
In some embodiments, in response to detecting the respective input, and in accordance with a determination that the respective input (e.g., 750a, 750d, 750g, 750k, 750n, 750r, and/or 750t) is directed to the first control for adjusting the current value of the first parameter (e.g., 626a1c and/or 626a2c), the computer system changes a second appearance of the second style mode user interface object (e.g., its color and/or a line surrounding 602b) (e.g., its color, its size, and/or a first border (e.g., a line (e.g., drawn in a clockwise and/or counterclockwise direction) that surrounds a portion (e.g., 0% to 100%) of the second style mode user interface object)) (e.g., as described above with respect to method 900 and/or fig. 6F and 6L). In some embodiments, as part of changing the second appearance of the second style mode user interface object, the computer system changes a fourth visual aspect of the second style mode user interface object (e.g., a line (e.g., a border) displayed adjacent to and/or surrounding at least a portion of the second style mode user interface object, and/or the length of that line (e.g., drawn in a clockwise and/or counterclockwise direction)), or vice versa. Changing the display of a particular visual element of the second style mode user interface object based on whether the value of a particular parameter has changed provides visual feedback to the user regarding which parameters have been changed for the media processing style (and which have not), which provides improved visual feedback.
It is noted that the details of the process described above with respect to method 1000 (e.g., fig. 10A-10B) may also apply in a similar manner to the methods described herein. For example, method 1000 optionally includes one or more of the features of the various methods described above with reference to method 900. For example, method 900 may be used to select one or more media processing styles and method 1000 may be used to edit media selected using method 900. For the sake of brevity, these details are not repeated hereinafter.
The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications, thereby enabling others skilled in the art to best utilize the techniques and the various embodiments, with various modifications as are suited to the particular use contemplated.
While the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It should be understood that such variations and modifications are considered to be included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve the delivery to users of media processing styles or any other media editing tools that may be of interest to them. The present disclosure contemplates that, in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology can be used to the benefit of users. For example, the personal information data can be used to deliver media processing styles that are useful to the user. Accordingly, use of such personal information data enables users to have calculated control of the delivered media processing styles and/or of the media processing styles that are initially available to the user. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible by users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and for ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of providing media processing styles to users, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to share data associated with their customized media processing styles, including media that the users have captured on their personal devices. In yet another example, users can select to limit the length of time that captured media is retained, or entirely prohibit access to the captured media. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed, and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting the data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or by other methods.
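By way of illustration only, the de-identification approach described above can be sketched in a few lines of Swift; the record type, field names, and granularity choices below are assumptions made for the example, not part of the disclosed technique.

import Foundation

// Hypothetical usage record; the disclosure defines no such structure.
struct UsageRecord {
    var userIdentifier: String?  // direct identifier
    var dateOfBirth: String?     // specific identifier
    var streetAddress: String?   // address-level location
    var city: String             // city-level location is retained
    var styleName: String        // e.g., a selected media processing style
}

// Remove specific identifiers and keep location only at the city level.
func deIdentify(_ record: UsageRecord) -> UsageRecord {
    var r = record
    r.userIdentifier = nil
    r.dateOfBirth = nil
    r.streetAddress = nil
    return r
}

// Aggregate across users: report per-style counts rather than raw rows.
func styleCounts(_ records: [UsageRecord]) -> [String: Int] {
    records.reduce(into: [:]) { counts, r in
        counts[r.styleName, default: 0] += 1
    }
}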
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, media processing styles can be generated and made available based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available for editing and/or capturing media, or publicly available information.

Claims (103)

1. A method, comprising:
at a computer system in communication with a display generation component and one or more input devices:
displaying, via the display generation component, a style selection user interface comprising a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style applied to visual content of the media;
detecting, via the one or more input devices, an input directed to the representation while displaying a first portion of the representation and a second portion of the representation using the first media processing style; and
in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying a first portion of the representation via the display generation component using a second media processing style while continuing to display a second portion of the representation using the first media processing style, comprising:
in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation located between the first portion of the representation and the second portion of the representation using the first media processing style; and
after displaying a first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
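By way of a hedged illustration of the geometry recited in claim 1, the following Swift sketch models the three recited portions as spans of a normalized preview width and derives each span's style from the input magnitude; the layout fractions and all names are assumptions for the example, not the claimed implementation.

enum Style { case first, second }

// Assumed layout: the third (middle) portion lies between the first and second portions.
let firstPortion  = 0.00..<0.40
let thirdPortion  = 0.40..<0.60
let secondPortion = 0.60...1.00

// A portion is drawn with the second style once the boundary driven by the
// input magnitude has swept past its right edge; otherwise it keeps the first style.
func style(forPortionEndingAt rightEdge: Double, inputMagnitude: Double) -> Style {
    rightEdge <= inputMagnitude ? .second : .first
}

// First portion of the input (first, smaller magnitude): only the first portion
// shows the second style; the third and second portions keep the first style.
print(style(forPortionEndingAt: firstPortion.upperBound, inputMagnitude: 0.45))  // second
print(style(forPortionEndingAt: thirdPortion.upperBound, inputMagnitude: 0.45))  // first
// Second portion of the input (second, greater magnitude): the first and third
// portions show the second style while the second portion keeps the first style.
print(style(forPortionEndingAt: thirdPortion.upperBound, inputMagnitude: 0.80))  // second
print(style(forPortionEndingAt: secondPortion.upperBound, inputMagnitude: 0.80)) // first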
2. The method of claim 1, wherein the first portion of the representation and the second portion of the representation are not displayed using the second media processing style before the input directed to the representation is detected.
3. The method of any of claims 1-2, wherein the first media processing style is different from the second media processing style.
4. The method of any of claims 1 to 3, wherein the second portion of the representation and the third portion of the representation are not displayed using the second media processing style in response to detecting the first portion of the input directed to the representation.
5. The method of any of claims 1-4, wherein an amount of the representation to which the second media processing style is applied is based on an amount of movement of the input directed to the representation.
6. The method of any one of claims 1 to 5, further comprising:
in response to detecting an end of the input directed to the representation:
in accordance with a determination that more than a predetermined portion of the representation is displayed using the first media processing style when the end of the input directed to the representation is detected, displaying a first portion of the representation using the first media processing style; and
in accordance with a determination that less than a predetermined portion of the representation is displayed using the first media processing style when the end of the input directed to the representation is detected, displaying a first portion of the representation using the second media processing style.
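A minimal sketch of this release behavior, assuming a hypothetical 50% threshold for the predetermined portion; the names and the threshold are illustrative only.

enum StyleAfterRelease { case first, second }

// If more than the predetermined portion still shows the first style when the
// input ends, snap back to the first style; otherwise commit to the second.
func styleAfterRelease(fractionShowingFirstStyle: Double,
                       predeterminedPortion: Double = 0.5) -> StyleAfterRelease {
    fractionShowingFirstStyle > predeterminedPortion ? .first : .second
}

print(styleAfterRelease(fractionShowingFirstStyle: 0.7))  // first: reverts
print(styleAfterRelease(fractionShowingFirstStyle: 0.2))  // second: commits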
7. The method of any one of claims 1 to 6, further comprising:
in response to detecting the input directed to the representation, and in accordance with a determination that the input is in a second direction different from the first direction, a second portion of the representation is displayed using a third media processing style while continuing to display the first portion of the representation using the first media processing style, wherein the third media processing style is different from the first media processing style and the second media processing style.
8. The method of any of claims 1 to 7, further comprising:
in response to detecting the input directed to the representation:
in accordance with a determination that the input is in the first direction, displaying a visual element corresponding to a fourth media processing style; and
in accordance with a determination that the input is in a third direction different from the first direction, a visual element corresponding to a fifth media processing style different from the fourth media processing style is displayed.
9. The method of any of claims 1-8, wherein, prior to detecting the input directed to the representation, the style selection user interface includes a visual element corresponding to the second media processing style and a visual element corresponding to a sixth media processing style, the method further comprising:
in response to detecting the input directed to the representation:
in accordance with a determination that the input is in the first direction, ceasing to display the visual element corresponding to the second media processing style without using the second media processing style to display the representation; and
in accordance with a determination that the input is in a fourth direction different from the first direction, ceasing to display the visual element corresponding to the sixth media processing style without displaying the representation using the sixth media processing style.
10. The method of any of claims 1 to 9, wherein the input directed to the representation is not detected at an indication of the second media processing style.
11. The method of any of claims 1-10, wherein the representation of the media is a representation of previously captured media.
12. The method of claim 11, further comprising:
upon detecting the input directed to the representation, displaying an option to use a seventh media processing style for media captured in response to a future media capture request;
detecting, while displaying the option to use the seventh media processing style, an input directed to the option to use the seventh media processing style;
in response to detecting the input directed to the option to use the seventh media processing style, configuring the computer system to use the seventh media processing style; and
detecting a request to capture media when the computer system is configured to use the seventh media processing style;
capturing the respective media in response to detecting the request to capture media when the computer system is configured to use the seventh media processing style; and
after capturing the respective media, a first user interface is displayed that includes a representation of the respective media, wherein the representation of the respective media is displayed in the first user interface using the seventh media processing style.
13. The method of any one of claims 1 to 10, wherein:
the computer system is in communication with one or more cameras, including a first camera; and
the representation of the media includes a representation of at least a portion of a current field of view of at least the first camera.
14. The method of claim 13, further comprising:
upon detecting the input directed to the representation, displaying an option to use an eighth media processing style for media captured in response to a future media capture request;
detecting, while displaying the option to use the eighth media processing style, an input directed to the option to use the eighth media processing style;
in response to detecting the input directed to the option to use the eighth media processing style, configuring the computer system to use the eighth media processing style;
detecting a second request to capture media when the computer system is configured to use the eighth media processing style;
capturing a second corresponding media in response to detecting the second request to capture media when the computer system is configured to use the eighth media processing style; and
after capturing the second corresponding media, a second user interface is displayed that includes a representation of the second corresponding media, wherein the representation of the second corresponding media is displayed in the second user interface using the eighth media processing style.
15. The method of any of claims 1-14, wherein applying a respective media processing style to captured media comprises applying a first set of operations to the captured media, and applying the respective media processing style to a live preview of a portion of a field of view of one or more cameras comprises applying a second set of operations to the live preview.
16. The method of any of claims 1 to 15, wherein, when the first media processing style is used to display a first portion of the representation and a second portion of the representation, an identification corresponding to the first media processing style is displayed.
17. The method of any of claims 1-16, wherein displaying a first portion of the representation using the second media processing style while continuing to display a second portion of the representation using the first media processing style comprises displaying a separation between the first portion of the representation and the second portion of the representation.
18. The method of any of claims 1-17, wherein the input directed to the representation is a movement input.
19. The method of any of claims 1 to 18, wherein the computer system is in a first capture mode, the method further comprising:
detecting input directed to the style selection user interface while the style selection user interface is displayed and while the computer system is in the first capture mode; and
in response to detecting the input directed to the style selection user interface, the computer system is transitioned from being in the first capture mode to being in a different capture mode.
20. The method of claim 19, further comprising:
detecting a request to capture media after transitioning the computer system from being in the first capture mode to being in the different capture mode; and
in response to detecting the request to capture media, capturing media in the different capture mode based on the currently selected media processing style, comprising:
in accordance with a determination that the currently selected media processing style is the first media processing style, capturing the media in the different capture mode with the first media processing style; and
in accordance with a determination that the currently selected media processing style is the second media processing style, the media is captured in the different capture mode using the second media processing style.
21. The method of any of claims 1 to 20, wherein the computer system is in a third capture mode, the method further comprising:
upon detecting the input directed to the representation, detecting a request to display a second user interface comprising a second representation of media;
in response to detecting the request to display the second user interface comprising the second representation of media, displaying the second user interface comprising the second representation of media;
detecting an input directed to the second representation while the second user interface is displayed; and
in response to detecting the input directed to the second representation, and in accordance with a determination that the computer system is not in a first media processing style selection mode, the computer system is transitioned from being in the third capture mode to being in a fourth capture mode.
22. The method of claim 21, further comprising:
in response to detecting the input directed to the second representation, and in accordance with a determination that the computer system is in the first media processing style selection mode, the computer system is maintained in the third capture mode.
23. The method of any of claims 1-22, wherein, prior to detecting the input directed to the representation, the style selection user interface comprises a plurality of selectable user interface objects for the first media processing style.
24. The method of claim 23, further comprising:
in response to detecting the input directed to the representation, and in accordance with a determination that the input directed to the representation is in the first direction:
in accordance with a determination that the second media processing style is being applied to a fourth portion of the representation of the media, displaying a plurality of selectable user interface objects for the second media processing style and ceasing to display the plurality of selectable user interface objects for the first media processing style; and
in accordance with a determination that the second media processing style is not applied to a fourth portion of the representation of the media, continuing to display the plurality of selectable user interface objects for the first media processing style without displaying the plurality of selectable user interface objects for the second media processing style.
25. The method of any of claims 23-24, wherein the plurality of selectable user interface objects for the first media processing style are displayed at one or more locations on the representation of the media.
26. The method of any one of claims 1 to 25, further comprising:
detecting a first request to capture media when the first media processing style is selected for use;
capturing media to which the first media processing style is applied in response to detecting the first request to capture media;
detecting a second request to capture media after capturing the media to which the first media processing style applies and upon selection of use of the second media processing style; and
in response to detecting the second request to capture media, capturing media to which the second media processing style is applied.
27. The method of any one of claims 1 to 26, further comprising:
in response to detecting the input directed to the representation, and in accordance with a determination that an end of the input has been detected:
in accordance with a determination that the input directed to the representation meets one or more movement criteria, displaying a first portion of the representation and a second portion of the representation using the second media processing style; and
in accordance with a determination that the input directed to the representation does not meet one or more movement criteria, a first portion of the representation and a second portion of the representation are displayed using the first media processing style.
28. The method of any one of claims 1 to 27, further comprising:
after detecting the input directed to the representation, displaying a first portion of the representation and a second portion of the representation using the second media processing style;
detecting a second input directed to the representation while the representation is displayed and a second portion of the representation is displayed using the second media processing style; and
in response to detecting the second input directed to the representation, in accordance with a determination that the second input directed to the representation is in the first direction, a first portion of the representation is displayed using a ninth media processing style while continuing to display a second portion of the representation using the second media processing style.
29. The method of claim 28, wherein displaying the first portion of the representation using the ninth media processing style while continuing to display the second portion of the representation using the second media processing style comprises:
in response to detecting the first portion of the second input directed to the representation, the first portion of the representation is displayed using the ninth media processing style while the second portion of the representation and the third portion of the representation are displayed using the second media processing style.
30. The method of any of claims 28 to 29, further comprising:
in response to detecting the second input directed to the representation, and in accordance with a determination that an end of the second input has been detected:
in accordance with a determination that the second input directed to the representation meets one or more movement criteria, displaying a first portion of the representation and a second portion of the representation using the ninth media processing style; and
in accordance with a determination that the second input directed to the representation does not meet one or more movement criteria, a first portion of the representation and a second portion of the representation are displayed using the second media processing style.
31. The method of any one of claims 1 to 30, further comprising:
before displaying the style selection user interface comprising the representation of the media displayed using the first media processing style, displaying a user interface comprising a user interface object for displaying the style selection user interface, the user interface object being displayed at a first respective location in the user interface comprising a fourth representation of the media;
detecting, while displaying the user interface object for displaying the style selection user interface, an input directed to the user interface object for displaying the style selection user interface; and
in response to detecting the input directed to the user interface object for displaying the style selection user interface, the style selection user interface is displayed.
32. The method of any of claims 1 to 31, wherein the style selection user interface comprises a user interface object for controlling settings at a second corresponding location in the style selection user interface, the method further comprising:
detecting an input directed to the second corresponding location in the style selection user interface while displaying the style selection user interface and the user interface object for controlling the setting at the second corresponding location; and
in response to detecting the input directed to the second corresponding location in the style selection user interface, ceasing to display the style selection user interface.
33. The method of any one of claims 1 to 32, further comprising: after displaying the style selection user interface:
receiving a request to display a camera user interface; and
in response to receiving the request to display the camera user interface, displaying a camera user interface including simultaneously displaying in the camera user interface:
a representation of the field of view of one or more cameras; and
a respective user interface object that, when selected, causes the style selection user interface to be displayed, comprising:
in accordance with a determination that the first media processing style is currently selected as a media processing style, displaying the corresponding user interface object with a first appearance; and
in accordance with a determination that the second media processing style is currently selected as a media processing style, the respective user interface object is displayed in a second appearance that is different from the first appearance.
34. The method of any of claims 1-33, wherein the user interface comprises a first user interface object displayed concurrently with a first portion of the representation and a second portion of the representation displayed using the first media processing style, the method further comprising:
detecting, via the one or more input devices, input directed to the first user interface object while the first user interface object is displayed concurrently with the first portion of the representation and the second portion of the representation being displayed using the first media processing style; and
in response to detecting the input directed to the first user interface object, a first portion of the representation and a second portion of the representation are displayed without using the first media processing style.
35. The method of any of claims 1-34, wherein the style selection user interface includes a selectable user interface object for capturing media, the method further comprising:
detecting input directed to the selectable user interface object for capturing media while the representation of the media is displayed using the first media processing style and the selectable user interface object for capturing media is displayed; and
in response to detecting the input directed to the selectable user interface object for capturing media, capturing media to which the first media processing style is applied.
36. The method of any of claims 1-35, wherein displaying the first portion of the representation using the first media processing style includes differently applying the first media processing style to one or more objects in the first portion of the representation and to a subset of the first portion that does not include the one or more objects.
37. The method of any of claims 1-36, wherein the first media processing style is applied to the representation of the media based on one or more parameters selected from contrast, sharpness, color temperature, and combinations thereof.
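Claim 37 ties a style to parameters such as contrast, sharpness, and color temperature; one hypothetical way to model this in Swift (the struct and value ranges are assumptions, not the disclosed design):

// A media processing style as a bundle of rendering parameters.
struct MediaProcessingStyle {
    var name: String
    var contrast: Double          // assumed range -1.0 ... 1.0
    var sharpness: Double         // assumed range -1.0 ... 1.0
    var colorTemperature: Double  // assumed warm/cool bias, -1.0 ... 1.0
}

let standard = MediaProcessingStyle(name: "Standard",
                                    contrast: 0.0, sharpness: 0.0, colorTemperature: 0.0)
let warm = MediaProcessingStyle(name: "Warm",
                                contrast: 0.2, sharpness: 0.0, colorTemperature: 0.5)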
38. The method of any one of claims 1 to 37, further comprising:
detecting an end of the input directed to the representation while displaying a first portion of the representation and a third portion of the representation using the second media processing style while displaying a second portion of the representation using the first media processing style; and
in response to detecting the end of the input directed to the representation, ceasing to display a second portion of the representation using the first media processing style and reducing visual saliency of a subset of the second portion of the representation.
39. The method of claim 38, further comprising:
detecting a third input directed to the representation while displaying the subset of the second portion of the representation with reduced visual saliency; and
in response to detecting the third input directed to the representation, the visual saliency of the subset of the second portion of the representation is increased.
40. The method of any of claims 1-39, wherein displaying the representation of the media comprises:
in accordance with a determination that the representation is to be displayed using a tenth media processing style in response to detecting a fourth input directed to the representation of the media, a fourth portion of the representation is to be displayed in a first visual appearance; and
in accordance with a determination that the representation is to be displayed without using the tenth media processing style in response to detecting a fourth input directed to the representation of the media, a fourth portion of the representation is to be displayed in a second visual appearance that is different from the first visual appearance.
41. The method of any one of claims 1 to 40, wherein a sixth portion of the representation of the media is displayed using the first media processing style before the input directed to the representation is detected and while the first portion of the representation and the second portion of the representation are displayed using the first media processing style.
42. The method of any one of claims 1 to 41, wherein a seventh portion of the representation of the media is not displayed using the first media processing style before the input directed to the representation is detected and while the first portion of the representation and the second portion of the representation are displayed using the first media processing style.
43. The method of claim 42, further comprising:
in response to detecting the input directed to the representation, displaying an animation of a seventh portion of the representation of the media transitioning from not being displayed using the first media processing style to being displayed using the first media processing style.
44. The method of any one of claims 1 to 43, further comprising:
displaying, prior to displaying the style selection user interface, a user interface object for enabling a second media processing style selection mode, wherein a first portion of the representation and a second portion of the representation are displayed using the first media processing style applied to visual content of the media;
detecting, while displaying the user interface object for enabling the second media processing style selection mode, an input directed to the user interface object for enabling the second media processing style selection mode; and
in response to detecting the input directed to the user interface object for enabling the second media processing style selection mode, a respective user interface is displayed, including simultaneously displaying a representation of previously captured media to which the first media processing style was applied and a representation of previously captured media to which the second media processing style was applied.
45. The method of any of claims 1-44, wherein the style selection user interface comprises a first style mode user interface object that, when selected, causes the style selection user interface to be displayed, and wherein the first style mode user interface object is displayed concurrently with one or more camera settings user interface objects.
46. The method of any of claims 1-45, wherein the style selection user interface comprises a second style mode user interface object that, when selected, causes the style selection user interface to be displayed, the method further comprising:
displaying the second style mode user interface object in a third appearance while displaying a first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation using the first media processing style;
after displaying a first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, changing the second style mode user interface object from being displayed in the third appearance to being displayed in a fourth appearance different from the third appearance.
47. The method of claim 46, wherein changing the second style mode user interface object from being displayed in the third appearance to being displayed in the fourth appearance comprises:
in accordance with a determination that a value of a first parameter of the first media processing style is different from a value of the first parameter of the second media processing style, changing a display of a first visual aspect of the second style mode user interface object; and
in accordance with a determination that a value of a second parameter of the first media processing style is different from a value of the second parameter of the second media processing style, wherein the first parameter is different from the second parameter, a display of a second visual aspect of the second style mode user interface object is changed, wherein the second visual aspect is different from the first visual aspect.
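Claims 46-47 can be read as mapping each differing parameter to its own visual aspect of the style mode user interface object. A hedged sketch under assumed parameter and aspect names:

struct StyleParameters: Equatable { var tone: Double; var warmth: Double }  // hypothetical
struct StyleModeAppearance { var firstAspectChanged: Bool; var secondAspectChanged: Bool }

// Each visual aspect changes only when the corresponding parameter value
// differs between the outgoing and incoming styles.
func appearanceChange(from a: StyleParameters, to b: StyleParameters) -> StyleModeAppearance {
    StyleModeAppearance(firstAspectChanged: a.tone != b.tone,
                        secondAspectChanged: a.warmth != b.warmth)
}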
48. The method of any of claims 46 to 47, wherein the computer system is configured to store media in a first file format, the method further comprising:
detecting a request to configure the computer system to capture and store media in a second file format different from the first file format when the computer system is configured to capture and store media in the first file format and when the second style mode user interface object is displayed in an active state; and
in response to detecting the request to configure the computer system to capture and store media in the second file format, ceasing to display the second style mode user interface object in the active state.
49. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 1-48.
50. A computer system configured to communicate with a display generation component and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-48.
51. A computer system configured to communicate with a display generation component and one or more input devices, comprising:
means for performing the method of any one of claims 1 to 48.
52. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 1-48.
53. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
displaying, via the display generation component, a style selection user interface comprising a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style applied to visual content of the media;
detecting, via the one or more input devices, an input directed to the representation while displaying a first portion of the representation and a second portion of the representation using the first media processing style; and
in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying a first portion of the representation via the display generation component using a second media processing style while continuing to display a second portion of the representation using the first media processing style, comprising:
in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation located between the first portion of the representation and the second portion of the representation using the first media processing style; and
after displaying a first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
54. A computer system configured to communicate with a display generation component and one or more input devices, comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying, via the display generation component, a style selection user interface comprising a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style applied to visual content of the media;
detecting, via the one or more input devices, an input directed to the representation while displaying a first portion of the representation and a second portion of the representation using the first media processing style; and
in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying a first portion of the representation via the display generation component using a second media processing style while continuing to display a second portion of the representation using the first media processing style, comprising:
in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation located between the first portion of the representation and the second portion of the representation using the first media processing style; and
after displaying a first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
55. A computer system configured to communicate with a display generation component and one or more input devices, comprising:
means for displaying, via the display generation component, a style selection user interface comprising a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style applied to visual content of the media;
means for detecting, via the one or more input devices, an input directed to the representation while displaying a first portion of the representation and a second portion of the representation using the first media processing style; and
means for, in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying a first portion of the representation via the display generation component using a second media processing style while continuing to display a second portion of the representation using the first media processing style, comprising:
means for, in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation located between the first portion of the representation and the second portion of the representation using the first media processing style; and
means for, after displaying a first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
56. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
displaying, via the display generation component, a style selection user interface comprising a representation of media, wherein a first portion of the representation and a second portion of the representation are displayed using a first media processing style applied to visual content of the media;
detecting, via the one or more input devices, an input directed to the representation while displaying a first portion of the representation and a second portion of the representation using the first media processing style; and
in response to detecting the input directed to the representation and in accordance with a determination that the input is in a first direction, displaying a first portion of the representation via the display generation component using a second media processing style while continuing to display a second portion of the representation using the first media processing style, comprising:
in response to detecting a first portion of the input directed to the representation, wherein the first portion of the input has a first input magnitude, displaying the first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation located between the first portion of the representation and the second portion of the representation using the first media processing style; and
after displaying a first portion of the representation using the second media processing style while displaying a second portion of the representation and a third portion of the representation using the first media processing style, and in response to detecting a second portion of the input directed to the representation, wherein the second portion of the input has a second input magnitude that is greater than the first input magnitude, displaying the first portion of the representation and the third portion of the representation using the second media processing style while displaying the second portion of the representation using the first media processing style.
57. A method, comprising:
at a computer system in communication with a display generation component and one or more input devices:
displaying, via the display generation component, a user interface comprising a representation of media, wherein the representation of media is displayed using a first media processing style applied to visual content of the media;
while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generation component, a plurality of selectable user interface objects for the first media processing style, comprising:
a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and
a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter;
while displaying the plurality of selectable user interface objects for the first media processing style, detecting input directed to the plurality of selectable user interface objects for the first media processing style via the one or more input devices; and
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style:
in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generation component, a first control for adjusting a current value of the first parameter; and
in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, a second control for adjusting a current value of the second parameter is displayed via the display generation component.
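As a non-authoritative sketch of the branching recited in claim 57, the tapped parameter object determines which adjustment control is presented; the enum cases below are assumptions for the example.

enum TappedObject { case firstParameterObject, secondParameterObject }
enum PresentedControl { case firstControl, secondControl }

// The input's target selects the control that is displayed.
func control(for tapped: TappedObject) -> PresentedControl {
    switch tapped {
    case .firstParameterObject:  return .firstControl   // adjusts the first parameter
    case .secondParameterObject: return .secondControl  // adjusts the second parameter
    }
}

print(control(for: .firstParameterObject))  // firstControl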
58. The method of claim 57, wherein:
displaying the first control includes displaying a second representation of a current value of the first parameter of the first media processing style; and
displaying the second control includes displaying a second representation of a current value of the second parameter of the first media processing style.
59. The method of any one of claims 57 to 58, further comprising:
while displaying the representation of the media using the first media processing style and displaying the plurality of selectable user interface objects for the first media processing style, detecting a request to display the representation of the media using a second media processing style applied to visual content of the media;
in response to detecting the request to display the representation of the media using the second media processing style applied to visual content of the media, ceasing to display the plurality of selectable user interface objects for the first media processing style.
60. The method of any of claims 57-59, wherein displaying, via the display generation component, the first control for adjusting a current value of the first parameter includes expanding the first selectable user interface object to edit the first parameter of the first media processing style.
61. The method of any one of claims 57 to 60, further comprising:
detecting an end of the input directed to the plurality of selectable user interface objects for the first media processing style while the first control for adjusting the current value of the first parameter is displayed via the display generation component; and
in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, a size of the first control for adjusting a current value of the first parameter is reduced.
62. The method of any of claims 57-61, wherein prior to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, a current value of the first parameter is a first value, the method further comprising:
detecting an end of the input directed to the plurality of selectable user interface objects for the first media processing style while the first control for adjusting the current value of the first parameter is displayed via the display generation component; and
in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, the representation of a current value of the first parameter is displayed, wherein the current value is a second value different from the first value.
63. The method of any one of claims 57 to 62, wherein:
the first selectable user interface object for editing the first parameter is displayed with a first representation of a first range of values of the first parameter, the first range of values having a first distance between a first point in the first representation of the first range of values and a second point in the first representation of the first range of values, the first point representing a first value and the second point representing a second value; and
displaying the first control includes displaying a second representation of a range of values having a second distance between a first point in the second representation of the range of values and a second point in the second representation of the range of values, the first point representing the first value and the second point representing the second value, the second distance being greater than the first distance.
64. The method of any of claims 57-63, wherein the first control is displayed with a third representation of a third range of values of the first parameter, the third range of values having a third distance between a first point in the third representation of the third range of values and a second point in the third representation of the third range of values, the first point representing a third value, the second point representing a fourth value, the method further comprising:
detecting an end of the input directed to the plurality of selectable user interface objects for the first media processing style while displaying the first control and the third representation of the third range of values of the first parameter via the display generation component; and
in response to detecting the end of the input directed to the plurality of selectable user interface objects for the first media processing style, displaying the first selectable user interface object for editing the first parameter with a fifth representation of the range of values, the fifth representation having a fourth distance between a first point in the fifth representation of the range of values and a second point in the fifth representation of the range of values, the first point representing the third value, the second point representing the fourth value, the fourth distance being less than the third distance.
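The distance language of claims 63-64 amounts to a statement about scale: the expanded control renders the same two parameter values farther apart than the compact object does, and the distance shrinks again when the input ends. A sketch with hypothetical widths:

// Distance, in points, between the rendered positions of two values
// on a linear scale of the given rendered width.
func distance(between v1: Double, and v2: Double,
              valueSpan: Double, renderedWidth: Double) -> Double {
    abs(v2 - v1) / valueSpan * renderedWidth
}

let compact  = distance(between: 20, and: 60, valueSpan: 200, renderedWidth: 120)
let expanded = distance(between: 20, and: 60, valueSpan: 200, renderedWidth: 320)
print(compact, expanded)  // 24.0 64.0: the expanded control spreads the values farther apart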
65. The method of any one of claims 57 to 64, further comprising:
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that the input is directed to the first selectable user interface object to edit the first parameter of the first media processing style, the second control for adjusting a current value of the second parameter is moved from a first location on a user interface to a second location on the user interface.
66. The method of any one of claims 57 to 64, further comprising:
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that the input is directed to the first selectable user interface object to edit the first parameter of the first media processing style, ceasing to display the second control for adjusting a current value of the second parameter.
67. The method of any of claims 57-66, wherein, while the representation of the media is displayed using the first media processing style and before the input directed to the plurality of selectable user interface objects for the first media processing style is detected, a first identification corresponding to the first media processing style is displayed, the method further comprising:
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style:
in accordance with a determination that a current value of the first parameter has changed to a value different from a default value of the first parameter of the first media processing style, a second identification corresponding to a third media processing style is displayed, wherein the second identification is different from the first identification.
68. The method of any of claims 57-67, wherein the user interface includes a selectable user interface object for resetting one or more parameters of the first media processing style, further comprising:
detecting input directed to the selectable user interface object for resetting one or more parameters of the first media processing style while the selectable user interface object for resetting the one or more parameters of the first media processing style is displayed; and
in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style:
displaying the representation of the current value of the first parameter of the first media processing style as a second default value of the first parameter of the first media processing style; and
the representation of the current value of the second parameter of the first media processing style is displayed as a second default value of the second parameter of the first media processing style.
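A minimal sketch of the reset behavior of claims 68-69, assuming hypothetical parameter names and default values:

// Hypothetical editable style with two parameters and their defaults.
struct EditableStyleState {
    var tone: Double
    var warmth: Double
    static let secondDefaults = EditableStyleState(tone: 0.0, warmth: 0.0)
}

// Resetting returns both displayed parameter values to their defaults
// (claim 69 additionally animates this change).
func reset(_ state: inout EditableStyleState) {
    state = .secondDefaults
}

var edited = EditableStyleState(tone: 0.4, warmth: -0.2)
reset(&edited)
print(edited.tone, edited.warmth)  // 0.0 0.0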
69. The method of claim 68, further comprising:
in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, displaying an animation of a current value of the first parameter of the first media processing style changing to the second default value of the first parameter of the first media processing style.
70. The method of claim 68, further comprising:
in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, a prompt confirming resetting of the one or more parameters of the first media processing style is displayed.
71. The method of claim 70, wherein the prompt is displayed with an indication of how at least one of the one or more parameters of the first media processing style will be reset.
72. The method of any one of claims 68 to 71, further comprising:
before detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, displaying a first style mode user interface object that, when selected, causes the representation to be displayed with a first selected media processing style applied or without the first selected media processing style applied, the first style mode user interface object being displayed with a first appearance that is based on a current value of the first parameter of the first media processing style; and
in response to detecting the input directed to the selectable user interface object for resetting the one or more parameters of the first media processing style, displaying an animation of the first style mode user interface object transitioning from being displayed with the first appearance to being displayed with a second appearance, the first appearance being based on a current value of the first parameter of the first media processing style, the second appearance being based on the second default value of the first parameter of the first media processing style.
73. The method of any one of claims 57 to 72, further comprising:
while the first control for adjusting the current value of the first parameter is displayed, and in response to detecting movement of an input directed to the first control, changing the current value of the first parameter from a third value of the first parameter to a fourth value of the first parameter.
74. The method of claim 73, further comprising:
while displaying the first control for adjusting a current value of the first parameter, and in response to detecting the movement of the input directed to the first control, a second representation of the media is displayed using a modified first media processing style, wherein the second representation of the media using the modified first media processing style is different from the representation of the media using the first media processing style.
75. The method of claim 74, further comprising:
detecting a first request to capture media while displaying the representation of the media using the first media processing style;
capturing first media in response to detecting the first request to capture media;
detecting a second request to capture media while displaying the second representation of the media using the modified first media processing style;
capturing second media in response to detecting the second request to capture media; and
after capturing the first media and the second media:
displaying a representation of the first media having the first media processing style; and
a representation of the second media with the modified first media processing style is displayed.
76. The method of any of claims 57-75, wherein the user interface includes a second selectable user interface object for capturing media, the method further comprising:
detecting input directed to the second selectable user interface object for capturing media while the representation of the media is displayed using the first media processing style and the second selectable user interface object for capturing media is displayed; and
in response to detecting the input directed to the second selectable user interface object for capturing media, capturing third media to which the first media processing style is applied.
77. The method of any of claims 57-76, wherein displaying the representation using the first media processing style includes applying the first media processing style differently to one or more objects in a first portion of the representation and to a second portion of the representation that does not include the one or more objects.
78. The method of any one of claims 57 to 77, further comprising:
while displaying the plurality of selectable user interface objects for the first media processing style, detecting a first input directed to the representation of the media via the one or more input devices; and
in response to detecting the first input directed to the representation of the media:
displaying a representation of a current value of a first parameter of a fourth media processing style and ceasing to display the representation of the current value of the first parameter of the first media processing style.
79. The method of claim 78, wherein displaying the representation of the current value of the first parameter of the fourth media processing style includes displaying an animation that changes the representation of the current value of the first parameter of the first media processing style to the representation of the current value of the first parameter of the fourth media processing style.
80. The method of any one of claims 57 to 79, further comprising:
while displaying the plurality of selectable user interface objects for the first media processing style, detecting, via the one or more input devices, a second input directed to the representation of the media; and
in response to detecting the second input directed to the representation of the media, displaying a portion of the representation of the media using a fifth media processing style; and
while displaying the portion of the representation of the media using the fifth media processing style and in accordance with a determination that the portion of the representation of the media using the fifth media processing style is greater than a threshold amount of the representation, displaying a representation of a current value of a first parameter of the fifth media processing style and ceasing to display the representation of the current value of the first parameter of the first media processing style.
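Claims 78-80 recite paging between styles by dragging on the representation itself, with the parameter readout switching only once the incoming style covers more than a threshold amount of the preview. A sketch of that threshold rule, with an assumed 0.5 cutoff and invented style names:

```swift
import Foundation

// While a drag reveals an incoming style over part of the preview, the
// parameter readout switches only after the revealed portion exceeds a
// threshold amount of the representation. The 0.5 threshold is assumed.
func styleForParameterReadout(styles: [String],
                              currentIndex: Int,
                              revealedFraction: Double,
                              threshold: Double = 0.5) -> String {
    let incomingIndex = min(currentIndex + 1, styles.count - 1)
    return revealedFraction > threshold ? styles[incomingIndex] : styles[currentIndex]
}

let styles = ["Standard", "Vibrant", "Cool"]
print(styleForParameterReadout(styles: styles, currentIndex: 0, revealedFraction: 0.3))  // Standard
print(styleForParameterReadout(styles: styles, currentIndex: 0, revealedFraction: 0.7))  // Vibrant
```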
81. The method of any of claims 57-80, further comprising:
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that at least one current value of one or more parameters of the first media processing style is different from one or more default values of the one or more parameters of the first media processing style, adding a first custom media processing style that is different from the first media processing style to a set of available media processing styles.
82. The method of claim 81, further comprising:
detecting a first request to change one or more parameters of the first custom media processing style when the set of available media processing styles includes the first custom media processing style; and
in response to detecting the first request to change the one or more parameters of the first custom media processing style, and in accordance with a determination that the first custom media processing style will be the same as one or more other available media processing styles after the first request is fulfilled, removing the first custom media processing style from the set of available media processing styles.
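Claims 81 and 82 recite bookkeeping over the set of available styles: an edit that departs from the defaults adds a custom style, and an edit that makes the custom style identical to another available style removes it as redundant (a plain edit simply updates it, per claim 84 below). A hedged sketch of that logic, with assumed Style fields standing in for the claimed parameters:

```swift
import Foundation

// The Style fields are assumptions standing in for the patent's
// "one or more parameters".
struct Style: Equatable {
    var tone: Double
    var warmth: Double
}

// Commit an edit: append a new custom style, update an existing one, or
// remove a custom style that has become identical to another style.
func commitEdit(_ edited: Style, customAt index: Int?, in styles: inout [Style]) {
    let others = styles.enumerated().filter { $0.offset != index }.map(\.element)
    if let i = index, others.contains(edited) {
        styles.remove(at: i)       // redundant custom style is removed
    } else if let i = index {
        styles[i] = edited         // plain parameter update
    } else if !others.contains(edited) {
        styles.append(edited)      // values differ from defaults: add custom style
    }
}

var available = [Style(tone: 0, warmth: 0)]            // the built-in default
commitEdit(Style(tone: 0.4, warmth: 0.2), customAt: nil, in: &available)
print(available.count)  // 2: a custom style was added
commitEdit(Style(tone: 0, warmth: 0), customAt: 1, in: &available)
print(available.count)  // 1: the custom style matched the default and was removed
```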
83. The method of any of claims 81-82, further comprising:
after adding the first custom media processing style to the set of available media processing styles, displaying a respective user interface comprising a respective representation of media displayed using the respective media processing style;
detecting, while displaying the respective user interface including the respective representation of media displayed using the respective media processing style and when the set of available media processing styles includes the first custom media processing style, a request to display the respective representation of media using a next available media processing style from the set of available media processing styles; and
in response to detecting the request to display the respective representation of media using the next available media processing style while the respective representation of media is displayed using the respective media processing style:
in accordance with a determination that the respective media processing style is the first media processing style, displaying at least a portion of the respective representation of the media using the first custom media processing style; and
in accordance with a determination that the respective media processing style is not the first media processing style, forgoing displaying the at least a portion of the respective representation of the media using the first custom media processing style.
84. The method of any of claims 81-83, further comprising:
detecting a second request to change one or more parameters of the first custom media processing style when the set of available media processing styles includes the first custom media processing style; and
in response to detecting the second request to change the one or more parameters of the first custom media processing style, updating the one or more parameters of the first custom media processing style.
85. The method of any of claims 81-83, further comprising:
detecting a third request to change one or more parameters of the first custom media processing style when the set of available media processing styles includes the first custom media processing style; and
in response to detecting the third request to change the one or more parameters of the first custom media processing style:
adding a second custom media processing style to the set of available media processing styles without updating the one or more parameters of the first custom media processing style.
86. The method of claim 85, wherein the first custom media processing style and the second custom media processing style have the same respective text identifier, the method further comprising:
while displaying the user interface including the representation of media:
in accordance with a determination that the first custom media processing style is being applied to the representation of the media, simultaneously displaying the same respective text identifier with an indication of a parameter of the first custom media processing style; and
in accordance with a determination that the second custom media processing style is being applied to the representation of the media, simultaneously displaying the same respective text identifier and an indication of the parameter of the second custom media processing style, wherein the indication of the parameter of the first custom media processing style is different from the indication of the parameter of the second custom media processing style.
87. The method of any of claims 81-86, further comprising:
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style, and in accordance with a determination that at least one current value of one or more parameters of the first media processing style is different from one or more default values of the one or more parameters of the first media processing style:
in accordance with a determination that a difference between the at least one current value of the one or more parameters of the first media processing style and the one or more default values of the one or more parameters of the first media processing style is a first difference, displaying a first text identifier of the first custom media processing style; and
in accordance with a determination that the difference between the at least one current value of the one or more parameters of the first media processing style and the one or more default values of the one or more parameters of the first media processing style is a second difference different from the first difference, displaying a second text identifier of the first custom media processing style, wherein the second text identifier is different from the first text identifier.
88. The method of claim 87, wherein the first media processing style has a third text identifier that is different from the first text identifier and the second text identifier, the method further comprising:
detecting a third request to change one or more parameters of the first custom media processing style when the set of available media processing styles includes the first custom media processing style; and
in response to detecting the third request to change the one or more parameters of the first custom media processing style, and in accordance with a determination that the first custom media processing style will be the same as one or more other available media processing styles after the third request is fulfilled, displaying the third text identifier.
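Claims 86-88 tie the custom style's displayed text identifier to the particular way its parameters differ from their defaults, falling back to the base style's own identifier when the difference vanishes. A sketch of one such naming rule; the descriptor vocabulary and parameter names are invented for illustration and are not the claimed identifiers:

```swift
import Foundation

// Derive a display name from how the parameters differ from their
// defaults. The descriptor vocabulary is an invented illustration.
func customStyleName(tone: Double, warmth: Double,
                     defaultTone: Double = 0, defaultWarmth: Double = 0,
                     baseName: String = "Standard") -> String {
    var words: [String] = []
    if tone > defaultTone { words.append("Bright") }
    if tone < defaultTone { words.append("Rich") }
    if warmth > defaultWarmth { words.append("Warm") }
    if warmth < defaultWarmth { words.append("Cool") }
    // When nothing differs from the defaults, fall back to the base
    // style's own identifier (compare claim 88).
    return words.isEmpty ? baseName : words.joined(separator: " ")
}

print(customStyleName(tone: -0.3, warmth: 0.2))  // "Rich Warm"   (a first difference)
print(customStyleName(tone: 0.5, warmth: -0.1))  // "Bright Cool" (a second difference)
print(customStyleName(tone: 0, warmth: 0))       // "Standard"    (no difference)
```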
89. The method of any of claims 57-88, wherein the plurality of selectable user interface objects for the first media processing style are displayed in response to detecting the request to edit the first media processing style.
90. The method of any of claims 57-89, wherein the user interface including the representation of the media includes a second style mode user interface object that, when selected, causes the representation to be displayed with a second selected media processing style applied, the method further comprising:
detecting a respective input; and
in response to detecting the respective input, and in accordance with a determination that the respective input is directed to the first control for adjusting a current value of the first parameter, changing a first appearance of the second style mode user interface object.
91. The method of claim 90, wherein the first appearance of the second style mode user interface object gradually changes as the current value of the first parameter is modified.
92. The method of any of claims 90-91, wherein changing the first appearance of the second style mode user interface object comprises:
in accordance with a determination that the respective input is in a first direction, updating a first visual aspect of the second style mode user interface object in a first manner; and
in accordance with a determination that the respective input is in a second direction different from the first direction, updating the first visual aspect in a second manner different from the first manner.
93. The method of any of claims 90-92, wherein changing the first appearance of the second style mode user interface object comprises displaying the second style mode user interface object in a visual element, the visual element being an open shape having an opening, wherein:
in accordance with a determination that the respective input has set a second current value of the first parameter to a maximum value of the first parameter, the opening is on a first side of the open shape; and
in accordance with a determination that the respective input has set the second current value of the first parameter to a minimum value of the first parameter that is different from the maximum value of the first parameter, the opening is on a second side of the open shape that is different from the first side.
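Claim 93 recites an open shape whose gap position encodes the current parameter value, with the gap on opposite sides at the parameter's maximum and minimum. A text-only Swift sketch of that mapping (which side counts as "first" and the percentage rendering are assumptions):

```swift
import Foundation

// Map the current parameter value onto the position of the gap in an
// open ring. Sides and the percentage rendering are illustrative.
func openShapeDescription(value: Double, minValue: Double, maxValue: Double) -> String {
    switch value {
    case maxValue: return "open shape, gap on the right side"
    case minValue: return "open shape, gap on the left side"
    default:
        let fraction = (value - minValue) / (maxValue - minValue)
        return String(format: "open shape, gap at %.0f%% of the sweep", fraction * 100)
    }
}

print(openShapeDescription(value: 1.0, minValue: -1.0, maxValue: 1.0))   // gap on the right side
print(openShapeDescription(value: -1.0, minValue: -1.0, maxValue: 1.0))  // gap on the left side
print(openShapeDescription(value: 0.2, minValue: -1.0, maxValue: 1.0))   // gap at 60% of the sweep
```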
94. The method of any of claims 90-93, wherein changing an appearance of the second style mode user interface object comprises changing a display of a third visual aspect of the second style mode user interface object.
95. The method of claim 94, further comprising:
in response to detecting the respective input, and in accordance with a determination that the respective input is directed to the first control for adjusting a current value of the first parameter, changing a second appearance of the second style mode user interface object, wherein changing the second appearance of the second style mode user interface object comprises changing a display of a fourth visual aspect of the second style mode user interface object, and wherein the fourth visual aspect is different from the third visual aspect.
96. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 57-95.
97. A computer system configured to communicate with a display generation component and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 57-95.
98. A computer system configured to communicate with a display generation component and one or more input devices, comprising:
means for performing the method of any of claims 57-95.
99. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 57-95.
100. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
displaying, via the display generating component, a user interface comprising a representation of media, wherein the representation of media is displayed using a first media processing style applied to visual content of the media;
while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generating component, a plurality of selectable user interface objects for the first media processing style, comprising:
a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and
a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter;
while displaying the plurality of selectable user interface objects for the first media processing style, detecting input directed to the plurality of selectable user interface objects for the first media processing style via the one or more input devices; and
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style:
in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generating component, a first control for adjusting a current value of the first parameter; and
in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generating component, a second control for adjusting a current value of the second parameter.
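All four independent claims (100-103) recite the same core interaction: one selectable object per editable parameter, each opening its own adjustment control, with drag input routed only to the parameter of the visible control. A compact state-machine sketch of that routing, using hypothetical tone and warmth parameters in place of the claimed first and second parameters:

```swift
import Foundation

// Each parameter has its own selectable object; selecting one shows the
// control for that parameter, and drag input adjusts only that parameter.
enum StyleParameter { case tone, warmth }

struct StyleEditor {
    var tone = 0.0
    var warmth = 0.0
    var visibleControl: StyleParameter?  // which adjustment control is shown

    mutating func select(_ parameter: StyleParameter) {
        visibleControl = parameter       // input directed at that object
    }

    mutating func drag(by delta: Double) {
        switch visibleControl {
        case .tone:   tone = max(-1, min(1, tone + delta))
        case .warmth: warmth = max(-1, min(1, warmth + delta))
        case nil:     break              // no control is displayed
        }
    }
}

var editor = StyleEditor()
editor.select(.tone)    // first selectable object -> first control
editor.drag(by: 0.3)
editor.select(.warmth)  // second selectable object -> second control
editor.drag(by: -0.2)
print(editor.tone, editor.warmth)  // 0.3 -0.2
```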
101. A computer system configured to communicate with a display generation component and one or more input devices, comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying, via the display generating component, a user interface comprising a representation of media, wherein the representation of media is displayed using a first media processing style applied to visual content of the media;
while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generating component, a plurality of selectable user interface objects for the first media processing style, comprising:
a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and
a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter;
while displaying the plurality of selectable user interface objects for the first media processing style, detecting input directed to the plurality of selectable user interface objects for the first media processing style via the one or more input devices; and
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style:
in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generating component, a first control for adjusting a current value of the first parameter; and
in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generating component, a second control for adjusting a current value of the second parameter.
102. A computer system configured to communicate with a display generation component and one or more input devices, comprising:
means for: displaying, via the display generating component, a user interface comprising a representation of media, wherein the representation of media is displayed using a first media processing style applied to visual content of the media;
means for: while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generating component, a plurality of selectable user interface objects for the first media processing style, comprising:
a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and
a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter;
means for: while displaying the plurality of selectable user interface objects for the first media processing style, detecting input directed to the plurality of selectable user interface objects for the first media processing style via the one or more input devices; and
means for: in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style:
in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generating component, a first control for adjusting a current value of the first parameter; and
in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generating component, a second control for adjusting a current value of the second parameter.
103. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
displaying, via the display generating component, a user interface comprising a representation of media, wherein the representation of media is displayed using a first media processing style applied to visual content of the media;
while displaying the representation of the media using the first media processing style, concurrently displaying, via the display generating component, a plurality of selectable user interface objects for the first media processing style, comprising:
a first selectable user interface object for editing a first parameter of the first media processing style, the first selectable user interface object being displayed with a representation of a current value of the first parameter of the first media processing style; and
a second selectable user interface object for editing a second parameter of the first media processing style, the second selectable user interface object being displayed with a representation of a current value of the second parameter of the first media processing style, wherein the first parameter is different from the second parameter;
while displaying the plurality of selectable user interface objects for the first media processing style, detecting input directed to the plurality of selectable user interface objects for the first media processing style via the one or more input devices; and
in response to detecting the input directed to the plurality of selectable user interface objects for the first media processing style:
in accordance with a determination that the input is directed to the first selectable user interface object for editing the first parameter of the first media processing style, displaying, via the display generating component, a first control for adjusting a current value of the first parameter; and
in accordance with a determination that the input is directed to the second selectable user interface object for editing the second parameter of the first media processing style, displaying, via the display generating component, a second control for adjusting a current value of the second parameter.
CN202280026338.7A 2021-06-01 2022-05-24 User interface for managing media styles Pending CN117178250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311602718.3A CN117539375A (en) 2021-06-01 2022-05-24 User interface for managing media styles

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US202163195679P 2021-06-01 2021-06-01
US63/195,679 2021-06-01
US202163243633P 2021-09-13 2021-09-13
US63/243,633 2021-09-13
US17/721,039 US20220382440A1 (en) 2021-06-01 2022-04-14 User interfaces for managing media styles
US17/721,039 2022-04-14
PCT/US2022/030704 WO2022256200A1 (en) 2021-06-01 2022-05-24 User interfaces for managing media styles

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311602718.3A Division CN117539375A (en) 2021-06-01 2022-05-24 User interface for managing media styles

Publications (1)

Publication Number Publication Date
CN117178250A true CN117178250A (en) 2023-12-05

Family

ID=84193026

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202311602718.3A Pending CN117539375A (en) 2021-06-01 2022-05-24 User interface for managing media styles
CN202280026338.7A Pending CN117178250A (en) 2021-06-01 2022-05-24 User interface for managing media styles

Country Status (5)

Country Link
US (1) US20220382440A1 (en)
EP (1) EP4298500A1 (en)
JP (1) JP2024514783A (en)
KR (1) KR20230164069A (en)
CN (2) CN117539375A (en)

Also Published As

Publication number Publication date
CN117539375A (en) 2024-02-09
JP2024514783A (en) 2024-04-03
KR20230164069A (en) 2023-12-01
US20220382440A1 (en) 2022-12-01
EP4298500A1 (en) 2024-01-03

Legal Events

Date Code Title Description
PB01 Publication