
User interfaces for capturing and managing visual media

Info

Publication number
EP3966676A2
Authority
EP
European Patent Office
Prior art keywords
cameras
displaying
camera
view
media
Prior art date
Legal status
Pending
Application number
EP20728854.9A
Other languages
German (de)
English (en)
French (fr)
Inventor
Behkish J. Manzari
Lee S. Broughton
Alok Deshpande
Alan C. Dye
Craig M. Federighi
Lukas Robert Tom Girling
Martha E. Hankey
Paul Hubel
Nicholas Lupinetti
Jonathan McCormack
Grant Paul
Daniel Trent Preston
William A. Sorrentino III
Andre Souza Dos Santos
Jeffrey A. Brasket
Rasmus R. Jensen
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from US 16/582,595 (US 10,674,072 B1)
Application filed by Apple Inc
Priority claimed from PCT/US2020/031643 (WO 2020/227386 A2)
Publication of EP3966676A2

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
                • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
                • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
                • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
                • G06F 3/0487: Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
                • G06F 3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
                • G06F 3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
                • G06F 9/00: Arrangements for program control, e.g. control units
                • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
                • G06F 9/44: Arrangements for executing specific programs
                • G06F 9/445: Program loading or initiating
                • G06F 9/44521: Dynamic linking or loading; link editing at or after load time, e.g. Java class loading
                • G06F 9/44526: Plug-ins; add-ons
                • G06F 9/451: Execution arrangements for user interfaces
    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04M: TELEPHONIC COMMUNICATION
                • H04M 1/00: Substation equipment, e.g. for use by subscribers
                • H04M 1/72: Mobile telephones; cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
                • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
                • H04M 1/72403: User interfaces with means for local support of applications that increase the functionality
                • H04M 1/7243: User interfaces with interactive means for internal management of messages
                • H04M 1/72439: User interfaces with interactive means for internal management of messages for image or video messaging
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
                • H04N 23/45: Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
                • H04N 23/60: Control of cameras or camera modules
                • H04N 23/62: Control of parameters via user interfaces
                • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
                • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
                • H04N 23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
                • H04N 23/633: Electronic viewfinders for displaying additional information relating to control or operation of the camera
                • H04N 23/635: Region indicators; field of view indicators
                • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
                • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
                • H04N 23/67: Focus control based on electronic image sensor signals
                • H04N 23/675: Focus control comprising setting of focusing regions
                • H04N 23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
                • H04N 23/681: Motion detection
                • H04N 23/682: Vibration or motion blur correction
                • H04N 23/684: Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
                • H04N 23/6845: Vibration or motion blur correction by combination of a plurality of images sequentially taken
                • H04N 23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
                • H04N 23/70: Circuitry for compensating brightness variation in the scene
                • H04N 23/71: Circuitry for evaluating the brightness variation
                • H04N 23/73: Circuitry for compensating brightness variation by influencing the exposure time
                • H04N 23/74: Circuitry for compensating brightness variation by influencing the scene brightness using illuminating means
                • H04N 23/80: Camera processing pipelines; components thereof
                • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
                • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • The present disclosure relates generally to computer user interfaces and, more specifically, to techniques for capturing and managing visual media.
  • Some techniques for capturing and managing media using electronic devices are cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may require multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. The latter consideration is particularly important in battery-operated devices.
  • The present technique provides electronic devices with faster, more efficient methods and interfaces for capturing and managing media. Such methods and interfaces optionally complement or replace other methods for capturing and managing media. Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges.
  • The present technique enables users to edit captured media in a time- and input-efficient manner, thereby reducing the amount of processing the device needs to do.
  • The present technique manages frame rates, thereby conserving storage space and reducing processing requirements.
  • A method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of control affordances; and while a first predefined condition and a second predefined condition are not met, displaying the camera user interface without displaying a first control affordance associated with the first predefined condition and without displaying a second control affordance associated with the second predefined condition; while displaying the camera user interface without displaying the first control affordance and without displaying the second control affordance, detecting a change in conditions; and in response to detecting the change in conditions: in accordance with a determination that the first predefined condition is met, displaying the first control affordance; and in accordance with a determination that the second predefined condition is met, displaying the second control affordance.
  • A non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of control affordances; and while a first predefined condition and a second predefined condition are not met, displaying the camera user interface without displaying a first control affordance associated with the first predefined condition and without displaying a second control affordance associated with the second predefined condition; while displaying the camera user interface without displaying the first control affordance and without displaying the second control affordance, detecting a change in conditions; and in response to detecting the change in conditions: in accordance with a determination that the first predefined condition is met, displaying the first control affordance; and in accordance with a determination that the second predefined condition is met, displaying the second control affordance.
  • A transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of control affordances; and while a first predefined condition and a second predefined condition are not met, displaying the camera user interface without displaying a first control affordance associated with the first predefined condition and without displaying a second control affordance associated with the second predefined condition; while displaying the camera user interface without displaying the first control affordance and without displaying the second control affordance, detecting a change in conditions; and in response to detecting the change in conditions: in accordance with a determination that the first predefined condition is met, displaying the first control affordance; and in accordance with a determination that the second predefined condition is met, displaying the second control affordance.
  • An electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of control affordances; and while a first predefined condition and a second predefined condition are not met, displaying the camera user interface without displaying a first control affordance associated with the first predefined condition and without displaying a second control affordance associated with the second predefined condition; while displaying the camera user interface without displaying the first control affordance and without displaying the second control affordance, detecting a change in conditions; and in response to detecting the change in conditions: in accordance with a determination that the first predefined condition is met, displaying the first control affordance; and in accordance with a determination that the second predefined condition is met, displaying the second control affordance.
  • An electronic device comprises: a display device; one or more cameras; means for displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of control affordances; and means, while a first predefined condition and a second predefined condition are not met, for displaying the camera user interface without displaying a first control affordance associated with the first predefined condition and without displaying a second control affordance associated with the second predefined condition; means, while displaying the camera user interface without displaying the first control affordance and without displaying the second control affordance, for detecting a change in conditions; and in response to detecting the change in conditions: in accordance with a determination that the first predefined condition is met, displaying the first control affordance; and in accordance with a determination that the second predefined condition is met, displaying the second control affordance.
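
The five embodiments above share one behavior: each control affordance stays hidden until its associated predefined condition becomes true. The patent names no concrete conditions or APIs, so the Swift sketch below is purely illustrative; the flash-needed and low-light conditions, and all type and function names, are assumptions standing in for the first and second predefined conditions.

    import Foundation

    // Hypothetical affordances; the patent does not name specific controls.
    enum ControlAffordance: String { case flashControl, lowLightControl }

    // Stand-ins for the first and second predefined conditions.
    struct CameraConditions {
        var flashNeeded: Bool
        var lowLightDetected: Bool
    }

    // Returns the affordances to display: each one appears only once its
    // associated predefined condition is met, per the bullets above.
    func visibleAffordances(for conditions: CameraConditions) -> [ControlAffordance] {
        var shown: [ControlAffordance] = []
        if conditions.flashNeeded { shown.append(.flashControl) }
        if conditions.lowLightDetected { shown.append(.lowLightControl) }
        return shown
    }

    // Initially neither condition is met, so neither affordance is displayed.
    var conditions = CameraConditions(flashNeeded: false, lowLightDetected: false)
    print(visibleAffordances(for: conditions)) // prints an empty list

    // A change in conditions is detected; only the matching affordance appears.
    conditions.lowLightDetected = true
    print(visibleAffordances(for: conditions)) // prints the low-light affordance only
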
  • A method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of camera mode affordances at a first location; while displaying the camera user interface, detecting a first gesture on the camera user interface; and in response to detecting the first gesture, modifying an appearance of the camera control region, including: in accordance with a determination that the first gesture is a gesture of a first type, displaying one or more additional camera mode affordances at the first location; and in accordance with a determination that the first gesture is a gesture of a second type different from the first type, ceasing to display the plurality of camera mode affordances and displaying a plurality of camera setting affordances at the first location.
  • A non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of camera mode affordances at a first location; while displaying the camera user interface, detecting a first gesture on the camera user interface; and in response to detecting the first gesture, modifying an appearance of the camera control region, including: in accordance with a determination that the first gesture is a gesture of a first type, displaying one or more additional camera mode affordances at the first location; and in accordance with a determination that the first gesture is a gesture of a second type different from the first type, ceasing to display the plurality of camera mode affordances and displaying a plurality of camera setting affordances at the first location.
  • A transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of camera mode affordances at a first location; while displaying the camera user interface, detecting a first gesture on the camera user interface; and in response to detecting the first gesture, modifying an appearance of the camera control region, including: in accordance with a determination that the first gesture is a gesture of a first type, displaying one or more additional camera mode affordances at the first location; and in accordance with a determination that the first gesture is a gesture of a second type different from the first type, ceasing to display the plurality of camera mode affordances and displaying a plurality of camera setting affordances at the first location.
  • An electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of camera mode affordances at a first location; while displaying the camera user interface, detecting a first gesture on the camera user interface; and in response to detecting the first gesture, modifying an appearance of the camera control region, including: in accordance with a determination that the first gesture is a gesture of a first type, displaying one or more additional camera mode affordances at the first location; and in accordance with a determination that the first gesture is a gesture of a second type different from the first type, ceasing to display the plurality of camera mode affordances and displaying a plurality of camera setting affordances at the first location.
  • An electronic device comprises: a display device; one or more cameras; means for displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a plurality of camera mode affordances at a first location; means, while displaying the camera user interface, for detecting a first gesture on the camera user interface; and means responsive to detecting the first gesture, for modifying an appearance of the camera control region, including: in accordance with a determination that the first gesture is a gesture of a first type, displaying one or more additional camera mode affordances at the first location; and in accordance with a determination that the first gesture is a gesture of a second type different from the first type, ceasing to display the plurality of camera mode affordances and displaying a plurality of camera setting affordances at the first location.
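
The gesture-handling embodiments above branch on the gesture's type: a first type reveals additional camera mode affordances in place, while a second type replaces the mode affordances with camera setting affordances at the same location. A minimal Swift sketch of that branching follows; the concrete gestures, mode names, and setting names are assumptions, not taken from the patent.

    import Foundation

    // Hypothetical gesture types standing in for the first and second types.
    enum Gesture { case swipeAcrossModes, swipeUpOnViewfinder }

    // A simplified model of the camera control region's contents.
    struct ControlRegion {
        var modeAffordances: [String] = ["Photo", "Video", "Portrait"]
        var settingAffordances: [String] = []
    }

    // Modifies the control region's appearance based on the gesture type,
    // mirroring the two branches described in the bullets above.
    func handle(_ gesture: Gesture, in region: inout ControlRegion) {
        switch gesture {
        case .swipeAcrossModes:
            // First type: display additional mode affordances at the same location.
            region.modeAffordances.append(contentsOf: ["Pano", "Slo-Mo"])
        case .swipeUpOnViewfinder:
            // Second type: cease displaying mode affordances and show settings.
            region.modeAffordances.removeAll()
            region.settingAffordances = ["Flash", "Aspect Ratio", "Timer"]
        }
    }

    var region = ControlRegion()
    handle(.swipeUpOnViewfinder, in: &region)
    print(region.modeAffordances, region.settingAffordances) // [] plus the settings
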
  • A method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: receiving a request to display a camera user interface; in response to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are not satisfied: displaying, via the display device, the camera user interface, the camera user interface including: a first region, the first region including a representation of a first portion of a field-of-view of the one or more cameras; and a second region, the second region including a representation of a second portion of the field-of-view of the one or more cameras, wherein the second portion of the field-of-view of the one or more cameras is visually distinguished from the first portion; while the camera user interface is displayed, detecting an input corresponding to a request to capture media with the one or more cameras; and in response to detecting the input corresponding to the request to capture media with the one or more cameras, capturing, with the one or more cameras, a media item that includes visual content corresponding to the first portion of the field-of-view of the one or more cameras and visual content corresponding to the second portion of the field-of-view of the one or more cameras.
  • A non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: receiving a request to display a camera user interface; in response to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are not satisfied: displaying, via the display device, the camera user interface, the camera user interface including: a first region, the first region including a representation of a first portion of a field-of-view of the one or more cameras; and a second region, the second region including a representation of a second portion of the field-of-view of the one or more cameras, wherein the second portion of the field-of-view of the one or more cameras is visually distinguished from the first portion; while the camera user interface is displayed, detecting an input corresponding to a request to capture media with the one or more cameras; and in response to detecting the input corresponding to the request to capture media with the one or more cameras, capturing, with the one or more cameras, a media item that includes visual content corresponding to the first portion of the field-of-view of the one or more cameras and visual content corresponding to the second portion of the field-of-view of the one or more cameras.
  • A transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: receiving a request to display a camera user interface; in response to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are not satisfied: displaying, via the display device, the camera user interface, the camera user interface including: a first region, the first region including a representation of a first portion of a field-of-view of the one or more cameras; and a second region, the second region including a representation of a second portion of the field-of-view of the one or more cameras, wherein the second portion of the field-of-view of the one or more cameras is visually distinguished from the first portion; while the camera user interface is displayed, detecting an input corresponding to a request to capture media with the one or more cameras; and in response to detecting the input corresponding to the request to capture media with the one or more cameras, capturing, with the one or more cameras, a media item that includes visual content corresponding to the first portion of the field-of-view of the one or more cameras and visual content corresponding to the second portion of the field-of-view of the one or more cameras.
  • An electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a request to display a camera user interface; in response to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are not satisfied: displaying, via the display device, the camera user interface, the camera user interface including: a first region, the first region including a representation of a first portion of a field-of-view of the one or more cameras; and a second region, the second region including a representation of a second portion of the field-of-view of the one or more cameras, wherein the second portion of the field-of-view of the one or more cameras is visually distinguished from the first portion; while the camera user interface is displayed, detecting an input corresponding to a request to capture media with the one or more cameras; and in response to detecting the input corresponding to the request to capture media with the one or more cameras, capturing, with the one or more cameras, a media item that includes visual content corresponding to the first portion of the field-of-view of the one or more cameras and visual content corresponding to the second portion of the field-of-view of the one or more cameras.
  • An electronic device comprises: a display device; one or more cameras; means for receiving a request to display a camera user interface; means, responsive to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are not satisfied, for: displaying, via the display device, the camera user interface, the camera user interface including: a first region, the first region including a representation of a first portion of a field-of-view of the one or more cameras; and a second region, the second region including a representation of a second portion of the field-of-view of the one or more cameras, wherein the second portion of the field-of-view of the one or more cameras is visually distinguished from the first portion; means, while the camera user interface is displayed, for detecting an input corresponding to a request to capture media with the one or more cameras; and means, responsive to detecting the input corresponding to the request to capture media with the one or more cameras, for capturing, with the one or more cameras, a media item that includes visual content corresponding to the first portion of the field-of-view of the one or more cameras and visual content corresponding to the second portion of the field-of-view of the one or more cameras.
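
These embodiments describe over-capture: the viewfinder shows a first region plus a visually distinguished second region, and a single capture request records content from both. The Swift sketch below models that idea with placeholder types; every name and the region descriptions are illustrative assumptions, since the patent specifies no implementation.

    import Foundation

    // Placeholder for a portion of the camera field-of-view.
    struct Region { var name: String; var visuallyDistinguished: Bool }

    // The camera user interface as described above: a primary first region
    // and a visually distinguished second region around it.
    struct CameraUI {
        let first = Region(name: "primary preview", visuallyDistinguished: false)
        let second = Region(name: "over-captured margin", visuallyDistinguished: true)
    }

    struct MediaItem { let portions: [String] }

    // On a capture request, the saved media item includes visual content from
    // both regions, even though only the first appears undimmed in the UI.
    func capture(with ui: CameraUI) -> MediaItem {
        MediaItem(portions: [ui.first.name, ui.second.name])
    }

    print(capture(with: CameraUI()).portions) // both portions are recorded
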
  • A method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while displaying the camera user interface, detecting a request to capture media corresponding to the field-of-view of the one or more cameras; in response to detecting the request to capture media corresponding to the field-of-view of the one or more cameras, capturing media corresponding to the field-of-view of the one or more cameras and displaying a representation of the captured media; while displaying the representation of the captured media, detecting that the representation of the captured media has been displayed for a predetermined period of time; and in response to detecting that the representation of the captured media has been displayed for the predetermined period of time, ceasing to display at least a first portion of the representation of the captured media while maintaining display of the camera user interface.
  • A non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while displaying the camera user interface, detecting a request to capture media corresponding to the field-of-view of the one or more cameras; in response to detecting the request to capture media corresponding to the field-of-view of the one or more cameras, capturing media corresponding to the field-of-view of the one or more cameras and displaying a representation of the captured media; while displaying the representation of the captured media, detecting that the representation of the captured media has been displayed for a predetermined period of time; and in response to detecting that the representation of the captured media has been displayed for the predetermined period of time, ceasing to display at least a first portion of the representation of the captured media while maintaining display of the camera user interface.
  • A transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while displaying the camera user interface, detecting a request to capture media corresponding to the field-of-view of the one or more cameras; in response to detecting the request to capture media corresponding to the field-of-view of the one or more cameras, capturing media corresponding to the field-of-view of the one or more cameras and displaying a representation of the captured media; while displaying the representation of the captured media, detecting that the representation of the captured media has been displayed for a predetermined period of time; and in response to detecting that the representation of the captured media has been displayed for the predetermined period of time, ceasing to display at least a first portion of the representation of the captured media while maintaining display of the camera user interface.
  • An electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while displaying the camera user interface, detecting a request to capture media corresponding to the field-of-view of the one or more cameras; in response to detecting the request to capture media corresponding to the field-of-view of the one or more cameras, capturing media corresponding to the field-of-view of the one or more cameras and displaying a representation of the captured media; while displaying the representation of the captured media, detecting that the representation of the captured media has been displayed for a predetermined period of time; and in response to detecting that the representation of the captured media has been displayed for the predetermined period of time, ceasing to display at least a first portion of the representation of the captured media while maintaining display of the camera user interface.
  • An electronic device comprises: a display device; one or more cameras; means for displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; means, while displaying the camera user interface, for detecting a request to capture media corresponding to the field-of-view of the one or more cameras; means, responsive to detecting the request to capture media corresponding to the field-of-view of the one or more cameras, for capturing media corresponding to the field-of-view of the one or more cameras and displaying a representation of the captured media; means, while displaying the representation of the captured media, for detecting that the representation of the captured media has been displayed for a predetermined period of time; and means, responsive to detecting that the representation of the captured media has been displayed for the predetermined period of time, for ceasing to display at least a first portion of the representation of the captured media while maintaining display of the camera user interface.
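
The common thread above is a timed review: the representation of just-captured media is partially dismissed after a predetermined display period, while the camera user interface itself remains on screen. A small Swift sketch of the timing check follows; the 5-second period and function name are assumed examples, as the patent only says "a predetermined period of time".

    import Foundation

    // Decides whether the captured-media representation should still be fully
    // displayed, given how long it has been on screen. The 5-second default is
    // an assumed example value.
    func shouldDisplayFullRepresentation(elapsed: TimeInterval,
                                         predeterminedPeriod: TimeInterval = 5) -> Bool {
        elapsed < predeterminedPeriod
    }

    // The camera user interface stays displayed either way; only a portion of
    // the captured-media representation is dismissed once the period elapses.
    print(shouldDisplayFullRepresentation(elapsed: 2)) // true: still fully shown
    print(shouldDisplayFullRepresentation(elapsed: 6)) // false: portion dismissed
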
  • A method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while the electronic device is configured to capture media with a first aspect ratio in response to receiving a request to capture media, detecting a first input including a first contact at a respective location on the representation of the field-of-view of the one or more cameras; and in response to detecting the first input: in accordance with a determination that a set of aspect ratio change criteria is met, configuring the electronic device to capture media with a second aspect ratio that is different from the first aspect ratio in response to a request to capture media, wherein the set of aspect ratio change criteria includes a criterion that is met when the first input includes maintaining the first contact at a first location corresponding to a predefined portion of the camera display region that indicates at least a portion of a boundary of the media that will be captured in response to a request to capture media for at least a threshold amount of time, followed by detecting movement of the first contact to a second location different from the first location.
  • A non-transitory computer-readable storage medium is described.
  • The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while the electronic device is configured to capture media with a first aspect ratio in response to receiving a request to capture media, detecting a first input including a first contact at a respective location on the representation of the field-of-view of the one or more cameras; and in response to detecting the first input: in accordance with a determination that a set of aspect ratio change criteria is met, configuring the electronic device to capture media with a second aspect ratio that is different from the first aspect ratio in response to a request to capture media, wherein the set of aspect ratio change criteria includes a criterion that is met when the first input includes maintaining the first contact at a first location corresponding to a predefined portion of the camera display region that indicates at least a portion of a boundary of the media that will be captured in response to a request to capture media for at least a threshold amount of time, followed by detecting movement of the first contact to a second location different from the first location.
  • A transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while the electronic device is configured to capture media with a first aspect ratio in response to receiving a request to capture media, detecting a first input including a first contact at a respective location on the representation of the field-of-view of the one or more cameras; and in response to detecting the first input: in accordance with a determination that a set of aspect ratio change criteria is met, configuring the electronic device to capture media with a second aspect ratio that is different from the first aspect ratio in response to a request to capture media, wherein the set of aspect ratio change criteria includes a criterion that is met when the first input includes maintaining the first contact at a first location corresponding to a predefined portion of the camera display region that indicates at least a portion of a boundary of the media that will be captured in response to a request to capture media for at least a threshold amount of time, followed by detecting movement of the first contact to a second location different from the first location.
  • An electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; while the electronic device is configured to capture media with a first aspect ratio in response to receiving a request to capture media, detecting a first input including a first contact at a respective location on the representation of the field-of-view of the one or more cameras; and in response to detecting the first input: in accordance with a determination that a set of aspect ratio change criteria is met, configuring the electronic device to capture media with a second aspect ratio that is different from the first aspect ratio in response to a request to capture media, wherein the set of aspect ratio change criteria includes a criterion that is met when the first input includes maintaining the first contact at a first location corresponding to a predefined portion of the camera display region that indicates at least a portion of a boundary of the media that will be captured in response to a request to capture media for at least a threshold amount of time, followed by detecting movement of the first contact to a second location different from the first location.
  • An electronic device comprises: a display device; one or more cameras; means for displaying, via the display device, a camera user interface, the camera user interface including a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; means, while the electronic device is configured to capture media with a first aspect ratio in response to receiving a request to capture media, for detecting a first input including a first contact at a respective location on the representation of the field-of-view of the one or more cameras; and means, responsive to detecting the first input, for: in accordance with a determination that a set of aspect ratio change criteria is met, configuring the electronic device to capture media with a second aspect ratio that is different from the first aspect ratio in response to a request to capture media, wherein the set of aspect ratio change criteria includes a criterion that is met when the first input includes maintaining the first contact at a first location corresponding to a predefined portion of the camera display region that indicates at least a portion of a boundary of the media that will be captured in response to a request to capture media for at least a threshold amount of time, followed by detecting movement of the first contact to a second location different from the first location.
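
The aspect-ratio embodiments gate the change on a compound criterion: the contact must rest on the boundary indicator for at least a threshold time and then move to a different location. The Swift sketch below encodes that criterion; the 0.5-second threshold and the 4:3 to 16:9 change are assumed example values, not figures from the patent.

    import Foundation

    // A simplified model of the first input described above.
    struct TouchInput {
        let startedOnBoundaryIndicator: Bool // contact began on the predefined boundary portion
        let holdDuration: TimeInterval       // time the contact was held before moving
        let moved: Bool                      // contact then moved to a second location
    }

    // The aspect-ratio-change criteria: hold on the boundary indicator for at
    // least a threshold time, then move. The threshold value is assumed.
    func meetsAspectRatioChangeCriteria(_ input: TouchInput,
                                        threshold: TimeInterval = 0.5) -> Bool {
        input.startedOnBoundaryIndicator && input.holdDuration >= threshold && input.moved
    }

    var aspectRatio = (4, 3) // current capture aspect ratio (assumed example)
    let input = TouchInput(startedOnBoundaryIndicator: true, holdDuration: 0.8, moved: true)
    if meetsAspectRatioChangeCriteria(input) {
        aspectRatio = (16, 9) // reconfigure capture to a different aspect ratio
    }
    print(aspectRatio) // (16, 9)
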
  • A method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: while the electronic device is in a first orientation, displaying, via the display device, a first camera user interface for capturing media in a first camera orientation at a first zoom level; detecting a change in orientation of the electronic device from the first orientation to a second orientation; and in response to detecting the change in orientation of the electronic device from the first orientation to the second orientation: in accordance with a determination that a set of automatic zoom criteria is satisfied, automatically, without intervening user inputs, displaying a second camera user interface for capturing media in a second camera orientation at a second zoom level that is different from the first zoom level.
  • A non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: while the electronic device is in a first orientation, displaying, via the display device, a first camera user interface for capturing media in a first camera orientation at a first zoom level; detecting a change in orientation of the electronic device from the first orientation to a second orientation; and in response to detecting the change in orientation of the electronic device from the first orientation to the second orientation: in accordance with a determination that a set of automatic zoom criteria is satisfied, automatically, without intervening user inputs, displaying a second camera user interface for capturing media in a second camera orientation at a second zoom level that is different from the first zoom level.
  • A transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: while the electronic device is in a first orientation, displaying, via the display device, a first camera user interface for capturing media in a first camera orientation at a first zoom level; detecting a change in orientation of the electronic device from the first orientation to a second orientation; and in response to detecting the change in orientation of the electronic device from the first orientation to the second orientation: in accordance with a determination that a set of automatic zoom criteria is satisfied, automatically, without intervening user inputs, displaying a second camera user interface for capturing media in a second camera orientation at a second zoom level that is different from the first zoom level.
  • An electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while the electronic device is in a first orientation, displaying, via the display device, a first camera user interface for capturing media in a first camera orientation at a first zoom level; detecting a change in orientation of the electronic device from the first orientation to a second orientation; and in response to detecting the change in orientation of the electronic device from the first orientation to the second orientation: in accordance with a determination that a set of automatic zoom criteria is satisfied, automatically, without intervening user inputs, displaying a second camera user interface for capturing media in a second camera orientation at a second zoom level that is different from the first zoom level.
  • An electronic device comprises: a display device; one or more cameras; means, while the electronic device is in a first orientation, for displaying, via the display device, a first camera user interface for capturing media in a first camera orientation at a first zoom level; means for detecting a change in orientation of the electronic device from the first orientation to a second orientation; and means, responsive to detecting the change in orientation of the electronic device from the first orientation to the second orientation, for: in accordance with a determination that a set of automatic zoom criteria is satisfied, automatically, without intervening user inputs, displaying a second camera user interface for capturing media in a second camera orientation at a second zoom level that is different from the first zoom level.
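
The rotation embodiments display a new camera user interface at a different zoom level automatically, with no intervening user input, when the orientation changes and a set of automatic zoom criteria is satisfied. Below is a Swift sketch of that decision; the doubling-and-halving zoom policy is an assumption, since the patent only requires that the second zoom level differ from the first.

    import Foundation

    enum Orientation { case portrait, landscape }

    // Returns the zoom level for the camera user interface shown after an
    // orientation change. When the automatic zoom criteria are not satisfied,
    // the zoom level is left unchanged.
    func zoomLevel(after newOrientation: Orientation,
                   currentZoom: Double,
                   automaticZoomCriteriaSatisfied: Bool) -> Double {
        guard automaticZoomCriteriaSatisfied else { return currentZoom }
        // Assumed example policy: widen in landscape, tighten in portrait.
        return newOrientation == .landscape ? currentZoom * 0.5 : currentZoom * 2.0
    }

    // Rotating to landscape with the criteria satisfied changes the zoom
    // automatically, without intervening user inputs.
    print(zoomLevel(after: .landscape, currentZoom: 2.0,
                    automaticZoomCriteriaSatisfied: true)) // 1.0
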
  • a method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a media capture user interface that includes displaying a representation of a field-of-view of the one or more cameras; while displaying the media capture user interface, detecting, via the one or more cameras, changes in the field-of- view of the one or more cameras; and in response to detecting the changes in the field-of-view of the one or more cameras and in accordance with a determination that variable frame rate criteria are satisfied: in accordance with a determination that the detected changes in the field-of-view of the one or more cameras satisfy movement criteria, updating the representation of the field-of- view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a first frame rate; and in accordance with a determination that the detected changes in the field-of-view of the one or more cameras do not satisfy the movement criteria, updating the representation of the field
  • a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes displaying a representation of a field-of-view of the one or more cameras; while displaying the media capture user interface, detecting, via the one or more cameras, changes in the field-of-view of the one or more cameras; and in response to detecting the changes in the field-of-view of the one or more cameras and in accordance with a determination that variable frame rate criteria are satisfied: in accordance with a determination that the detected changes in the field-of-view of the one or more cameras satisfy movement criteria, updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a first frame rate; and in
  • a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes displaying a representation of a field-of-view of the one or more cameras; while displaying the media capture user interface, detecting, via the one or more cameras, changes in the field-of-view of the one or more cameras; and in response to detecting the changes in the field-of-view of the one or more cameras and in accordance with a determination that variable frame rate criteria are satisfied: in accordance with a determination that the detected changes in the field-of-view of the one or more cameras satisfy movement criteria, updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a first frame rate; and in accordance with
  • an electronic device is described.
  • the electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes displaying a representation of a field-of-view of the one or more cameras; while displaying the media capture user interface, detecting, via the one or more cameras, changes in the field-of-view of the one or more cameras; and in response to detecting the changes in the field-of-view of the one or more cameras and in accordance with a
  • variable frame rate criteria in accordance with a determination that the detected changes in the field-of-view of the one or more cameras satisfy movement criteria, updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a first frame rate; and in accordance with a determination that the detected changes in the field-of-view of the one or more cameras do not satisfy the movement criteria, updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a second frame rate, wherein the second frame rate is lower than the first frame rate.
• an electronic device comprises: a display device; one or more cameras; means for displaying, via the display device, a media capture user interface that includes displaying a representation of a field-of-view of the one or more cameras; means, while displaying the media capture user interface, for detecting, via the one or more cameras, changes in the field-of-view of the one or more cameras; and means, responsive to detecting the changes in the field-of-view of the one or more cameras and in accordance with a determination that variable frame rate criteria are satisfied, for: in accordance with a determination that the detected changes in the field-of-view of the one or more cameras satisfy movement criteria, updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a first frame rate; and in accordance with a determination that the detected changes in the field-of-view of the one or more cameras do not satisfy the movement criteria, updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a second frame rate, wherein the second frame rate is lower than the first frame rate.
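The five variants above all recite the same behavior: the live preview refreshes quickly while the scene is changing and drops to a lower rate when it is static. As a minimal, framework-free Swift sketch of that decision logic (the type name, threshold, and frame-rate values are illustrative assumptions, not taken from the specification):

```swift
import Foundation

/// Hypothetical stand-ins for the "variable frame rate criteria" and
/// "movement criteria"; the specification does not give concrete values.
struct PreviewFrameRatePolicy {
    let variableRateCriteriaSatisfied: Bool
    let movementThreshold: Double          // per-frame change magnitude, 0...1
    let firstFrameRate: Double = 30        // fps while the scene is moving
    let secondFrameRate: Double = 5        // lower fps while the scene is static

    /// Frame rate at which to update the preview for a detected change of the
    /// given magnitude, or nil when the variable-rate behavior does not apply.
    func frameRate(forChangeMagnitude magnitude: Double) -> Double? {
        guard variableRateCriteriaSatisfied else { return nil }
        // Movement criteria satisfied: first (higher) rate; otherwise the
        // second, lower rate.
        return magnitude >= movementThreshold ? firstFrameRate : secondFrameRate
    }
}

let policy = PreviewFrameRatePolicy(variableRateCriteriaSatisfied: true,
                                    movementThreshold: 0.2)
print(policy.frameRate(forChangeMagnitude: 0.5) ?? 0)   // 30.0
print(policy.frameRate(forChangeMagnitude: 0.05) ?? 0)  // 5.0
```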
  • a method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: receiving a request to display a camera user interface; and in response to receiving the request to display the camera user interface, displaying, via the display device, a camera user interface that includes: displaying, via the display device, a representation of a field-of-view of the one or more cameras; and in accordance with a determination that low-light conditions have been met, wherein the low-light conditions include a condition that is met when ambient light in the field-of-view of the one or more cameras is below a respective threshold, displaying, concurrently with the representation of the field-of-view of the one or more cameras, a control for adjusting a capture duration for capturing media in response to a request to capture media; and in accordance with a determination that the low-light conditions have not been met, forgoing display of the control for adjusting the capture duration.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: receiving a request to display a camera user interface; and in response to receiving the request to display the camera user interface, displaying, via the display device, a camera user interface that includes: displaying, via the display device, a representation of a field-of-view of the one or more cameras; and in accordance with a determination that low-light conditions have been met, wherein the low-light conditions include a condition that is met when ambient light in the field-of-view of the one or more cameras is below a respective threshold, displaying, concurrently with the representation of the field-of-view of the one or more cameras, a control for adjusting a capture duration for capturing media in response to a request to capture media; and in accordance with a determination that the low-light conditions have not been met, forgoing display of the control for adjusting the capture duration.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: receiving a request to display a camera user interface; and in response to receiving the request to display the camera user interface, displaying, via the display device, a camera user interface that includes: displaying, via the display device, a representation of a field-of-view of the one or more cameras; and in accordance with a determination that low-light conditions have been met, wherein the low-light conditions include a condition that is met when ambient light in the field-of-view of the one or more cameras is below a respective threshold, displaying, concurrently with the representation of the field-of-view of the one or more cameras, a control for adjusting a capture duration for capturing media in response to a request to capture media; and in accordance with a determination that the low-light conditions have not been met, forgoing display of the control for adjusting the capture duration.
• an electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a request to display a camera user interface; and in response to receiving the request to display the camera user interface, displaying, via the display device, a camera user interface that includes: displaying, via the display device, a representation of a field-of-view of the one or more cameras; and in accordance with a determination that low-light conditions have been met, wherein the low-light conditions include a condition that is met when ambient light in the field-of-view of the one or more cameras is below a respective threshold, displaying, concurrently with the representation of the field-of-view of the one or more cameras, a control for adjusting a capture duration for capturing media in response to a request to capture media; and in accordance with a determination that the low-light conditions have not been met, forgoing display of the control for adjusting the capture duration.
• an electronic device comprises: a display device; one or more cameras; means for receiving a request to display a camera user interface; and means, responsive to receiving the request to display the camera user interface, for displaying, via the display device, a camera user interface that includes: displaying, via the display device, a representation of a field-of-view of the one or more cameras; and in accordance with a determination that low-light conditions have been met, wherein the low-light conditions include a condition that is met when ambient light in the field-of-view of the one or more cameras is below a respective threshold, displaying, concurrently with the representation of the field-of-view of the one or more cameras, a control for adjusting a capture duration for capturing media in response to a request to capture media; and in accordance with a determination that the low-light conditions have not been met, forgoing display of the control for adjusting the capture duration.
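In implementation terms, the conditional display reduces to a single ambient-light test. A minimal sketch, assuming a hypothetical lux threshold (the text above only requires "a respective threshold"):

```swift
import Foundation

/// Hypothetical gate deciding whether the capture-duration control appears
/// alongside the camera preview; the 10 lux threshold is an assumption.
struct LowLightUIState {
    static let luxThreshold: Double = 10

    static func showsCaptureDurationControl(ambientLux: Double) -> Bool {
        // Low-light condition: ambient light below the respective threshold.
        ambientLux < luxThreshold
    }
}

print(LowLightUIState.showsCaptureDurationControl(ambientLux: 3))   // true
print(LowLightUIState.showsCaptureDurationControl(ambientLux: 250)) // false
```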
• a method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface; while displaying the camera user interface, detecting, via one or more sensors of the electronic device, an amount of light in a field-of-view of the one or more cameras; and in response to detecting the amount of light in the field-of-view of the one or more cameras: in accordance with a determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria, wherein the low-light environment criteria include a criterion that is satisfied when the amount of light in the field-of-view of the one or more cameras is below a predetermined threshold, concurrently displaying, in the camera user interface: a flash status indicator that indicates a status of a flash operation; and a low-light capture status indicator that indicates a status of a low-light capture mode; and in accordance with a determination that the amount of light in the field-of-view of the one or more cameras does not satisfy the low-light environment criteria, forgoing display of the low-light capture status indicator in the camera user interface.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface; while displaying the camera user interface, detecting, via one or more sensors of the electronic device, an amount of light in a field-of-view of the one or more cameras; and in response to detecting the amount of light in the field-of-view of the one or more cameras: in accordance with a determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria, wherein the low-light environment criteria include a criterion that is satisfied when the amount of light in the field-of-view of the one or more cameras is below a predetermined threshold, concurrently displaying, in the camera user interface: a flash status indicator that indicates a status of a flash operation; and a low-light capture status indicator that indicates a status of a low-light capture mode; and in accordance with a determination that the amount of light in the field-of-view of the one or more cameras does not satisfy the low-light environment criteria, forgoing display of the low-light capture status indicator in the camera user interface.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface; while displaying the camera user interface, detecting, via one or more sensors of the electronic device, an amount of light in a field-of-view of the one or more cameras; and in response to detecting the amount of light in the field-of-view of the one or more cameras: in accordance with a determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria, wherein the low-light environment criteria include a criterion that is satisfied when the amount of light in the field-of-view of the one or more cameras is below a predetermined threshold, concurrently displaying, in the camera user interface: a flash status indicator that indicates a status of a flash operation; and a low-light capture status indicator that indicates a status of a low-light capture mode; and in accordance with a determination that the amount of light in the field-of-view of the one or more cameras does not satisfy the low-light environment criteria, forgoing display of the low-light capture status indicator in the camera user interface.
• an electronic device comprises: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface; while displaying the camera user interface, detecting, via one or more sensors of the electronic device, an amount of light in a field-of-view of the one or more cameras; and in response to detecting the amount of light in the field-of-view of the one or more cameras: in accordance with a determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria, wherein the low-light environment criteria include a criterion that is satisfied when the amount of light in the field-of-view of the one or more cameras is below a predetermined threshold, concurrently displaying, in the camera user interface: a flash status indicator that indicates a status of a flash operation; and a low-light capture status indicator that indicates a status of a low-light capture mode; and in accordance with a determination that the amount of light in the field-of-view of the one or more cameras does not satisfy the low-light environment criteria, forgoing display of the low-light capture status indicator in the camera user interface.
• an electronic device comprises: a display device; one or more cameras; means for displaying, via the display device, a camera user interface; means, while displaying the camera user interface, for detecting, via one or more sensors of the electronic device, an amount of light in a field-of-view of the one or more cameras; and means, responsive to detecting the amount of light in the field-of-view of the one or more cameras, for: in accordance with a determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria, wherein the low-light environment criteria include a criterion that is satisfied when the amount of light in the field-of-view of the one or more cameras is below a predetermined threshold, concurrently displaying, in the camera user interface: a flash status indicator that indicates a status of a flash operation; and a low-light capture status indicator that indicates a status of a low-light capture mode; and in accordance with a determination that the amount of light in the field-of-view of the one or more cameras does not satisfy the low-light environment criteria, forgoing display of the low-light capture status indicator in the camera user interface.
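The indicator logic is a small branch on the measured scene light: below the threshold, both indicators appear together; above it, the low-light indicator is withheld. A sketch under assumed names and an assumed threshold:

```swift
import Foundation

enum FlashStatus { case on, off, automatic }
enum LowLightCaptureStatus { case active }

/// Which status indicators the camera UI surfaces for a given scene light.
/// All names and the threshold are hypothetical.
struct IndicatorState {
    let flash: FlashStatus
    let lowLightCapture: LowLightCaptureStatus?   // nil when not shown

    static func forSceneLight(lux: Double, flash: FlashStatus,
                              threshold: Double = 10) -> IndicatorState {
        if lux < threshold {
            // Low-light environment criteria satisfied: show both
            // indicators concurrently.
            return IndicatorState(flash: flash, lowLightCapture: .active)
        }
        // Otherwise the low-light capture status indicator is withheld.
        return IndicatorState(flash: flash, lowLightCapture: nil)
    }
}

let dim = IndicatorState.forSceneLight(lux: 4, flash: .automatic)
print(dim.lowLightCapture != nil)  // true: both indicators shown
```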
• a method is described. The method is performed at an electronic device having a display device. The method comprises: displaying, on the display device, a media editing user interface including: a representation of a visual media; a first affordance corresponding to a first editable parameter to edit the representation of the visual media; and a second affordance corresponding to a second editable parameter to edit the representation of the visual media; while displaying the media editing user interface, detecting a first user input corresponding to selection of the first affordance; in response to detecting the first user input corresponding to selection of the first affordance, displaying, on the display device, at a respective location in the media editing user interface, an adjustable control for adjusting the first editable parameter; while displaying the adjustable control for adjusting the first editable parameter and while the first editable parameter is selected, detecting a first gesture directed to the adjustable control for adjusting the first editable parameter; in response to detecting the first gesture directed to the adjustable control for adjusting the first editable parameter while the first editable parameter is selected, adjusting a current value of the first editable parameter in accordance with the first gesture.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device, the one or more programs including instructions for: displaying, on the display device, a media editing user interface including: a representation of a visual media; a first affordance corresponding to a first editable parameter to edit the representation of the visual media; and a second affordance corresponding to a second editable parameter to edit the representation of the visual media; while displaying the media editing user interface, detecting a first user input corresponding to selection of the first affordance; in response to detecting the first user input corresponding to selection of the first affordance, displaying, on the display device, at a respective location in the media editing user interface, an adjustable control for adjusting the first editable parameter; while displaying the adjustable control for adjusting the first editable parameter and while the first editable parameter is selected, detecting a first gesture directed to the adjustable control for adjusting the first editable parameter; in response to detecting the first gesture directed to the adjustable control for adjusting the first editable parameter while the first editable parameter is selected, adjusting a current value of the first editable parameter in accordance with the first gesture.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device, the one or more programs including instructions for: displaying, on the display device, a media editing user interface including: a representation of a visual media; a first affordance corresponding to a first editable parameter to edit the representation of the visual media; and a second affordance corresponding to a second editable parameter to edit the representation of the visual media; while displaying the media editing user interface, detecting a first user input corresponding to selection of the first affordance; in response to detecting the first user input corresponding to selection of the first affordance, displaying, on the display device, at a respective location in the media editing user interface, an adjustable control for adjusting the first editable parameter; while displaying the adjustable control for adjusting the first editable parameter and while the first editable parameter is selected, detecting a first gesture directed to the adjustable control for adjusting the first editable parameter; in response to detecting the first gesture directed to the adjustable control for adjusting the first editable parameter while the first editable parameter is selected, adjusting a current value of the first editable parameter in accordance with the first gesture.
• an electronic device comprises: a display device; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, on the display device, a media editing user interface including: a representation of a visual media; a first affordance corresponding to a first editable parameter to edit the representation of the visual media; and a second affordance corresponding to a second editable parameter to edit the representation of the visual media; while displaying the media editing user interface, detecting a first user input corresponding to selection of the first affordance; in response to detecting the first user input corresponding to selection of the first affordance, displaying, on the display device, at a respective location in the media editing user interface, an adjustable control for adjusting the first editable parameter; while displaying the adjustable control for adjusting the first editable parameter and while the first editable parameter is selected, detecting a first gesture directed to the adjustable control for adjusting the first editable parameter; in response to detecting the first gesture directed to the adjustable control for adjusting the first editable parameter while the first editable parameter is selected, adjusting a current value of the first editable parameter in accordance with the first gesture.
• an electronic device comprises: a display device; means for displaying, on the display device, a media editing user interface including: a representation of a visual media; a first affordance corresponding to a first editable parameter to edit the representation of the visual media; and a second affordance corresponding to a second editable parameter to edit the representation of the visual media; means, while displaying the media editing user interface, for detecting a first user input corresponding to selection of the first affordance; means, responsive to detecting the first user input corresponding to selection of the first affordance, for displaying, on the display device, at a respective location in the media editing user interface, an adjustable control for adjusting the first editable parameter; means, while displaying the adjustable control for adjusting the first editable parameter and while the first editable parameter is selected, for detecting a first gesture directed to the adjustable control for adjusting the first editable parameter; means, responsive to detecting the first gesture directed to the adjustable control for adjusting the first editable parameter while the first editable parameter is selected, for adjusting a current value of the first editable parameter in accordance with the first gesture.
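The editing model here is one shared adjustable control that re-targets whichever parameter was last selected. A compact sketch with hypothetical parameter names:

```swift
import Foundation

/// One adjustable control shared by several editable parameters: selecting an
/// affordance re-targets the control, and a drag gesture adjusts the current
/// value of whichever parameter is selected. Parameter names are examples.
final class MediaEditor {
    enum Parameter { case brightness, contrast }

    private(set) var values: [Parameter: Double] = [:]
    private(set) var selected: Parameter = .brightness

    /// First user input: selection of an affordance.
    func select(_ parameter: Parameter) { selected = parameter }

    /// First gesture directed to the adjustable control: adjust the selected
    /// parameter in accordance with the gesture.
    func handleDrag(delta: Double) { values[selected, default: 0] += delta }
}

let editor = MediaEditor()
editor.select(.contrast)
editor.handleDrag(delta: 0.3)
print(editor.values[.contrast] ?? 0)  // 0.3; brightness is untouched
```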
• a method is described. The method is performed at an electronic device having a display device. The method comprises: displaying, on the display device, a first user interface that includes concurrently displaying: a first representation of a first visual media; and an adjustable control that includes an indication of a current amount of adjustment for a perspective distortion of the first visual media; while displaying, on the display device, the first user interface, detecting user input that includes a gesture directed to the adjustable control; and in response to detecting the user input that includes the gesture directed to the adjustable control: displaying, on the display device, a second representation of the first visual media with a respective amount of adjustment for the perspective distortion selected based on a magnitude of the gesture.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device, the one or more programs including instructions for: displaying, on the display device, a first user interface that includes concurrently displaying: a first representation of a first visual media; and an adjustable control that includes an indication of a current amount of adjustment for a perspective distortion of the first visual media; while displaying, on the display device, the first user interface, detecting user input that includes a gesture directed to the adjustable control; and in response to detecting the user input that includes the gesture directed to the adjustable control: displaying, on the display device, a second representation of the first visual media with a respective amount of adjustment for the perspective distortion selected based on a magnitude of the gesture.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device, the one or more programs including instructions for: displaying, on the display device, a first user interface that includes concurrently displaying: a first representation of a first visual media; and an adjustable control that includes an indication of a current amount of adjustment for a perspective distortion of the first visual media; while displaying, on the display device, the first user interface, detecting user input that includes a gesture directed to the adjustable control; and in response to detecting the user input that includes the gesture directed to the adjustable control: displaying, on the display device, a second representation of the first visual media with a respective amount of adjustment for the perspective distortion selected based on a magnitude of the gesture.
• an electronic device comprises: a display device; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, on the display device, a first user interface that includes concurrently displaying: a first representation of a first visual media; and an adjustable control that includes an indication of a current amount of adjustment for a perspective distortion of the first visual media; while displaying, on the display device, the first user interface, detecting user input that includes a gesture directed to the adjustable control; and in response to detecting the user input that includes the gesture directed to the adjustable control: displaying, on the display device, a second representation of the first visual media with a respective amount of adjustment for the perspective distortion selected based on a magnitude of the gesture.
• an electronic device comprises: a display device; means for displaying, on the display device, a first user interface that includes concurrently displaying: a first representation of a first visual media; and an adjustable control that includes an indication of a current amount of adjustment for a perspective distortion of the first visual media; means, while displaying, on the display device, the first user interface, for detecting user input that includes a gesture directed to the adjustable control; and means, responsive to detecting the user input that includes the gesture directed to the adjustable control, for: displaying, on the display device, a second representation of the first visual media with a respective amount of adjustment for the perspective distortion selected based on a magnitude of the gesture.
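Here the gesture's magnitude, not a fixed step, selects the correction amount. A sketch of the mapping, with the sensitivity and clamping range as assumptions:

```swift
import Foundation

/// Maps a drag gesture's translation onto a perspective-correction amount in
/// [-1, 1]; a new representation of the media is rendered from `adjustment`.
struct PerspectiveControl {
    private(set) var adjustment: Double = 0

    mutating func apply(gestureTranslation: Double, sensitivity: Double = 0.01) {
        // Larger gestures select proportionally larger adjustments.
        adjustment = max(-1, min(1, adjustment + gestureTranslation * sensitivity))
    }
}

var control = PerspectiveControl()
control.apply(gestureTranslation: 40)  // drag of 40 points
print(control.adjustment)              // approximately 0.4
```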
• a method is described. The method is performed at an electronic device having a display device. The method comprises: displaying, via the display device, a media capture user interface that includes: displaying a representation of a field-of-view of one or more cameras; and while a low-light camera mode is active, displaying a control for adjusting a capture duration for capturing media, where displaying the control includes: in accordance with a determination that a set of first capture duration criteria is satisfied: displaying an indication that the control is set to a first capture duration; and configuring the electronic device to capture a first plurality of images over the first capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras; and in accordance with a determination that a set of second capture duration criteria is satisfied, wherein the set of second capture duration criteria is different from the set of first capture duration criteria: displaying an indication that the control is set to a second capture duration that is greater than the first capture duration; and configuring the electronic device to capture a second plurality of images over the second capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras.
• a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display device, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes: displaying a representation of a field-of-view of one or more cameras; and while a low-light camera mode is active, displaying a control for adjusting a capture duration for capturing media, where displaying the control includes: in accordance with a determination that a set of first capture duration criteria is satisfied: displaying an indication that the control is set to a first capture duration; and configuring the electronic device to capture a first plurality of images over the first capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras; and in accordance with a determination that a set of second capture duration criteria is satisfied, wherein the set of second capture duration criteria is different from the set of first capture duration criteria: displaying an indication that the control is set to a second capture duration that is greater than the first capture duration; and configuring the electronic device to capture a second plurality of images over the second capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras.
• a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display device, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes: displaying a representation of a field-of-view of one or more cameras; and while a low-light camera mode is active, displaying a control for adjusting a capture duration for capturing media, where displaying the control includes: in accordance with a determination that a set of first capture duration criteria is satisfied: displaying an indication that the control is set to a first capture duration; and configuring the electronic device to capture a first plurality of images over the first capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras; and in accordance with a determination that a set of second capture duration criteria is satisfied, wherein the set of second capture duration criteria is different from the set of first capture duration criteria: displaying an indication that the control is set to a second capture duration that is greater than the first capture duration; and configuring the electronic device to capture a second plurality of images over the second capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras.
• an electronic device includes: a display device; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes: displaying a representation of a field-of-view of one or more cameras; and while a low-light camera mode is active, displaying a control for adjusting a capture duration for capturing media, where displaying the control includes: in accordance with a determination that a set of first capture duration criteria is satisfied: displaying an indication that the control is set to a first capture duration; and configuring the electronic device to capture a first plurality of images over the first capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras; and in accordance with a determination that a set of second capture duration criteria is satisfied, wherein the set of second capture duration criteria is different from the set of first capture duration criteria: displaying an indication that the control is set to a second capture duration that is greater than the first capture duration; and configuring the electronic device to capture a second plurality of images over the second capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras.
• an electronic device includes: a display device; means for displaying, via the display device, a media capture user interface that includes: displaying a representation of a field-of-view of one or more cameras; and means, while a low-light camera mode is active, for displaying a control for adjusting a capture duration for capturing media, where displaying the control includes: in accordance with a determination that a set of first capture duration criteria is satisfied: displaying an indication that the control is set to a first capture duration; and configuring the electronic device to capture a first plurality of images over the first capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras; and in accordance with a determination that a set of second capture duration criteria is satisfied, wherein the set of second capture duration criteria is different from the set of first capture duration criteria: displaying an indication that the control is set to a second capture duration that is greater than the first capture duration; and configuring the electronic device to capture a second plurality of images over the second capture duration responsive to a single request to capture an image corresponding to a field-of-view of the one or more cameras.
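The two criteria sets map to two capture plans: a longer indicated duration also means more frames captured for a single shutter press. A sketch with invented criteria and numbers:

```swift
import Foundation

/// Hypothetical mapping from conditions to a capture duration (what the
/// adjustable control indicates) and a frame count for one shutter press.
struct LowLightCapturePlan {
    let duration: TimeInterval
    let frameCount: Int

    static func plan(ambientLux: Double, deviceIsSteady: Bool) -> LowLightCapturePlan {
        if ambientLux < 1 && deviceIsSteady {
            // Second capture duration criteria: very dark and stabilized, so
            // indicate a longer duration and capture more images.
            return LowLightCapturePlan(duration: 10, frameCount: 30)
        }
        // First capture duration criteria: default low-light plan.
        return LowLightCapturePlan(duration: 1, frameCount: 5)
    }
}

let plan = LowLightCapturePlan.plan(ambientLux: 0.5, deviceIsSteady: true)
print(plan.duration, plan.frameCount)  // 10.0 30
```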
  • a method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a media capture user interface that includes a representation of a field-of-view of the one or more cameras; while displaying, via the display device, the media capture user interface, receiving a request to capture media; in response to receiving the request to capture media, initiating capture, via the one or more cameras, of media; and at a first time after initiating capture, via the one or more cameras, of media: in accordance with a determination that a set of guidance criteria is satisfied, wherein the set of guidance criteria includes a criterion that is met when a low-light mode is active, displaying, via the display device, a visual indication of a difference between a pose of the electronic device when capture of the media was initiated and a pose of the electronic device at the first time after initiating capture of media.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes a representation of a field-of-view of the one or more cameras; while displaying, via the display device, the media capture user interface, receiving a request to capture media; in response to receiving the request to capture media, initiating capture, via the one or more cameras, of media; and at a first time after initiating capture, via the one or more cameras, of media: in accordance with a determination that a set of guidance criteria is satisfied, wherein the set of guidance criteria includes a criterion that is met when a low-light mode is active, displaying, via the display device, a visual indication of a difference between a pose of the electronic device when capture of the media was initiated and a pose of the electronic device at the first time after initiating capture of media.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes a representation of a field-of-view of the one or more cameras; while displaying, via the display device, the media capture user interface, receiving a request to capture media; in response to receiving the request to capture media, initiating capture, via the one or more cameras, of media; and at a first time after initiating capture, via the one or more cameras, of media: in accordance with a determination that a set of guidance criteria is satisfied, wherein the set of guidance criteria includes a criterion that is met when a low-light mode is active, displaying, via the display device, a visual indication of a difference between a pose of the electronic device when capture of the media was initiated and a pose of the electronic device at the first time after initiating capture of media.
• an electronic device includes: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a media capture user interface that includes a representation of a field-of-view of the one or more cameras; while displaying, via the display device, the media capture user interface, receiving a request to capture media; in response to receiving the request to capture media, initiating capture, via the one or more cameras, of media; and at a first time after initiating capture, via the one or more cameras, of media: in accordance with a determination that a set of guidance criteria is satisfied, wherein the set of guidance criteria includes a criterion that is met when a low-light mode is active, displaying, via the display device, a visual indication of a difference between a pose of the electronic device when capture of the media was initiated and a pose of the electronic device at the first time after initiating capture of media.
• an electronic device includes: a display device; one or more cameras; means for displaying, via the display device, a media capture user interface that includes a representation of a field-of-view of the one or more cameras; means, while displaying, via the display device, the media capture user interface, for receiving a request to capture media; means, responsive to receiving the request to capture media, for initiating capture, via the one or more cameras, of media; and at a first time after initiating capture, via the one or more cameras, of media: means, in accordance with a determination that a set of guidance criteria is satisfied, wherein the set of guidance criteria includes a criterion that is met when a low-light mode is active, for displaying, via the display device, a visual indication of a difference between a pose of the electronic device when capture of the media was initiated and a pose of the electronic device at the first time after initiating capture of media.
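The guidance visual compares two poses of the device: the one recorded when capture began and the current one. A framework-free sketch (a real implementation would presumably source these angles from the device's motion sensors; all names here are assumptions):

```swift
import Foundation

struct Pose { var roll: Double; var pitch: Double; var yaw: Double }

/// Computes the offset to visualize so the user can hold the device at its
/// initial pose during a long low-light exposure.
struct CaptureGuidance {
    let initialPose: Pose          // pose when capture was initiated
    let lowLightModeActive: Bool   // the guidance criterion named above

    func indicatorOffset(currentPose: Pose) -> Pose? {
        guard lowLightModeActive else { return nil }  // criteria not satisfied
        return Pose(roll: currentPose.roll - initialPose.roll,
                    pitch: currentPose.pitch - initialPose.pitch,
                    yaw: currentPose.yaw - initialPose.yaw)
    }
}

let guidance = CaptureGuidance(initialPose: Pose(roll: 0, pitch: 0, yaw: 0),
                               lowLightModeActive: true)
print(guidance.indicatorOffset(currentPose: Pose(roll: 0.02, pitch: -0.01, yaw: 0))!)
```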
• a method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface, the camera user interface including: a first region, the first region including a first representation of a first portion of a field-of-view of the one or more cameras; and a second region that is outside of the first region and is visually distinguished from the first region, including: in accordance with a determination that a set of first respective criteria is satisfied, wherein the set of first respective criteria includes a criterion that is satisfied when a first respective object in the field-of-view of the one or more cameras is a first distance from the one or more cameras, displaying, in the second region, a second portion of the field-of-view of the one or more cameras with a first visual appearance; and in accordance with a determination that a set of second respective criteria is satisfied, wherein the set of second respective criteria includes a criterion that is satisfied when the first respective object in the field-of-view of the one or more cameras is a second distance, different from the first distance, from the one or more cameras, displaying, in the second region, the second portion of the field-of-view of the one or more cameras with a second visual appearance that is different from the first visual appearance.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a first region, the first region including a first representation of a first portion of a field-of-view of the one or more cameras; and a second region that is outside of the first region and is visually distinguished from the first region, including: in accordance with a determination that a set of first respective criteria is satisfied, wherein the set of first respective criteria includes a criterion that is satisfied when a first respective object in the field-of-view of the one or more cameras is a first distance from the one or more cameras, displaying, in the second region, a second portion of the field-of-view of the one or more cameras with a first visual appearance; and in accordance with a determination that a set of second respective criteria is satisfied, wherein the set of second respective criteria includes a criterion that is satisfied when the first respective object in the field-of-view of the one or more cameras is a second distance, different from the first distance, from the one or more cameras, displaying, in the second region, the second portion of the field-of-view of the one or more cameras with a second visual appearance that is different from the first visual appearance.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a first region, the first region including a first representation of a first portion of a field-of-view of the one or more cameras; and a second region that is outside of the first region and is visually distinguished from the first region, including: in accordance with a determination that a set of first respective criteria is satisfied, wherein the set of first respective criteria includes a criterion that is satisfied when a first respective object in the field-of-view of the one or more cameras is a first distance from the one or more cameras, displaying, in the second region, a second portion of the field-of-view of the one or more cameras with a first visual appearance; and in accordance with a determination that a set of second respective criteria is satisfied, wherein the set of second respective criteria includes a criterion that is satisfied when the first respective object in the field-of-view of the one or more cameras is a second distance, different from the first distance, from the one or more cameras, displaying, in the second region, the second portion of the field-of-view of the one or more cameras with a second visual appearance that is different from the first visual appearance.
• an electronic device includes: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a first region, the first region including a first representation of a first portion of a field-of-view of the one or more cameras; and a second region that is outside of the first region and is visually distinguished from the first region, including: in accordance with a determination that a set of first respective criteria is satisfied, wherein the set of first respective criteria includes a criterion that is satisfied when a first respective object in the field-of-view of the one or more cameras is a first distance from the one or more cameras, displaying, in the second region, a second portion of the field-of-view of the one or more cameras with a first visual appearance; and in accordance with a determination that a set of second respective criteria is satisfied, wherein the set of second respective criteria includes a criterion that is satisfied when the first respective object in the field-of-view of the one or more cameras is a second distance, different from the first distance, from the one or more cameras, displaying, in the second region, the second portion of the field-of-view of the one or more cameras with a second visual appearance that is different from the first visual appearance.
• an electronic device includes: a display device; one or more cameras; and means for displaying, via the display device, a camera user interface, the camera user interface including: a first region, the first region including a first representation of a first portion of a field-of-view of the one or more cameras; and a second region that is outside of the first region and is visually distinguished from the first region, including: in accordance with a determination that a set of first respective criteria is satisfied, where the set of first respective criteria includes a criterion that is satisfied when a first respective object in the field-of-view of the one or more cameras is a first distance from the one or more cameras, displaying, in the second region, a second portion of the field-of-view of the one or more cameras with a first visual appearance; and in accordance with a determination that a set of second respective criteria is satisfied, where the set of second respective criteria includes a criterion that is satisfied when the first respective object in the field-of-view of the one or more cameras is a second distance, different from the first distance, from the one or more cameras, displaying, in the second region, the second portion of the field-of-view of the one or more cameras with a second visual appearance that is different from the first visual appearance.
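The second region's appearance is keyed off subject distance. A sketch where the distance bands and visual treatments are purely illustrative:

```swift
import Foundation

enum RegionAppearance { case dimmed(opacity: Double), hidden }

/// Chooses the visual treatment of the region outside the primary framing
/// from the subject's distance; bands and opacities are assumptions.
func secondRegionAppearance(subjectDistanceMeters: Double) -> RegionAppearance {
    if subjectDistanceMeters < 0.5 {
        // First respective criteria: subject close to the cameras.
        return .dimmed(opacity: 0.25)
    }
    // Second respective criteria: subject farther away, so the region gets a
    // different appearance (here, fully suppressed).
    return .hidden
}

print(secondRegionAppearance(subjectDistanceMeters: 0.3))  // dimmed(opacity: 0.25)
```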
• a method is described. The method is performed at an electronic device having a display device, a first camera that has a field-of-view, and a second camera that has a wider field-of-view than the field-of-view of the first camera.
• the method comprises: displaying, via the display device, a camera user interface that includes a representation of at least a portion of a field-of-view of one or more cameras displayed at a first zoom level, the camera user interface including: a first region, the first region including a representation of a first portion of the field-of-view of the first camera at the first zoom level; and a second region, the second region including a representation of a first portion of the field-of-view of the second camera at the first zoom level.
• the method also comprises: while displaying, via the display device, the camera user interface that includes the representation of at least a portion of a field-of-view of the one or more cameras displayed at the first zoom level, receiving a first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to a second zoom level; and in response to receiving the first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to the second zoom level: displaying, in the first region, at the second zoom level, a representation of a second portion of the field-of-view of the first camera that excludes at least a subset of the first portion of the field-of-view of the first camera; and displaying, in the second region, at the second zoom level, a representation of a second portion of the field-of-view of the second camera that overlaps with the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera without displaying, in the second region, a representation of the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device, a first camera that has a field-of-view, and a second camera that has a wider field-of-view than the field-of-view of the first camera, the one or more programs including instructions for: displaying, via the display device, a camera user interface that includes a representation of at least a portion of a field-of-view of one or more cameras displayed at a first zoom level, the camera user interface including: a first region, the first region including a representation of a first portion of the field-of-view of the first camera at the first zoom level; and a second region, the second region including a representation of a first portion of the field-of-view of the second camera at the first zoom level.
• the one or more programs also include instructions for: while displaying, via the display device, the camera user interface that includes the representation of at least a portion of a field-of-view of the one or more cameras displayed at the first zoom level, receiving a first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to a second zoom level; and in response to receiving the first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to the second zoom level: displaying, in the first region, at the second zoom level, a representation of a second portion of the field-of-view of the first camera that excludes at least a subset of the first portion of the field-of-view of the first camera; and displaying, in the second region, at the second zoom level, a representation of a second portion of the field-of-view of the second camera that overlaps with the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera without displaying, in the second region, a representation of the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device, a first camera that has a field-of-view, and a second camera that has a wider field-of-view than the field-of-view of the first camera, the one or more programs including instructions for: displaying, via the display device, a camera user interface that includes a representation of at least a portion of a field-of-view of one or more cameras displayed at a first zoom level, the camera user interface including: a first region, the first region including a representation of a first portion of the field-of-view of the first camera at the first zoom level; and a second region, the second region including a representation of a first portion of the field-of-view of the second camera at the first zoom level.
• the one or more programs also include instructions for: while displaying, via the display device, the camera user interface that includes the representation of at least a portion of a field-of-view of the one or more cameras displayed at the first zoom level, receiving a first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to a second zoom level; and in response to receiving the first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to the second zoom level: displaying, in the first region, at the second zoom level, a representation of a second portion of the field-of-view of the first camera that excludes at least a subset of the first portion of the field-of-view of the first camera; and displaying, in the second region, at the second zoom level, a representation of a second portion of the field-of-view of the second camera that overlaps with the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera without displaying, in the second region, a representation of the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera.
• an electronic device includes: a display device; a first camera that has a field-of-view; a second camera that has a wider field-of-view than the field-of-view of the first camera; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface that includes a representation of at least a portion of a field-of-view of one or more cameras displayed at a first zoom level, the camera user interface including: a first region, the first region including a representation of a first portion of the field-of-view of the first camera at the first zoom level; and a second region, the second region including a representation of a first portion of the field-of-view of the second camera at the first zoom level.
• the one or more programs also include instructions for: while displaying, via the display device, the camera user interface that includes the representation of at least a portion of a field-of-view of the one or more cameras displayed at the first zoom level, receiving a first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to a second zoom level; and in response to receiving the first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to the second zoom level: displaying, in the first region, at the second zoom level, a representation of a second portion of the field-of-view of the first camera that excludes at least a subset of the first portion of the field-of-view of the first camera; and displaying, in the second region, at the second zoom level, a representation of a second portion of the field-of-view of the second camera that overlaps with the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera without displaying, in the second region, a representation of the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera.
• an electronic device includes: a display device; a first camera that has a field-of-view; a second camera that has a wider field-of-view than the field-of-view of the first camera; one or more cameras; means for displaying, via the display device, a camera user interface that includes a representation of at least a portion of a field-of-view of one or more cameras displayed at a first zoom level, the camera user interface including: a first region, the first region including a representation of a first portion of the field-of-view of the first camera at the first zoom level; and a second region, the second region including a representation of a first portion of the field-of-view of the second camera at the first zoom level.
• the electronic device also includes means, while displaying, via the display device, the camera user interface that includes the representation of at least a portion of a field-of-view of the one or more cameras displayed at the first zoom level, for receiving a first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to a second zoom level; and means, responsive to receiving the first request to increase the zoom level of the representation of the portion of the field-of-view of the one or more cameras to the second zoom level, for: displaying, in the first region, at the second zoom level, a representation of a second portion of the field-of-view of the first camera that excludes at least a subset of the first portion of the field-of-view of the first camera; and displaying, in the second region, at the second zoom level, a representation of a second portion of the field-of-view of the second camera that overlaps with the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera without displaying, in the second region, a representation of the subset of the portion of the field-of-view of the first camera that was excluded from the second portion of the field-of-view of the first camera.
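Geometrically, the first region shows a centered crop of the narrow camera, and the second region is filled from the wider camera starting exactly where the first region ends, so nothing is rendered twice. A simplified sketch of that partition (the field-of-view angles and surround factor are invented values):

```swift
import Foundation

/// Partition of the displayed field-of-view between the two cameras at a
/// given zoom level, in degrees from the optical center (one side shown).
struct DualCameraPreviewLayout {
    let narrowFOV = 60.0   // first camera
    let wideFOV = 120.0    // second camera, wider
    let surround = 1.3     // second region extends 30% beyond the first

    func regions(zoom: Double) -> (first: ClosedRange<Double>, second: ClosedRange<Double>) {
        let half = (narrowFOV / zoom) / 2                 // first-region half-extent
        let outerHalf = min(wideFOV / 2, half * surround) // clipped to wide camera
        // The second region starts at the first region's edge, excluding the
        // content the first region already shows.
        return (first: -half...half, second: half...outerHalf)
    }
}

let r = DualCameraPreviewLayout().regions(zoom: 2)
print(r.first, r.second)  // -15.0...15.0 15.0...19.5
```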
• a method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface that includes a first representation of at least a portion of a field-of-view of the one or more cameras displayed at a first zoom level, the camera user interface including a plurality of zoom affordances, the plurality of zoom affordances including a first zoom affordance and a second zoom affordance.
• the method also comprises: while displaying the plurality of zoom affordances, receiving a first gesture directed to one of the plurality of zoom affordances; and in response to receiving the first gesture: in accordance with a determination that the first gesture is a gesture directed to the first zoom affordance, displaying, at a second zoom level, a second representation of at least a portion of a field-of-view of the one or more cameras; and in accordance with a determination that the first gesture is a gesture directed to the second zoom affordance, displaying, at a third zoom level, a third representation of at least a portion of a field-of-view of the one or more cameras, where the third zoom level is different from the first zoom level and the second zoom level.
• a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface that includes a first representation of at least a portion of a field-of-view of the one or more cameras displayed at a first zoom level, the camera user interface including a plurality of zoom affordances, the plurality of zoom affordances including a first zoom affordance and a second zoom affordance; while displaying the plurality of zoom affordances, receiving a first gesture directed to one of the plurality of zoom affordances; and in response to receiving the first gesture: in accordance with a determination that the first gesture is a gesture directed to the first zoom affordance, displaying, at a second zoom level, a second representation of at least a portion of a field-of-view of the one or more cameras; and in accordance with a determination that the first gesture is a gesture directed to the second zoom affordance, displaying, at a third zoom level, a third representation of at least a portion of a field-of-view of the one or more cameras, where the third zoom level is different from the first zoom level and the second zoom level.
• a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface that includes a first representation of at least a portion of a field-of-view of the one or more cameras displayed at a first zoom level, the camera user interface including a plurality of zoom affordances, the plurality of zoom affordances including a first zoom affordance and a second zoom affordance; while displaying the plurality of zoom affordances, receiving a first gesture directed to one of the plurality of zoom affordances; and in response to receiving the first gesture: in accordance with a determination that the first gesture is a gesture directed to the first zoom affordance, displaying, at a second zoom level, a second representation of at least a portion of a field-of-view of the one or more cameras; and in accordance with a determination that the first gesture is a gesture directed to the second zoom affordance, displaying, at a third zoom level, a third representation of at least a portion of a field-of-view of the one or more cameras, where the third zoom level is different from the first zoom level and the second zoom level.
• an electronic device includes: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface that includes a first representation of at least a portion of a field-of-view of the one or more cameras displayed at a first zoom level, the camera user interface including a plurality of zoom affordances, the plurality of zoom affordances including a first zoom affordance and a second zoom affordance; while displaying the plurality of zoom affordances, receiving a first gesture directed to one of the plurality of zoom affordances; and in response to receiving the first gesture: in accordance with a determination that the first gesture is a gesture directed to the first zoom affordance, displaying, at a second zoom level, a second representation of at least a portion of a field-of-view of the one or more cameras; and in accordance with a determination that the first gesture is a gesture directed to the second zoom affordance, displaying, at a third zoom level, a third representation of at least a portion of a field-of-view of the one or more cameras, where the third zoom level is different from the first zoom level and the second zoom level.
• an electronic device includes: a display device; one or more cameras; and means for displaying, via the display device, a camera user interface that includes a first representation of at least a portion of a field-of-view of the one or more cameras displayed at a first zoom level, the camera user interface including a plurality of zoom affordances, the plurality of zoom affordances including a first zoom affordance and a second zoom affordance; means, while displaying the plurality of zoom affordances, for receiving a first gesture directed to one of the plurality of zoom affordances; and means, responsive to receiving the first gesture, for: in accordance with a determination that the first gesture is a gesture directed to the first zoom affordance, displaying, at a second zoom level, a second representation of at least a portion of a field-of-view of the one or more cameras; and in accordance with a determination that the first gesture is a gesture directed to the second zoom affordance, displaying, at a third zoom level, a third representation of at least a portion of a field-of-view of the one or more cameras, where the third zoom level is different from the first zoom level and the second zoom level.
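Each zoom affordance simply carries its own target zoom level; a tap re-renders the preview at that level. A sketch using the familiar 0.5x/1x/2x labels purely as example values:

```swift
import Foundation

struct ZoomAffordance { let label: String; let zoomLevel: Double }

/// Tapping one of several zoom affordances switches the preview to that
/// affordance's level; the levels themselves are illustrative.
final class CameraZoomModel {
    let affordances = [ZoomAffordance(label: "0.5x", zoomLevel: 0.5),
                       ZoomAffordance(label: "1x",   zoomLevel: 1.0),
                       ZoomAffordance(label: "2x",   zoomLevel: 2.0)]
    private(set) var currentZoom = 1.0

    func tap(_ affordance: ZoomAffordance) { currentZoom = affordance.zoomLevel }
}

let model = CameraZoomModel()
model.tap(model.affordances[2])
print(model.currentZoom)  // 2.0 (distinct from the first and second levels)
```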
  • a method is described. The method is performed at an electronic device having a display device and one or more cameras. The method comprises: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at a first location.
• the method also comprises: while displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras, detecting a first gesture directed toward the camera user interface; in response to detecting the first gesture directed toward the camera user interface: displaying a first set of camera setting affordances at the first location, where the first set of camera setting affordances are settings for adjusting image capture for a first camera mode; and ceasing to display the first plurality of camera mode affordances indicating different modes of operation of the camera at the first location.
• the method also comprises: while displaying the first set of camera setting affordances at the first location and while the electronic device is configured to capture media in the first camera mode, receiving a second gesture directed toward the camera user interface; and in response to receiving the second gesture directed toward the camera user interface: configuring the electronic device to capture media in a second camera mode that is different from the first camera mode, and displaying a second set of camera setting affordances at the first location without displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at the first location.
  • a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at a first location.
• the one or more programs also include instructions for: while displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras, detecting a first gesture directed toward the camera user interface; in response to detecting the first gesture directed toward the camera user interface: displaying a first set of camera setting affordances at the first location, where the first set of camera setting affordances are settings for adjusting image capture for a first camera mode; and ceasing to display the first plurality of camera mode affordances indicating different modes of operation of the camera at the first location.
  • the non - transitory computer - readable storage medium also includes while displaying the first set of camera setting affordances at the first location and while the electronic device is configured to capture media in the first camera mode, receiving a second gesture directed toward the camera user interface; and in response to receiving the second gesture directed toward the camera user interface: configuring the electronic device to capture media in a second camera mode that is different from the first camera mode, and displaying a second set of camera setting affordances at the first location without displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at the first location.
  • a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at a first location.
• the transitory computer-readable storage medium also includes instructions for: while displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras, detecting a first gesture directed toward the camera user interface; in response to detecting the first gesture directed toward the camera user interface: displaying a first set of camera setting affordances at the first location, where the first set of camera setting affordances are settings for adjusting image capture for a first camera mode; and ceasing to display the first plurality of camera mode affordances indicating different modes of operation of the camera at the first location.
• the transitory computer-readable storage medium also includes instructions for: while displaying the first set of camera setting affordances at the first location and while the electronic device is configured to capture media in the first camera mode, receiving a second gesture directed toward the camera user interface; and in response to receiving the second gesture directed toward the camera user interface: configuring the electronic device to capture media in a second camera mode that is different from the first camera mode, and displaying a second set of camera setting affordances at the first location without displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at the first location.
  • an electronic device includes: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at a first location.
• the one or more programs also include instructions for: while displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras, detecting a first gesture directed toward the camera user interface; in response to detecting the first gesture directed toward the camera user interface: displaying a first set of camera setting affordances at the first location, where the first set of camera setting affordances are settings for adjusting image capture for a first camera mode; and ceasing to display the first plurality of camera mode affordances indicating different modes of operation of the camera at the first location.
• the one or more programs also include instructions for: while displaying the first set of camera setting affordances at the first location and while the electronic device is configured to capture media in the first camera mode, receiving a second gesture directed toward the camera user interface; and in response to receiving the second gesture directed toward the camera user interface: configuring the electronic device to capture media in a second camera mode that is different from the first camera mode, and displaying a second set of camera setting affordances at the first location without displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at the first location.
  • an electronic device includes: a display device; one or more cameras; means for displaying, via the display device, a camera user interface, the camera user interface including: a camera display region, the camera display region including a representation of a field-of-view of the one or more cameras; and a camera control region, the camera control region including a first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at a first location.
  • the electronic device also includes means, while displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras, for detecting a first gesture directed toward the camera user interface; and means, responsive to detecting the first gesture directed toward the camera user interface, for: displaying a first set of camera setting affordances at the first location, where the first set of camera setting affordances are settings for adjusting image capture for a first camera mode; and ceasing to display the first plurality of camera mode affordances indicating different modes of operation of the camera at the first location.
  • the electronic device also includes means, while displaying the first set of camera setting affordances at the first location and while the electronic device is configured to capture media in the first camera mode, for receiving a second gesture directed toward the camera user interface; and means, responsive to receiving the second gesture directed toward the camera user interface, for: configuring the electronic device to capture media in a second camera mode that is different from the first camera mode; and displaying a second set of camera setting affordances at the first location without displaying the first plurality of camera mode affordances indicating different modes of operation of the one or more cameras at the first location.
  • a method is described. The method is performed at an electronic device with a display device and one or more cameras. The method comprises: receiving a request to display a representation of a previously captured media item that includes first content from a first portion of a field-of-view of the one or more cameras and second content from a second portion of the field-of-view of the one or more cameras; and in response to receiving the request to display the representation of the previously captured media item: in accordance with a determination that automatic media correction criteria are satisfied, displaying, via the display device, a representation of the previously captured media item that includes a combination of the first content and the second content; and in accordance with a determination that automatic media correction criteria are not satisfied, displaying, via the display device, a representation of the previously captured media item that includes the first content and does not include the second content.
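• A minimal sketch of the branching display logic just described, assuming the correction criteria reduce to a single boolean; the names (CapturedMediaItem, representationToDisplay) are placeholders, not the specification's:

```swift
// Hypothetical sketch: which representation of a captured media item to show,
// gated by whether automatic media correction criteria are satisfied.

struct CapturedMediaItem {
    let primaryContent: String      // from the first portion of the field-of-view
    let overcaptureContent: String  // from the second portion (outside the frame)
}

func representationToDisplay(for item: CapturedMediaItem,
                             correctionCriteriaSatisfied: Bool) -> String {
    if correctionCriteriaSatisfied {
        // Corrected representation: a combination of both portions of content.
        return item.primaryContent + "+" + item.overcaptureContent
    } else {
        // Uncorrected representation: the first portion only.
        return item.primaryContent
    }
}

let item = CapturedMediaItem(primaryContent: "frame", overcaptureContent: "margin")
print(representationToDisplay(for: item, correctionCriteriaSatisfied: true))   // "frame+margin"
print(representationToDisplay(for: item, correctionCriteriaSatisfied: false))  // "frame"
```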
  • a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: receiving a request to display a representation of a previously captured media item that includes first content from a first portion of a field-of-view of the one or more cameras and second content from a second portion of the field-of-view of the one or more cameras; and in response to receiving the request to display the representation of the previously captured media item: in accordance with a determination that automatic media correction criteria are satisfied, displaying, via the display device, a representation of the previously captured media item that includes a combination of the first content and the second content; and in accordance with a determination that automatic media correction criteria are not satisfied, displaying, via the display device, a representation of the previously captured media item that includes the first content and does not include the second content.
  • a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: receiving a request to display a representation of a previously captured media item that includes first content from a first portion of a field-of-view of the one or more cameras and second content from a second portion of the field-of-view of the one or more cameras; and in response to receiving the request to display the representation of the previously captured media item: in accordance with a determination that automatic media correction criteria are satisfied, displaying, via the display device, a representation of the previously captured media item that includes a combination of the first content and the second content; and in accordance with a determination that automatic media correction criteria are not satisfied, displaying, via the display device, a representation of the previously captured media item that includes the first content and does not include the second content.
• an electronic device includes: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a request to display a representation of a previously captured media item that includes first content from a first portion of a field-of-view of the one or more cameras and second content from a second portion of the field-of-view of the one or more cameras; and in response to receiving the request to display the representation of the previously captured media item: in accordance with a determination that automatic media correction criteria are satisfied, displaying, via the display device, a representation of the previously captured media item that includes a combination of the first content and the second content; and in accordance with a determination that automatic media correction criteria are not satisfied, displaying, via the display device, a representation of the previously captured media item that includes the first content and does not include the second content.
• an electronic device includes: a display device; one or more cameras; means for displaying, via the display device, a media capture user interface that includes a representation of a field-of-view of the one or more cameras; means, while displaying, via the display device, the media capture user interface, for receiving a request to capture media; means, responsive to receiving the request to capture media, for initiating capture, via the one or more cameras, of media; means, at a first time after initiating capture, via the one or more cameras, of media, for detecting movement of the electronic device; and means, responsive to detecting movement of the electronic device at the first time after initiating capture of media, for: in accordance with a determination that a set of guidance criteria is satisfied, wherein the set of guidance criteria include a criterion that is satisfied when the detected movement of the electronic device is above a movement threshold, displaying, via the display device, a visual indication of one or more differences between a pose of the electronic device when capture of media was initiated and a pose of the electronic device at the first time after initiating capture of media.
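• A minimal sketch of the guidance check described above; the Pose representation, the threshold value, and the form of the visual indication are all chosen here for illustration only:

```swift
// Hypothetical sketch: show guidance only when device movement since capture
// began exceeds a threshold (the guidance criterion named above).

struct Pose { var pitch: Double; var yaw: Double; var roll: Double }

func guidanceIndication(initialPose: Pose,
                        currentPose: Pose,
                        movementThreshold: Double = 0.1) -> String? {
    let delta = abs(currentPose.pitch - initialPose.pitch)
              + abs(currentPose.yaw   - initialPose.yaw)
              + abs(currentPose.roll  - initialPose.roll)
    // Criteria not satisfied: movement stayed at or below the threshold.
    guard delta > movementThreshold else { return nil }
    // Visual indication of the difference between the two poses.
    return "Adjust device: pitch off by \(currentPose.pitch - initialPose.pitch)"
}

let start = Pose(pitch: 0, yaw: 0, roll: 0)
let now = Pose(pitch: 0.3, yaw: 0.05, roll: 0)
print(guidanceIndication(initialPose: start, currentPose: now) ?? "no guidance shown")
```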
• a method is described. The method is performed at a computer system with one or more cameras, wherein the computer system is in communication with one or more display devices and one or more input devices.
  • the method comprises: displaying a camera user interface with a camera preview for capturing media at a first zoom level, wherein the camera user interface includes a selectable user interface object for changing the zoom level; while displaying the camera user interface, detecting an input corresponding to selection of the selectable user interface object; and in response to detecting the input corresponding to selection of the selectable user interface object: in accordance with a determination that available light is below a threshold, changing the zoom level to a second zoom level and enabling a low-light capture mode; and in accordance with a determination that the available light is above the threshold, changing the zoom level without enabling the low-light capture mode.
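• A minimal sketch of this conditional behavior, assuming a lux reading stands in for "available light"; the threshold value and all names are illustrative, not taken from the specification:

```swift
// Hypothetical sketch: one selectable object changes the zoom level, and
// additionally enables low-light capture only when available light is low.

struct CameraState {
    var zoomLevel: Double
    var lowLightModeEnabled: Bool
}

func didSelectZoomObject(state: inout CameraState,
                         availableLightLux: Double,
                         targetZoom: Double,
                         lowLightThresholdLux: Double = 10) {
    state.zoomLevel = targetZoom  // the zoom level changes in both branches
    if availableLightLux < lowLightThresholdLux {
        state.lowLightModeEnabled = true  // enabled only below the threshold
    }
}

var cam = CameraState(zoomLevel: 1.0, lowLightModeEnabled: false)
didSelectZoomObject(state: &cam, availableLightLux: 4, targetZoom: 2.0)
print(cam.zoomLevel, cam.lowLightModeEnabled)  // 2.0 true
```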
  • a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system with one or more cameras, wherein the computer system is in communication with one or more display devices and one or more input devices, the one or more programs including instructions for: displaying a camera user interface with a camera preview for capturing media at a first zoom level, wherein the camera user interface includes a selectable user interface object for changing the zoom level; while displaying the camera user interface, detecting an input corresponding to selection of the selectable user interface object; and in response to detecting the input corresponding to selection of the selectable user interface object: in accordance with a determination that available light is below a threshold, changing the zoom level to a second zoom level and enabling a low-light capture mode; and in accordance with a determination that the available light is above the threshold, changing the zoom level without enabling the low-light capture mode.
  • a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system with one or more cameras, wherein the computer system is in communication with one or more display devices and one or more input devices, the one or more programs including instructions for: displaying a camera user interface with a camera preview for capturing media at a first zoom level, wherein the camera user interface includes a selectable user interface object for changing the zoom level; while displaying the camera user interface, detecting an input corresponding to selection of the selectable user interface object; and in response to detecting the input corresponding to selection of the selectable user interface object: in accordance with a determination that the available light is below a threshold, changing the zoom level to a second zoom level and enabling a low-light capture mode; and in accordance with a determination that the available light is above the threshold, changing the zoom level without enabling the low-light capture mode.
• a computer system includes: one or more cameras, wherein the computer system is in communication with one or more display devices and one or more input devices; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying a camera user interface with a camera preview for capturing media at a first zoom level, wherein the camera user interface includes a selectable user interface object for changing the zoom level; while displaying the camera user interface, detecting an input corresponding to selection of the selectable user interface object; and in response to detecting the input corresponding to selection of the selectable user interface object: in accordance with a determination that available light is below a threshold, changing the zoom level to a second zoom level and enabling a low-light capture mode; and in accordance with a determination that the available light is above the threshold, changing the zoom level without enabling the low-light capture mode.
• a computer system includes: one or more cameras, wherein the computer system is in communication with one or more display devices and one or more input devices.
  • Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
  • devices are provided with faster, more efficient methods and interfaces for capturing and managing media, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices.
  • Such methods and interfaces may complement or replace other methods for capturing and managing media.
• FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
• FIG. 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments.
  • FIG. 2 illustrates a portable multifunction device having a touch screen in accordance with some embodiments.
  • FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
  • FIG. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device in accordance with some embodiments.
  • FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface that is separate from the display in accordance with some embodiments.
• FIG. 5A illustrates a personal electronic device in accordance with some embodiments.
  • FIG. 5B is a block diagram illustrating a personal electronic device in accordance with some embodiments.
  • FIGS. 5C-5D illustrate exemplary components of a personal electronic device having a touch-sensitive display and intensity sensors in accordance with some embodiments.
  • FIGS. 5E-5H illustrate exemplary components and user interfaces of a personal electronic device in accordance with some embodiments.
  • FIGS. 6A-6V illustrate exemplary techniques and user interfaces for accessing media controls using an electronic device in accordance with some embodiments.
  • FIGS. 7A-7C are a flow diagram illustrating a method for accessing media controls using an electronic device in accordance with some embodiments.
  • FIGS. 8A-8V illustrate exemplary techniques and user interfaces for displaying media controls using an electronic device in accordance with some embodiments.
  • FIGS. 9A-9C are a flow diagram illustrating a method for displaying media controls using an electronic device in accordance with some embodiments.
  • FIGS. 10A-10K illustrate exemplary techniques and user interfaces for displaying a camera field-of-view using an electronic device in accordance with some embodiments.
• FIGS. 11A-11C are a flow diagram illustrating a method for displaying a camera field-of-view using an electronic device in accordance with some embodiments.
  • FIGS. 12A-12K illustrate exemplary techniques and user interfaces for accessing media items using an electronic device in accordance with some embodiments.
  • FIGS. 13A-13B are a flow diagram illustrating a method for accessing media items using an electronic device in accordance with some embodiments.
  • FIGS. 14A-14U illustrate exemplary techniques and user interfaces for modifying media items using an electronic device in accordance with some embodiments.
  • FIGS. 15A-15C are a flow diagram illustrating a method for modifying media items using an electronic device in accordance with some embodiments.
  • FIGS. 16A-16Q illustrate exemplary techniques and user interfaces for varying zoom levels using an electronic device in accordance with some embodiments.
  • FIGS. 17A-17B are a flow diagram illustrating a method for varying zoom levels using an electronic device in accordance with some embodiments.
  • FIGS. 18A-18X illustrate exemplary techniques and user interfaces for managing media using an electronic device in accordance with some embodiments.
  • FIGS. 19A-19B are a flow diagram illustrating a method for varying frame rates using an electronic device in accordance with some embodiments.
  • FIGS. 20A-20C are a flow diagram illustrating a method for accommodating light conditions using an electronic device in accordance with some embodiments.
• FIGS. 21A-21C are a flow diagram illustrating a method for providing camera indications using an electronic device in accordance with some embodiments.
  • FIGS. 22A-22AM illustrate exemplary user interfaces for editing captured media in accordance with some embodiments.
• FIGS. 23A-23B are a flow diagram illustrating a method for editing captured media using an electronic device in accordance with some embodiments.
  • FIGS. 24A-24AB illustrate exemplary user interfaces for editing captured media in accordance with some embodiments.
  • FIGS. 25A-25B are a flow diagram illustrating a method for editing captured media using an electronic device in accordance with some embodiments.
  • FIGS. 26A-26U illustrate exemplary user interfaces for managing media using an electronic device in accordance with some embodiments.
  • FIGS. 27A-27C are a flow diagram illustrating a method for managing media using an electronic device in accordance with some embodiments.
  • FIGS. 28A-28B are a flow diagram illustrating a method for providing guidance while capturing media.
  • FIGS. 29A-29P illustrate exemplary user interfaces for managing the capture of media controlled by using an electronic device with multiple cameras in accordance with some embodiments.
  • FIGS. 30A-30C are a flow diagram illustrating a method for managing the capture of media controlled by using an electronic device with multiple cameras in accordance with some embodiments.
• FIGS. 31A-31I illustrate exemplary user interfaces for displaying a camera user interface at various zoom levels using different cameras of an electronic device in accordance with some embodiments.
• FIGS. 32A-32C are a flow diagram illustrating a method for displaying a camera user interface at various zoom levels using different cameras of an electronic device in accordance with some embodiments.
• FIGS. 33A-33Q illustrate exemplary user interfaces for varying zoom levels using an electronic device in accordance with some embodiments.
  • FIGS. 34A-34B are a flow diagram illustrating a method for varying zoom levels using an electronic device in accordance with some embodiments.
  • FIGS. 35A-35I illustrate exemplary user interfaces for accessing media capture controls using an electronic device in accordance with some embodiments.
  • FIGS. 36A-36B are a flow diagram illustrating a method for accessing media capture controls using an electronic device in accordance with some embodiments.
  • FIGS. 37A-37AA illustrate exemplary user interfaces for automatically adjusting captured media using an electronic device in accordance with some embodiments.
  • FIGS. 38A-38C are a flow diagram illustrating a method for automatically adjusting captured media using an electronic device in accordance with some embodiments.
  • FIGS. 39A-39Q illustrate exemplary user interfaces for providing guidance while capturing media.
  • FIGS. 40A-40B are a flow diagram illustrating a method for providing guidance while capturing media.
• FIGS. 41A-41F illustrate exemplary user interfaces for automatically managing a media capture mode based on a set of conditions.
  • FIGS. 42A-42B are a flow diagram illustrating a method for automatically managing a media capture mode based on a set of conditions.
• FIGS. 1A-1B, 2, 3, 4A-4B, and 5A-5H provide a description of exemplary devices for performing the techniques for capturing and managing visual media.
  • FIGS. 6A-6V illustrate exemplary techniques and user interfaces for accessing media controls using an electronic device in accordance with some embodiments.
• FIGS. 7A-7C are a flow diagram illustrating a method for accessing media controls using an electronic device in accordance with some embodiments. The user interfaces in FIGS. 6A-6V are used to illustrate the processes described below, including the processes in FIGS. 7A-7C.
  • FIGS. 8A-8V illustrate exemplary techniques and user interfaces for displaying media controls using an electronic device in accordance with some embodiments.
  • FIGS. 9A-9C are a flow diagram illustrating a method for displaying media controls using an electronic device in accordance with some embodiments. The user interfaces in FIGS. 8A-8V are used to illustrate the processes described below, including the processes in FIGS. 9A-9C.
• FIGS. 10A-10K illustrate exemplary techniques and user interfaces for displaying a camera field-of-view using an electronic device in accordance with some embodiments.
• FIGS. 11A-11C are a flow diagram illustrating a method for displaying a camera field-of-view using an electronic device in accordance with some embodiments. The user interfaces in FIGS. 10A-10K are used to illustrate the processes described below, including the processes in FIGS. 11A-11C.
  • FIGS. 12A-12K illustrate exemplary techniques and user interfaces for accessing media items using an electronic device in accordance with some embodiments.
  • FIGS. 13A-13B are a flow diagram illustrating a method for accessing media items using an electronic device in accordance with some embodiments.
  • the user interfaces in FIGS. 12A-12K are used to illustrate the processes described below, including the processes in FIGS. 13A-13B.
  • FIGS. 14A-14U illustrate exemplary techniques and user interfaces for modifying media items using an electronic device in accordance with some embodiments.
  • FIGS. 15A-15C are a flow diagram illustrating a method for modifying media items using an electronic device in accordance with some embodiments. The user interfaces in FIGS. 14A-14U are used to illustrate the processes described below, including the processes in FIGS. 15A-15C.
  • FIGS. 16A-16Q illustrate exemplary techniques and user interfaces for varying zoom levels using an electronic device in accordance with some embodiments.
  • FIGS. 17A-17B are a flow diagram illustrating a method for varying zoom levels using an electronic device in accordance with some embodiments. The user interfaces in FIGS. 16A-16Q are used to illustrate the processes described below, including the processes in FIGS. 17A-17B.
  • FIGS. 18A-18X illustrate exemplary techniques and user interfaces for managing media using an electronic device in accordance with some embodiments.
  • FIGS. 19A-19B are a flow diagram illustrating a method for varying frame rates using an electronic device in accordance with some embodiments.
  • FIGS. 20A-20C are a flow diagram illustrating a method for accommodating light conditions using an electronic device in accordance with some embodiments.
• FIGS. 21A-21C are a flow diagram illustrating a method for providing camera indications using an electronic device in accordance with some embodiments.
• the user interfaces in FIGS. 18A-18X are used to illustrate the processes described below, including the processes in FIGS. 19A-19B, 20A-20C, and 21A-21C.
  • FIGS. 22A-22AM illustrate exemplary user interfaces for editing captured media in accordance with some embodiments.
  • FIGS. 23A-23B are a flow diagram illustrating a method for editing captured media using an electronic device in accordance with some embodiments.
  • FIGS. 22A-22AM are used to illustrate the processes described below, including the processes in FIGS. 23A-23B.
  • FIGS. 24A-24AB illustrate exemplary user interfaces for editing captured media in accordance with some embodiments.
  • FIGS. 25A-25B are a flow diagram illustrating a method for editing captured media using an electronic device in accordance with some embodiments.
  • FIGS. 24A-24AB are used to illustrate the processes described below, including the processes in FIGS. 25A-25B.
  • FIGS. 26A-26U illustrate exemplary user interfaces for managing media using an electronic device in accordance with some embodiments.
  • FIGS. 27A-27C are a flow diagram illustrating a method for managing media using an electronic device in accordance with some embodiments.
  • FIGS. 28A-28B are a flow diagram illustrating a method for providing guidance while capturing media.
  • the user interfaces in FIGS. 26A-26U are used to illustrate the processes described below, including the processes in FIGS. 27A-27C and FIGS. 28A-28B.
  • FIGS. 29A-29P illustrate exemplary user interfaces for managing the capture of media controlled by using an electronic device with multiple cameras in accordance with some embodiments.
  • FIGS. 30A-30C are a flow diagram illustrating a method for managing the capture of media controlled by using an electronic device with multiple cameras in accordance with some embodiments.
  • the user interfaces in FIGS. 29A-29P are used to illustrate the processes described below, including the processes in FIGS. 30A-30C.
• FIGS. 31A-31I illustrate exemplary user interfaces for displaying a camera user interface at various zoom levels using different cameras of an electronic device in accordance with some embodiments.
• FIGS. 32A-32C are a flow diagram illustrating a method for displaying a camera user interface at various zoom levels using different cameras of an electronic device in accordance with some embodiments.
• the user interfaces in FIGS. 31A-31I are used to illustrate the processes described below, including the processes in FIGS. 32A-32C.
• FIGS. 33A-33Q illustrate exemplary user interfaces for varying zoom levels using an electronic device in accordance with some embodiments.
  • FIGS. 34A-34B are a flow diagram illustrating a method for varying zoom levels using an electronic device in accordance with some embodiments.
  • the user interfaces in FIGS. 33A-33Q are used to illustrate the processes described below, including the processes in FIGS. 34A-34B.
• FIGS. 35A-35I illustrate exemplary user interfaces for accessing media capture controls using an electronic device in accordance with some embodiments.
  • FIGS. 36A-36B are a flow diagram illustrating a method for accessing media capture controls using an electronic device in accordance with some embodiments.
• the user interfaces in FIGS. 35A-35I are used to illustrate the processes described below, including the processes in FIGS. 36A-36B.
  • FIGS. 37A-37AA illustrate exemplary user interfaces for automatically adjusting captured media using an electronic device in accordance with some embodiments.
• FIGS. 37A-37AA are used to illustrate the processes described below, including the processes in FIGS. 38A-38C.
  • FIGS. 39A-39Q illustrate exemplary user interfaces for providing guidance while capturing media using an electronic device in accordance with some embodiments.
• FIGS. 40A-40B are a flow diagram illustrating a method for providing guidance while capturing media using an electronic device in accordance with some embodiments.
• the user interfaces in FIGS. 39A-39Q are used to illustrate the processes described below, including the processes in FIGS. 40A-40B.
• FIGS. 41A-41F illustrate exemplary user interfaces for automatically managing a media capture mode based on a set of conditions using an electronic device in accordance with some embodiments.
  • FIGS. 42A-42B are a flow diagram illustrating a method for automatically managing a media capture mode based on a set of conditions using an electronic device in accordance with some embodiments.
• the user interfaces in FIGS. 41A-41F are used to illustrate the processes described below, including the processes in FIGS. 42A-42B.
• the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context.
• the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
  • the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions.
  • portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California.
• Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
  • an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
  • the device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • applications such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • the various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface.
  • One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application.
  • a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
• FIG. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments.
• Touch-sensitive display 112 is sometimes called a "touch screen" for convenience and is sometimes known as or called a "touch-sensitive display system."
  • Device 100 includes memory 102 (which optionally includes one or more computer-readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input control devices 116, and external port 124.
  • Device 100 optionally includes one or more optical sensors 164.
  • Device 100 optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100).
  • Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
• the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface.
  • the intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors.
  • one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface.
  • force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact.
  • a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface.
  • the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface.
  • the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements).
  • the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure).
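• A minimal sketch of the two threshold strategies just described, using contact area as the substitute measurement; the calibration constants are illustrative assumptions:

```swift
// Hypothetical sketch: compare a substitute measurement (contact area) to an
// intensity threshold directly, or after converting it to estimated pressure.

let contactAreaMM2 = 110.0  // proxy for the force of the contact

// Strategy 1: threshold expressed in the substitute's own units (mm^2).
let areaThresholdMM2 = 95.0
let exceededDirect = contactAreaMM2 > areaThresholdMM2

// Strategy 2: convert to an estimated pressure, threshold in pressure units.
func estimatedPressure(fromArea area: Double) -> Double {
    0.004 * area  // assumed linear calibration for illustration
}
let pressureThreshold = 0.35
let exceededConverted = estimatedPressure(fromArea: contactAreaMM2) > pressureThreshold

print(exceededDirect, exceededConverted)
```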
• the term "tactile output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch.
  • the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device.
• movement of a touch-sensitive surface is, optionally, interpreted by the user as a "down click" or "up click" of a physical actuator button.
• a user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements.
• movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface.
• when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an "up click," a "down click," "roughness"), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
• device 100 is only one example of a portable multifunction device; device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components.
• the various components shown in FIG. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits.
  • Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices.
  • Memory controller 122 optionally controls access to memory 102 by other components of device 100.
  • Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102.
  • the one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data.
  • peripherals interface 118, CPU 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
  • RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals.
  • RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals.
  • RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth.
  • RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication.
  • the RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio.
• the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol.
• Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing.
  • Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118.
  • audio circuitry 110 also includes a headset jack (e.g., 212, FIG. 2).
  • the headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).
  • I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118.
  • I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, depth camera controller 169, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices.
  • the one or more input controllers 160 receive/send electrical signals from/to other input control devices 116.
  • the other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth.
  • input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse.
  • the one or more buttons optionally include an up/down button for volume control of speaker 111 and/or microphone 113.
  • the one or more buttons optionally include a push button (e.g., 206, FIG. 2).
• a quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. Patent Application 11/322,549, "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety.
• a longer press of the push button (e.g., 206) optionally turns power to device 100 on or off.
  • the functionality of one or more of the buttons are, optionally, user-customizable.
  • Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
  • Touch-sensitive display 112 provides an input interface and an output interface between the device and a user.
  • Display controller 156 receives and/or sends electrical signals from/to touch screen 112.
  • Touch screen 112 displays visual output to the user.
• the visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed "graphics"). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects.
  • Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact.
  • Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112.
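• A minimal sketch of converting a detected contact into interaction with a displayed user-interface object via rectangle hit-testing; a flat array of objects stands in for a real view hierarchy, and all names are hypothetical:

```swift
// Hypothetical sketch: map a contact point to the displayed user-interface
// object it falls within, using simple rectangle hit-testing.

struct Rect {
    var x, y, w, h: Double
    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px < x + w && py >= y && py < y + h
    }
}

struct UIObject { let name: String; let frame: Rect }

let objects = [
    UIObject(name: "shutterButton", frame: Rect(x: 160, y: 600, w: 80, h: 80)),
    UIObject(name: "zoomAffordance", frame: Rect(x: 120, y: 520, w: 60, h: 40)),
]

// A real implementation walks a view hierarchy; a flat array suffices here.
func objectHit(atX x: Double, y: Double) -> UIObject? {
    objects.first { $0.frame.contains(x, y) }
}

print(objectHit(atX: 200, y: 630)?.name ?? "background")  // "shutterButton"
```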
  • a point of contact between touch screen 112 and the user corresponds to a finger of the user.
  • Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments.
• Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112.
  • projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California.
  • a touch-sensitive display in some embodiments of touch screen 112 is, optionally, analogous to the multi-touch sensitive touchpads described in the following U.S. Patents: 6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety.
  • touch screen 112 displays visual output from device 100, whereas touch-sensitive touchpads do not provide visual output.
• a touch-sensitive display in some embodiments of touch screen 112 is described in the following applications: (1) U.S. Patent Application No. 11/381,313, "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. Patent Application No. 10/840,862, "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. Patent Application No. 10/903,964, "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. Patent Application No. 11/048,264, "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. Patent
  • Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi.
  • the user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth.
  • the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen.
  • the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
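• A minimal sketch of one way such a translation could work, refining a rough finger contact into a single pointer position by taking the weighted centroid of touched sensor cells; the sample grid and weighting are illustrative assumptions:

```swift
// Hypothetical sketch: derive a single pointer position from a rough finger
// contact by taking the weighted centroid of the touched sensor cells.

struct Sample { let x: Double; let y: Double; let weight: Double }

func pointerPosition(from samples: [Sample]) -> (x: Double, y: Double)? {
    let total = samples.reduce(0) { $0 + $1.weight }
    guard total > 0 else { return nil }
    let cx = samples.reduce(0) { $0 + $1.x * $1.weight } / total
    let cy = samples.reduce(0) { $0 + $1.y * $1.weight } / total
    return (cx, cy)
}

let contact = [Sample(x: 10, y: 20, weight: 0.6),
               Sample(x: 11, y: 20, weight: 1.0),
               Sample(x: 11, y: 21, weight: 0.8)]
print(pointerPosition(from: contact)!)  // weighted centroid of the contact patch
```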
  • device 100 in addition to the touch screen, device 100 optionally includes a touchpad for activating or deactivating particular functions.
  • the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output.
  • the touchpad is, optionally, a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
  • Device 100 also includes power system 162 for powering the various components.
  • Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
  • Device 100 optionally also includes one or more optical sensors 164.
• FIG. 1A shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106.
• Optical sensor 164 optionally includes charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) phototransistors.
  • optical sensor 164 optionally captures still images or video.
  • an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device so that the touch screen display is enabled for use as a viewfinder for still and/or video image acquisition.
  • an optical sensor is located on the front of the device so that the user’s image is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display.
  • the position of optical sensor 164 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor 164 is used along with the touch screen display for both video conferencing and still and/or video image acquisition.
  • Device 100 optionally also includes one or more depth camera sensors 175.
• FIG. 1A shows a depth camera sensor coupled to depth camera controller 169 in I/O subsystem 106.
  • Depth camera sensor 175 receives data from the environment to create a three dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor).
• in conjunction with imaging module 143 (also called a camera module), depth camera sensor 175 is optionally used to determine a depth map of different portions of an image captured by the imaging module 143.
  • a depth camera sensor is located on the front of device 100 so that the user’s image with depth information is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display and to capture selfies with depth map data.
• the depth camera sensor 175 is located on the back of the device, or on the back and the front of the device 100.
  • the position of depth camera sensor 175 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a depth camera sensor 175 is used along with the touch screen display for both video conferencing and still and/or video image acquisition.
  • a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor).
  • each depth pixel defines the position in the viewpoint's Z-axis where its corresponding two-dimensional pixel is located.
  • a depth map is composed of pixels where each pixel is defined by a value (e.g., 0 - 255).
  • the "0" value represents pixels that are located at the most distant place in a "three dimensional” scene and the "255" value represents pixels that are located closest to a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor) in the "three dimensional” scene.
  • a depth map represents the distance between an object in a scene and the plane of the viewpoint.
  • the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face).
  • the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
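• As a concrete illustration of the depth-map convention described above, the sketch below stores 8-bit per-pixel depth values where 255 is closest to the viewpoint and 0 is most distant. The DepthMap type and its helpers are hypothetical, not an API from this document.

```swift
// A minimal sketch of the depth-map convention described above
// (hypothetical type; assumed 8-bit values, 255 = nearest, 0 = farthest).
struct DepthMap {
    let width: Int
    let height: Int
    var pixels: [UInt8]  // row-major; pixels.count == width * height

    // Depth value of the two-dimensional pixel at (x, y): its position
    // along the viewpoint's Z-axis under this convention.
    func depth(atX x: Int, y: Int) -> UInt8 {
        pixels[y * width + x]
    }

    // Index of the pixel nearest the viewpoint (the largest value here).
    func nearestPixelIndex() -> Int? {
        pixels.indices.max(by: { pixels[$0] < pixels[$1] })
    }
}
```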
  • Device 100 optionally also includes one or more contact intensity sensors 165.
  • FIG. 1A shows a contact intensity sensor coupled to intensity sensor controller 159 in I/O subsystem 106.
  • Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface).
  • Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment.
• at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112).
  • at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
  • Device 100 optionally also includes one or more proximity sensors 166.
• FIG. 1A shows proximity sensor 166 coupled to peripherals interface 118.
  • proximity sensor 166 is, optionally, coupled to input controller 160 in I/O subsystem 106.
• Proximity sensor 166 optionally performs as described in U.S. Patent Application Nos. 11/241,839, “Proximity Detector In Handheld Device”; 11/240,788, “Proximity Detector In Handheld Device”;
  • the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user’s ear (e.g., when the user is making a phone call).
  • Device 100 optionally also includes one or more tactile output generators 167.
• FIG. 1A shows a tactile output generator coupled to haptic feedback controller 161 in I/O subsystem 106.
  • Tactile output generator 167 optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device).
• Tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that are capable of being sensed by a user of device 100.
  • At least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100).
  • at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
  • Device 100 optionally also includes one or more accelerometers 168.
• FIG. 1A shows accelerometer 168 coupled to peripherals interface 118.
  • accelerometer 168 is, optionally, coupled to an input controller 160 in I/O subsystem 106.
• Accelerometer 168 optionally performs as described in U.S. Patent Publication No. 20050190059, “Acceleration-based Theft Detection System for Portable Electronic Devices,” and U.S. Patent Publication No. 20060017692, “Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer,” both of which are incorporated by reference herein in their entirety.
  • information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers.
  • Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
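• As one hedged illustration of the portrait/landscape decision just described (the document does not specify the algorithm, and the names below are hypothetical), an orientation can be chosen by comparing the gravity components the accelerometer reports along the device's x- and y-axes.

```swift
// Illustrative only: not the patented method. Assumes gravity components
// along the device's x- and y-axes, in any consistent unit.
enum Orientation { case portrait, landscape }

func orientation(gravityX gx: Double, gravityY gy: Double) -> Orientation {
    // An upright (or upside-down) device sees gravity mostly on its y-axis;
    // a device on its side sees gravity mostly on its x-axis.
    abs(gy) >= abs(gx) ? .portrait : .landscape
}
```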
  • the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136.
• Memory 102 (FIG. 1A) or 370 (FIG. 3) stores device/global internal state 157, which includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display 112; sensor state, including information obtained from the device’s various sensors and input control devices 116; and location information concerning the device’s location and/or attitude.
• Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
  • Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124.
• External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.).
  • the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
  • Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel).
• Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact).
• Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
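• The sketch below illustrates the speed, velocity, and acceleration computations just described over a series of contact data. The ContactSample type and helper names are assumptions (the document does not prescribe an implementation), and samples are assumed to carry strictly increasing timestamps.

```swift
// Hypothetical contact sample: a touch position with a timestamp (seconds).
struct ContactSample { let x: Double; let y: Double; let t: Double }

// Velocity (magnitude and direction) between two samples.
// Assumes b.t > a.t, i.e., strictly increasing timestamps.
func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double) {
    let dt = b.t - a.t
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
}

// Speed (magnitude only) between two samples.
func speed(from a: ContactSample, to b: ContactSample) -> Double {
    let v = velocity(from: a, to: b)
    return (v.dx * v.dx + v.dy * v.dy).squareRoot()
}

// Acceleration: the change in velocity across three consecutive samples.
func acceleration(_ s0: ContactSample, _ s1: ContactSample,
                  _ s2: ContactSample) -> (ax: Double, ay: Double) {
    let v0 = velocity(from: s0, to: s1)
    let v1 = velocity(from: s1, to: s2)
    let dt = s2.t - s1.t
    return ((v1.dx - v0.dx) / dt, (v1.dy - v0.dy) / dt)
}
```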
• contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon).
  • at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100).
• a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware.
  • a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter).
  • Contact/motion module 130 optionally detects a gesture input by a user.
  • Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts).
  • a gesture is, optionally, detected by detecting a particular contact pattern.
  • detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon).
  • detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event.
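• A hypothetical classifier for the two contact patterns just described might look like the sketch below: a tap is a finger-down followed by a finger-up at substantially the same position, while a swipe includes intervening finger-drag events. The TouchEvent type and the movement tolerance are assumptions, not part of this document.

```swift
// Illustrative gesture classification from a recorded event sequence.
enum TouchEvent {
    case fingerDown(x: Double, y: Double)
    case fingerDrag(x: Double, y: Double)
    case fingerUp(x: Double, y: Double)
}
enum Gesture { case tap, swipe, unrecognized }

// `slop` is an assumed tolerance for "substantially the same position".
func classify(_ events: [TouchEvent], slop: Double = 10) -> Gesture {
    guard case let .fingerDown(x0, y0)? = events.first,
          case let .fingerUp(x1, y1)? = events.last else { return .unrecognized }
    let distance = ((x1 - x0) * (x1 - x0) + (y1 - y0) * (y1 - y0)).squareRoot()
    let hasDrags = events.contains { event in
        if case .fingerDrag = event { return true }
        return false
    }
    if hasDrags && distance > slop { return .swipe }      // down, drag(s), up
    if !hasDrags && distance <= slop { return .tap }      // down, up in place
    return .unrecognized
}
```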
  • Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed.
• the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.
  • graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
  • Haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 167 to produce tactile outputs at one or more locations on device 100 in response to user interactions with device 100.
• Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
  • GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing; to camera 143 as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
  • Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
  • Contacts module 137 (sometimes called an address book or contact list);
• Video conference module 139;
• Camera module 143 for still and/or video images;
• Calendar module 148;
  • Widget modules 149 which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
  • Widget creator module 150 for making user-created widgets 149-6;
• Video and music player module 152, which merges video player module and music player module;
• Map module 154;
  • Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
• contacts module 137 is, optionally, used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es), or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, e-mail 140, or IM 141; and so forth.
• telephone module 138 is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed.
  • the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies.
  • video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
• In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
• the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages.
• XMPP (Extensible Messaging and Presence Protocol), SIMPLE (Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions), and IMPS (Instant Messaging and Presence Service) are Internet-based instant messaging protocols.
  • transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS).
  • instant messaging refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
• workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
• camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
  • image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
  • browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
• calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
• widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6).
  • a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file.
  • a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
• the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).
  • search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
  • video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124).
  • device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
  • notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
• map module 154 is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions.
  • online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264.
• instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video.
  • Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein).
• These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments.
  • video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152, FIG. 1A).
  • memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above.
  • device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad.
• By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
  • the predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces.
  • the touchpad when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100.
• a “menu button” is implemented using a touchpad.
  • the menu button is a physical push button or other physical input control device instead of a touchpad.
• FIG. 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments.
• memory 102 (FIG. 1A) or 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
  • Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information.
  • Event sorter 170 includes event monitor 171 and event dispatcher module 174.
  • application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing.
  • device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
  • application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
  • Event monitor 171 receives event information from peripherals interface 118.
  • Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112, as part of a multi-touch gesture).
  • Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110).
  • Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
  • event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
  • event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
• Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display. Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application.
• the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
  • Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
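• The sketch below illustrates this hit-view determination: recurse through a view hierarchy and return the deepest view containing the initial touch location. The View type is hypothetical (not UIKit), and all frames are assumed to share one screen coordinate space.

```swift
// Illustrative hit-view search over a hypothetical view hierarchy.
struct View {
    let name: String
    var frame: (x: Double, y: Double, w: Double, h: Double)
    var subviews: [View] = []

    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= frame.x && px < frame.x + frame.w &&
        py >= frame.y && py < frame.y + frame.h
    }
}

// Returns the lowest (deepest) view in the hierarchy that contains the
// point, mirroring "the hit view is the lowest level view in which an
// initiating sub-event occurs".
func hitView(in root: View, atX px: Double, y py: Double) -> View? {
    guard root.contains(px, py) else { return nil }
    // Search subviews first so a deeper match wins over its ancestors.
    for sub in root.subviews.reversed() {
        if let hit = hitView(in: sub, atX: px, y: py) { return hit }
    }
    return root
}
```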
• Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
  • Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver 182.
  • operating system 126 includes event sorter 170.
  • application 136-1 includes event sorter 170.
  • event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
  • application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application’s user interface.
  • Each application view 191 of the application 136-1 includes one or more event recognizers 180.
  • a respective application view 191 includes a plurality of event recognizers 180.
  • one or more of event recognizers 180 are part of a separate module, such as a user interface kit or a higher level object from which application 136-1 inherits methods and other properties.
  • a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170.
  • Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192.
  • one or more of the application views 191 include one or more respective event handlers 190.
  • one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
  • a respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information.
  • Event recognizer 180 includes event receiver 182 and event comparator 184.
  • event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
  • Event receiver 182 receives event information from event sorter 170.
  • the event information includes information about a sub-event, for example, a touch or a touch movement.
  • the event information also includes additional information, such as location of the sub-event.
  • the event information optionally also includes speed and direction of the sub-event.
  • events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
  • Event comparator 184 compares the event information to predefined event or sub event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event.
  • event comparator 184 includes event definitions 186.
  • Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others.
  • sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching.
  • the definition for event 1 (187-1) is a double tap on a displayed object.
  • the double tap for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase.
  • the definition for event 2 (187-2) is a dragging on a displayed object.
  • the dragging for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end).
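• Illustratively, event definitions such as event 1 (double tap) and event 2 (dragging) can be modeled as predefined sub-event sequences, as sketched below. The names are assumptions and this is not the actual structure of event definitions 186; timing ("predetermined phase") checks are omitted for brevity.

```swift
// Hypothetical sub-event vocabulary for modeling event definitions.
enum SubEvent: Equatable { case touchBegin, touchEnd, touchMove, touchCancel }

// Event 1: double tap = begin, end, begin, end (phase timing omitted).
let doubleTapDefinition: [SubEvent] = [.touchBegin, .touchEnd, .touchBegin, .touchEnd]

func matchesDoubleTap(_ seq: [SubEvent]) -> Bool {
    seq == doubleTapDefinition
}

// Event 2: dragging = begin, one or more movements, then end (liftoff).
func matchesDrag(_ seq: [SubEvent]) -> Bool {
    guard seq.first == .touchBegin, seq.last == .touchEnd else { return false }
    let middle = seq.dropFirst().dropLast()
    return !middle.isEmpty && middle.allSatisfy { $0 == .touchMove }
}
```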
  • the event also includes information for one or more associated event handlers 190.
  • event definition 187 includes a definition of an event for a respective user-interface object.
• event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
  • the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer’s event type.
• When a respective event recognizer 180 determines that the series of sub-events do not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
  • a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers.
  • metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another.
  • metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub events are delivered to varying levels in the view or programmatic hierarchy.
  • a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized.
  • a respective event recognizer 180 delivers event information associated with the event to event handler 190.
  • Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view.
  • event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
  • event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
  • data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module.
  • object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object.
  • GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
  • event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178.
  • data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
  • event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens.
• mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
  • FIG. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments.
  • the touch screen optionally displays one or more graphics within user interface (UI) 200.
  • a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure).
  • selection of one or more graphics occurs when the user breaks contact with the one or more graphics.
  • the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100.
  • inadvertent contact with a graphic does not select the graphic.
  • a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.
• Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204.
  • menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100.
  • the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
  • device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (SIM) card slot 210, headset jack 212, and docking/charging external port 124.
  • Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process.
  • device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113.
  • Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
  • FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
  • Device 300 need not be portable.
  • device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child’s learning toy), a gaming system, or a control device (e.g., a home or industrial controller).
  • Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components.
  • Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
  • Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch screen display.
• I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 167 described above with reference to FIG. 1A), and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s) 165 described above with reference to FIG. 1A).
• Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (FIG. 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100.
• memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
  • Each of the above-identified elements in FIG. 3 is, optionally, stored in one or more of the previously mentioned memory devices.
  • Each of the above-identified modules corresponds to a set of instructions for performing a function described above.
  • the above-identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments.
  • memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.
  • FIG. 4A illustrates an exemplary user interface for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device 300.
  • user interface 400 includes the following elements, or a subset or superset thereof:
• Tray 408 with icons for frequently used applications, such as:
• Icon 416 for telephone module 138, labeled “Phone,” which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
• Icon 418 for e-mail client module 140, labeled “Mail,” which optionally includes an indicator 410 of the number of unread e-mails;
• Icon 422 for video and music player module 152, also referred to as iPod (trademark of Apple Inc.) module 152, labeled “iPod;” and
• Icons for other applications, such as:
• Icon 424 for IM module 141, labeled “Messages;”
• Icon 426 for calendar module 148, labeled “Calendar;”
• Icon 428 for image management module 144, labeled “Photos;”
• Icon 442 for workout support module 142, labeled “Workout Support;”
• Icon 444 for notes module 153, labeled “Notes;” and
• Icon 446 for a settings application or module, labeled “Settings,” which provides access to settings for device 100 and its various applications 136.
  • icon labels illustrated in FIG. 4A are merely exemplary.
• icon 422 for video and music player module 152 is labeled “Music” or “Music Player.”
  • Other labels are, optionally, used for various application icons.
  • a label for a respective application icon includes a name of an application corresponding to the respective application icon.
  • a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon.
  • FIG. 4B illustrates an exemplary user interface on a device (e.g., device 300, FIG. 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, FIG. 3) that is separate from the display 450 (e.g., touch screen display 112).
  • Device 300 also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of device 300.
  • the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 4B.
  • the touch-sensitive surface has a primary axis (e.g., 452 in FIG. 4B) that corresponds to a primary axis (e.g., 453 in FIG. 4B) on the display (e.g., 450).
• the device detects contacts (e.g., 460 and 462 in FIG. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in FIG. 4B, 460 corresponds to 468 and 462 corresponds to 470).
• while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input).
  • a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact).
  • a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact).
• Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
  • FIG. 5A illustrates exemplary personal electronic device 500.
  • Device 500 includes body 502.
• device 500 can include some or all of the features described with respect to devices 100 and 300 (e.g., FIGS. 1A-4B).
  • device 500 has touch-sensitive display screen 504, hereafter touch screen 504.
  • touch screen 504 optionally includes one or more intensity sensors for detecting intensity of contacts (e.g., touches) being applied.
  • the one or more intensity sensors of touch screen 504 (or the touch-sensitive surface) can provide output data that represents the intensity of touches.
  • the user interface of device 500 can respond to touches based on their intensity, meaning that touches of different intensities can invoke different user interface operations on device 500.
  • device 500 has one or more input mechanisms 506 and 508.
  • Input mechanisms 506 and 508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms.
  • device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, can permit attachment of device 500 with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device 500 to be worn by a user.
  • FIG. 5B depicts exemplary personal electronic device 500.
• device 500 can include some or all of the components described with respect to FIGS. 1A, 1B, and 3.
  • Device 500 has bus 512 that operatively couples I/O section 514 with one or more computer processors 516 and memory 518.
• I/O section 514 can be connected to display 504, which can have touch-sensitive component 522 and, optionally, intensity sensor 524 (e.g., contact intensity sensor).
• I/O section 514 can be connected with communication unit 530 for receiving application and operating system data, using Wi-Fi, Bluetooth, near field communication (NFC), cellular, and/or other wireless communication techniques.
  • Device 500 can include input mechanisms 506 and/or 508.
  • Input mechanism 506 is, optionally, a rotatable input device or a depressible and rotatable input device, for example.
  • Input mechanism 508 is, optionally, a button, in some examples.
  • Input mechanism 508 is, optionally, a microphone, in some examples.
  • Personal electronic device 500 optionally includes various sensors, such as GPS sensor 532, accelerometer 534, directional sensor 540 (e.g., compass), gyroscope 536, motion sensor 538, and/or a combination thereof, all of which can be operatively connected to I/O section 514.
  • Memory 518 of personal electronic device 500 can include one or more non- transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516, for example, can cause the computer processors to perform the techniques described below, including processes 700, 900, 1100, 1300, 1500, 1700, 1900, 2000, 2100, 2300, 2500, 2700, 2800, 3000, 3200, 3400, 3600, and 3800.
  • a computer-readable storage medium can be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device.
  • the storage medium is a transitory computer- readable storage medium.
  • the storage medium is a non-transitory computer- readable storage medium.
  • the non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
  • Personal electronic device 500 is not limited to the components and configuration of FIG. 5B, but can include other or additional components in multiple configurations.
• the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500 (FIGS. 1A, 3, and 5A-5B).
• For example, an image (e.g., an icon), a button, and text (e.g., a hyperlink) each optionally constitute an affordance.
• the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting.
• the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in FIG. 3 or touch-sensitive surface 451 in FIG. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input.
• a detected contact on the touch screen acts as a “focus selector” so that when an input (e.g., a press input by the contact) is detected on the touch screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input.
  • focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface.
  • the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user’s intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact).
• For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device).
• the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples.
• characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, or 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact).
  • a characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like.
  • the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user.
  • the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold.
  • a contact with a characteristic intensity that does not exceed the first threshold results in a first operation
  • a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation
  • a contact with a characteristic intensity that exceeds the second threshold results in a third operation.
  • a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
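• A minimal sketch of this three-way decision follows, assuming the mean of the samples as the characteristic intensity (one of the options listed above); the names are illustrative, not the actual algorithm.

```swift
// Illustrative: reduce intensity samples to a characteristic intensity
// (here, the mean), then compare it against two thresholds.
func characteristicIntensity(of samples: [Double]) -> Double {
    precondition(!samples.isEmpty, "need at least one intensity sample")
    return samples.reduce(0, +) / Double(samples.count)
}

enum Operation { case first, second, third }

func operation(forSamples samples: [Double],
               firstThreshold: Double,
               secondThreshold: Double) -> Operation {
    let ci = characteristicIntensity(of: samples)
    if ci <= firstThreshold { return .first }    // does not exceed first
    if ci <= secondThreshold { return .second }  // between the two thresholds
    return .third                                // exceeds second
}
```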
  • FIG. 5C illustrates detecting a plurality of contacts 552A-552E on touch-sensitive display screen 504 with a plurality of intensity sensors 524A-524D.
  • FIG. 5C additionally includes intensity diagrams that show the current intensity measurements of the intensity sensors 524A-524D relative to units of intensity.
  • the intensity measurements of intensity sensors 524A and 524D are each 9 units of intensity
  • the intensity measurements of intensity sensors 524B and 524C are each 7 units of intensity.
  • an aggregate intensity is the sum of the intensity measurements of the plurality of intensity sensors 524A-524D, which in this example is 32 intensity units.
  • each contact is assigned a respective intensity that is a portion of the aggregate intensity.
  • each of contacts 552A, 552B, and 552E are assigned an intensity of contact of 8 intensity units of the aggregate intensity
  • each of contacts 552C and 552D are assigned an intensity of contact of 4 intensity units of the aggregate intensity.
• each contact j is assigned a respective intensity Ij that is a portion of the aggregate intensity, A, in accordance with a predefined mathematical function, Ij = A·(Dj/ΣDi), where Dj is the distance of the respective contact j to the center of force, and ΣDi is the sum of the distances of all the respective contacts (e.g., i = 1 to last) to the center of force.
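• The sketch below is a direct transcription of this function (the helper name is hypothetical): each contact receives a share of the aggregate intensity A proportional to its distance Dj from the center of force.

```swift
// Ij = A * (Dj / sum of all Di), transcribed from the formula above.
func distributeIntensity(aggregate a: Double, distances: [Double]) -> [Double] {
    let total = distances.reduce(0, +)
    precondition(total > 0, "at least one contact must be at a nonzero distance")
    return distances.map { a * ($0 / total) }
}

// With A = 32 units and distances in the ratio 2 : 2 : 1 : 1 : 2, the five
// contacts receive 8, 8, 4, 4, and 8 units, matching the example above:
// distributeIntensity(aggregate: 32, distances: [2, 2, 1, 1, 2])
```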
  • the operations described with reference to FIGS. 5C-5D can be performed using an electronic device similar or identical to device 100, 300, or 500.
  • a characteristic intensity of a contact is based on one or more intensities of the contact.
  • the intensity sensors are used to determine a single characteristic intensity (e.g., a single characteristic intensity of a single contact). It should be noted that the intensity diagrams are not part of a displayed user interface, but are included in FIGS. 5C-5D to aid the reader.
  • a portion of a gesture is identified for purposes of determining a characteristic intensity.
  • a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases.
  • the characteristic intensity of the contact at the end location is, optionally, based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location).
  • a smoothing algorithm is, optionally, applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact.
  • the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm.
• these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity.
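For illustration, one of the named options, the unweighted sliding-average smoothing algorithm, is sketched below in Swift. The window size is an assumption, and the other listed algorithms (triangular, median filter, exponential) would be substituted analogously.

```swift
/// A minimal sketch of an unweighted sliding-average smoothing filter
/// applied to sampled contact intensities before the characteristic
/// intensity is determined. The window size of 3 is an assumption.
func slidingAverage(_ samples: [Double], window: Int = 3) -> [Double] {
    guard window > 1, samples.count >= window else { return samples }
    return (0...(samples.count - window)).map { start in
        samples[start..<(start + window)].reduce(0, +) / Double(window)
    }
}

// A narrow spike in the intensity samples is damped by the filter:
print(slidingAverage([2, 2, 9, 2, 2, 2])) // [4.33..., 4.33..., 4.33..., 2.0]
```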
  • the intensity of a contact on the touch-sensitive surface is, optionally, characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds.
  • the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad.
  • the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad.
• when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold.
  • intensity thresholds are consistent between different sets of user interface figures.
• An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a "light press" input.
• An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a "deep press" input.
• An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface.
• A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface.
  • the contact-detection intensity threshold is zero. In some embodiments, the contact-detection intensity threshold is greater than zero.
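For illustration, the threshold-crossing vocabulary above (light press, deep press, contact detection, liftoff) can be expressed as a transition classifier. A minimal Swift sketch with assumed threshold values follows; it is not the disclosed implementation.

```swift
/// A minimal sketch classifying an intensity transition relative to the
/// thresholds described above: contact-detection (IT0), light press (ITL),
/// and deep press (ITD). Threshold values are illustrative assumptions.
enum IntensityTransition {
    case contactDetected, lightPressInput, deepPressInput, liftoff, noChange
}

func classify(previous: Double, current: Double,
              it0: Double = 0.0, itL: Double = 1.0, itD: Double = 4.5) -> IntensityTransition {
    // A single update reports only the most significant crossing.
    if previous < itD && current >= itD { return .deepPressInput }
    if previous < itL && current >= itL { return .lightPressInput }
    if previous <= it0 && current > it0 { return .contactDetected }
    if previous > it0 && current <= it0 { return .liftoff }
    return .noChange
}
```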
  • one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold.
  • the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a“down stroke” of the respective press input).
  • the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an“up stroke” of the respective press input).
• FIGS. 5E-5H illustrate detection of a gesture that includes a press input that corresponds to an increase in intensity of a contact 562 from an intensity below a light press intensity threshold (e.g., "ITL") in FIG. 5E, to an intensity above a deep press intensity threshold (e.g., "ITD") in FIG. 5H.
  • the gesture performed with contact 562 is detected on touch-sensitive surface 560 while cursor 576 is displayed over application icon 572B corresponding to App 2, on a displayed user interface 570 that includes application icons 572A-572D displayed in predefined region 574.
  • the gesture is detected on touch-sensitive display 504.
  • the intensity sensors detect the intensity of contacts on touch-sensitive surface 560.
  • the device determines that the intensity of contact 562 peaked above the deep press intensity threshold (e.g., “ITD”).
  • Contact 562 is maintained on touch-sensitive surface 560.
• In response to detection of the gesture, and in accordance with contact 562 having an intensity that goes above the deep press intensity threshold (e.g., "ITD") during the gesture, reduced-scale representations 578A-578C (e.g., thumbnails) of recently opened documents for App 2 are displayed, as shown in FIGS. 5F-5H.
  • the intensity which is compared to the one or more intensity thresholds, is the characteristic intensity of a contact. It should be noted that the intensity diagram for contact 562 is not part of a displayed user interface, but is included in FIGS. 5E-5H to aid the reader.
  • the display of representations 578A-578C includes an animation.
  • representation 578A is initially displayed in proximity of application icon 572B, as shown in FIG. 5F.
• representation 578A moves upward and representation 578B is displayed in proximity of application icon 572B, as shown in FIG. 5G.
• representation 578A moves further upward, representation 578B moves upward toward representation 578A, and representation 578C is displayed in proximity of application icon 572B, as shown in FIG. 5H.
  • Representations 578A-578C form an array above icon 572B.
• the animation progresses in accordance with an intensity of contact 562, as shown in FIGS. 5F-5G, where the representations 578A-578C appear and move upwards as the intensity of contact 562 increases toward the deep press intensity threshold (e.g., "ITD").
• the intensity on which the progress of the animation is based is the characteristic intensity of the contact.
  • the operations described with reference to FIGS. 5E-5H can be performed using an electronic device similar or identical to device 100, 300, or 500.
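For illustration, the intensity-driven animation of FIGS. 5F-5H suggests a mapping from contact intensity to animation progress. The linear interpolation below is an assumption; the text states only that the animation progresses as intensity increases toward the deep press intensity threshold.

```swift
/// A minimal sketch of intensity-driven animation progress, as in FIGS.
/// 5F-5H: progress runs from 0 at the light press threshold (ITL) to 1 at
/// the deep press threshold (ITD). The linear mapping is an assumption.
func animationProgress(intensity: Double, itL: Double, itD: Double) -> Double {
    guard itD > itL else { return 0 }
    return min(max((intensity - itL) / (itD - itL), 0), 1)
}

// As contact 562's intensity rises toward ITD, progress approaches 1 and
// representations 578A-578C appear and move upward accordingly.
```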
• the device employs intensity hysteresis to avoid accidental inputs sometimes termed "jitter," where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold).
  • the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an“up stroke” of the respective press input).
  • the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
  • the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, and/or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold.
  • the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold.
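For illustration, press detection with hysteresis as described above can be sketched as a small state machine. The 75% proportion below is one of the proportions the text mentions; the type and property names are assumptions.

```swift
/// A minimal sketch of press detection with intensity hysteresis: the press
/// begins at or above the press-input threshold and ends only at or below a
/// lower hysteresis threshold, suppressing accidental "jitter" inputs near
/// the press-input threshold.
struct PressDetector {
    let pressThreshold: Double
    var hysteresisThreshold: Double { pressThreshold * 0.75 }
    private(set) var isPressed = false

    /// Returns true when a complete press (down stroke, then up stroke
    /// below the hysteresis threshold) has been detected.
    mutating func update(intensity: Double) -> Bool {
        if !isPressed, intensity >= pressThreshold {
            isPressed = true                 // down stroke
        } else if isPressed, intensity <= hysteresisThreshold {
            isPressed = false                // up stroke
            return true
        }
        return false
    }
}
```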
  • an“installed application” refers to a software application that has been downloaded onto an electronic device (e.g., devices 100, 300, and/or 500) and is ready to be launched (e.g., become opened) on the device.
  • a downloaded application becomes an installed application by way of an installation program that extracts program portions from a downloaded package and integrates the extracted portions with the operating system of the computer system.
  • the terms“open application” or“executing application” refer to a software application with retained state information (e.g., as part of device/global internal state 157 and/or application internal state 192).
• An open or executing application is, optionally, any one of the following three types of applications:
• an active application, which is currently displayed on a display screen of the device on which the application is being used;
• a background application (or background processes), which is not currently displayed, but for which one or more processes are being processed by one or more processors; and
• a suspended or hibernated application, which is not running, but has state information that is stored in memory (volatile and non-volatile, respectively) and that can be used to resume execution of the application.
• a closed application refers to a software application without retained state information (e.g., state information for closed applications is not stored in a memory of the device). Accordingly, closing an application includes stopping and/or removing application processes for the application and removing state information for the application from the memory of the device. Generally, opening a second application while in a first application does not close the first application. When the second application is displayed and the first application ceases to be displayed, the first application becomes a background application.
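For illustration, the application states distinguished above can be summarized as an enumeration. The following Swift sketch is a reading aid only; it is not an API defined by this disclosure.

```swift
/// A reading aid summarizing the application states described above; the
/// enumeration and case names are assumptions, not API from this disclosure.
enum ApplicationState {
    case active      // currently displayed on the device's display screen
    case background  // not displayed, but processes are being executed
    case suspended   // not running; state retained in volatile memory
    case hibernated  // not running; state retained in non-volatile memory
    case closed      // no retained state information
}
```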
• Attention is now directed towards embodiments of user interfaces and associated processes that are implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
  • FIGS. 6A-6V illustrate exemplary user interfaces for accessing media controls using an electronic device in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 7A-7C.
  • FIG. 6A illustrates electronic device 600 displaying a live preview 630 that optionally extends from the top of the display to the bottom of the display.
  • Live preview 630 is based on images detected by one or more camera sensors.
  • device 600 captures images using a plurality of camera sensors and combines them to display live preview 630.
  • device 600 captures images using a single camera sensor to display live preview 630.
• the camera user interface of FIG. 6A includes indicator region 602 and control region 606, which are overlaid on live preview 630 such that indicators and controls can be displayed concurrently with the live preview.
  • Camera display region 604 is substantially not overlaid with indicators or controls.
  • the live preview includes subject 640 and a surrounding environment.
• Live preview 630 is a representation of a (e.g., partial) field-of-view of the one or more cameras of device 600.
  • indicator region 602 is overlaid onto live preview 630 and optionally includes a colored (e.g., gray; translucent) overlay.
  • Indicator region 602 includes flash indicator 602a.
  • flash indicator 602a indicates whether the flash is on, off, or in another mode (e.g., automatic mode). In FIG. 6A, flash indicator 602a indicates to the user that the flash is off.
  • camera display region 604 includes live preview 630 and zoom affordance 622.
  • control region 606 is overlaid onto live preview 630 and optionally includes a colored (e.g., gray; translucent) overlay.
  • control region 606 includes camera mode affordances 620, additional control affordance 614, shutter affordance 610, and camera switcher affordance 612.
• Camera mode affordances 620 indicate which camera mode is currently selected and enable the user to change the camera mode.
• camera mode affordances 620a-620e are displayed, and 'Photo' camera mode 620c is indicated as the current mode in which the camera is operating by the bolding of its text.
  • Additional control affordance 614 enables the user to access additional camera controls.
• Shutter affordance 610, when activated, causes device 600 to capture media (e.g., a photo), using the one or more camera sensors, based on the current state of live preview 630 and the current state of the camera application.
  • the captured media is stored locally at electronic device 600 and/or transmitted to a remote server for storage.
• Camera switcher affordance 612, when activated, causes device 600 to switch to showing the field-of-view of a different camera in live preview 630, such as by switching between a rear-facing camera sensor and a front-facing camera sensor.
  • a user has attached a tripod accessory 601 to device 600.
  • device 600 determines that a tripod-connected condition is met.
  • the tripod-connected condition is a condition that is met when the device detects a connected tripod and is not met when the device does not detect a connected tripod.
• device 600 updates the control region to expand additional control affordance 614 and display timer control affordance 614a.
  • device 600 ceases to display timer control affordance 614a after a predetermined period of time elapses when no input directed to timer control affordance 614a is received.
• device 600 does not have a tripod accessory 601 attached. As a result, device 600 determines that the tripod-connected condition is not met. At FIG. 6A, based on the tripod-connected condition not being met, device 600 does not display timer control affordance 614a.
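For illustration, the tripod-connected behavior, showing timer control affordance 614a on detection and dismissing it after a predetermined period without input, can be sketched as follows. The class name and the 5-second interval are assumptions; the text says only "a predetermined period of time".

```swift
import Foundation

/// A minimal sketch of the tripod-connected behavior described above; not
/// the disclosed implementation.
final class TimerAffordanceController {
    private(set) var isTimerAffordanceVisible = false
    private var dismissTimer: Timer?

    func tripodConnectionDidChange(isTripodConnected: Bool) {
        guard isTripodConnected else { return }     // condition not met: do nothing
        isTimerAffordanceVisible = true             // show timer control affordance
        dismissTimer?.invalidate()
        dismissTimer = Timer.scheduledTimer(withTimeInterval: 5, repeats: false) { [weak self] _ in
            self?.isTimerAffordanceVisible = false  // no input received: hide it
        }
    }

    func userDidInteractWithTimerAffordance() {
        dismissTimer?.invalidate()                  // input received: cancel auto-dismissal
    }
}
```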
• device 600 detects, using a touch-sensitive surface, tap gesture 650a at a location that corresponds to timer control affordance 614a.
  • device 600 shifts up a border of camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby reducing the height of indicator region 602 and increasing the height of control region 606.
  • device 600 ceases to display flash indicator 602a.
  • device 600 ceases to display any indicators in indicator region 602 while indicator region 602 is in the reduced height mode.
  • device 600 replaces display of camera mode affordances 620 with adjustable timer control 634, including adjustable timer control affordances 634a-634d.
• Adjustable timer control affordances 634a-634d, when activated, change (or initiate processes for changing) a delay for capturing media when shutter affordance 610 is activated. For example, adjustable timer control affordance 634a, when activated, sets the delay to 0 seconds and adjustable timer control affordance 634b, when activated, sets the delay to 3 seconds.
  • device 600 is also no longer displaying zoom affordance 622.
• device 600 detects, using the touch-sensitive surface, tap gesture 650b at a location that corresponds to adjustable timer control affordance 634d.
• As illustrated in FIG. 6D, in response to detecting tap gesture 650b, device 600 updates adjustable timer control 634 to indicate that 'OFF' is no longer selected and that '10S' is now selected (e.g., via bolding, highlighting). Additionally, device 600 sets a self-timer delay of 10 seconds for capturing media when shutter affordance 610 is activated. In some embodiments, further in response to detecting tap gesture 650b, and without receiving additional user input, device 600 ceases to display adjustable timer control 634 after a predetermined period of time after detecting tap gesture 650b.
• while adjustable timer control 634 is displayed and indicator region 602 is in the reduced-height mode, device 600 detects, using the touch-sensitive surface, tap gesture 650c at a location that corresponds to additional control affordance 614.
• In response to detecting tap gesture 650c, device 600 shifts down a border of camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby increasing the height of indicator region 602 and reducing the height of control region 606.
• device 600 re-displays flash indicator 602a in indicator region 602.
  • device 600 displays flash indicator 602a (regardless of the state (on, off, automatic)) in the indicator region 602 when indicator region 602 is not in the reduced-height mode (e.g., when indicators are being displayed in indicator region 602).
  • device 600 replaces display of adjustable timer control 634 with camera mode affordances 620.
  • timer status indicator 602b provides an indication of the state of the self-timer.
  • timer status indicator 602b indicates that the self-timer delay is set to 10 seconds.
  • timer status indicator 602b is not displayed when the self-timer delay is disabled (or set to 0 seconds).
  • activation of (e.g., tap gesture on) timer status indicator 602b causes device 600 to display various options for changing the self-timer delay, such as in adjustable timer control 634.
  • activation of (e.g., tap gesture on) shutter affordance 610 causes device 600 to initiate capture of media (e.g., an image, a series of images) based on the current state of the device, including without flash (as indicated by flash indicator 602a) and with a ten-second self-timer delay (as indicated by timer status indicator 602b).
• media captured by device 600 includes the visual content corresponding to live preview 630 as shown in indicator region 602 and control region 606 (and, optionally, additional visual content), as described in further detail with respect to FIGS. 8A-8V.
  • the camera feature of device 600 is in use in a low-light environment, as illustrated in live preview 630. While in the low-light environment, device 600 determines, using the one or more camera sensors, ambient light sensors, and/or additional sensors that detect environmental lighting conditions, that a low-light condition is met (e.g., a condition that is met when device 600 detects that environmental lighting conditions are below a threshold (e.g., 10 lux) and that flash is not enabled, and that is not met when the device detects that environmental lighting conditions are not below the threshold or that flash is enabled (on or automatic)).
• device 600 displays (e.g., without requiring additional user input) low-light mode status indicator 602c in indicator region 602. Additionally, as illustrated in FIGS. 6F-6G, in accordance with determining that the low-light condition is met, device 600 displays (e.g., without requiring additional user input) low-light mode control affordance 614b and flash control affordance 614c in control region 606. In some embodiments, device 600 cycles (e.g., a predetermined number of times) between displays of low-light mode control affordance 614b and flash control affordance 614c in control region 606, by replacing one affordance with the other.
• low-light mode control affordance 614b and flash control affordance 614c are displayed concurrently in control region 606.
• each of low-light mode control affordance 614b and flash control affordance 614c corresponds to a different lighting condition (e.g., different ambient light levels) and the affordances are displayed in control region 606 when their corresponding lighting condition is met (and are not displayed when their corresponding lighting condition is not met).
  • a first lighting condition is met when device 600 detects that environmental lighting conditions are below a first threshold (e.g., 20 lux) and a second lighting condition is met when device 600 detects that environmental lighting conditions are below a second threshold (e.g., 10 lux).
  • the lighting conditions are based on an amount of environmental light detected by device 600 and, optionally, whether flash is enabled.
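For illustration, the two lighting conditions can be sketched as simple threshold checks using the example values from the text (20 lux and 10 lux); the names and any rule beyond the thresholds and the flash state are assumptions.

```swift
/// A minimal sketch of the two lighting conditions described above; not
/// the disclosed implementation.
struct LightingConditions {
    let firstThresholdLux: Double = 20   // condition for flash control affordance 614c
    let secondThresholdLux: Double = 10  // condition for low-light mode control affordance 614b

    func flashControlConditionMet(ambientLux: Double) -> Bool {
        ambientLux < firstThresholdLux
    }

    func lowLightConditionMet(ambientLux: Double, flashEnabled: Bool) -> Bool {
        ambientLux < secondThresholdLux && !flashEnabled
    }
}
```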
  • Device 600 optionally displays low-light mode status indicator 602c when a feature (e.g., lighting enhancement feature) corresponding to the indicator is available for use (regardless of whether the corresponding feature is enabled or disabled).
• At FIGS. 6A-6E, in accordance with device 600 determining that the low-light condition is not met, device 600 forgoes displaying low-light mode control affordance 614b and low-light mode status indicator 602c in those corresponding camera user interfaces.
• device 600 does not display low-light mode status indicator 602c in indicator region 602 when the feature (e.g., lighting enhancement feature) corresponding to the indicator is not available for use.
  • device 600 detects, using the touch-sensitive surface, tap gesture 650d at a location that corresponds to flash control affordance 614c.
  • device 600 shifts up a border of camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby decreasing the height of indicator region 602 and increasing the height of control region 606.
• device 600 ceases to display flash indicator 602a in indicator region 602.
  • device 600 continues to display flash indicator 602a (regardless of the state (on, off, automatic)) in the indicator region 602 even when indicator region 602 is in the reduced-height mode.
  • Adjustable flash control 662 includes flash-on control 662a and flash-off control 662b.
• Device 600 indicates that the flash is in the off state by, for example, emphasizing (e.g., bolding, highlighting) 'OFF' in flash-off control 662b.
• device 600 also ceases to display zoom affordance 622 in camera display region 604.
• device 600 maintains display of zoom affordance 622 in camera display region 604.
• device 600 detects, using the touch-sensitive surface, tap gesture 650e at a location that corresponds to flash-on control 662a. As illustrated in FIG. 6I, in response to detecting tap gesture 650e, device 600 updates adjustable flash control 662 to indicate that 'OFF' (corresponding to flash-off control 662b) is no longer selected and that 'ON' (corresponding to flash-on control 662a) is now selected (e.g., via bolding, highlighting).
• Further in response to detecting tap gesture 650e, and without receiving additional user input, device 600 ceases to display updated adjustable flash control 662 after a predetermined period of time after detecting tap gesture 650e and transitions to the user interface illustrated in FIG. 6J.
  • device 600 shifts down a border of camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby increasing the height of indicator region 602 and reducing the height of control region 606 (as compared to the user interface of FIG. 6H).
• device 600 re-displays flash indicator 602a, which now indicates that the flash is enabled, in indicator region 602.
• device 600 replaces display of adjustable flash control 662 with camera mode affordances 620. Further, device 600 re-displays zoom affordance 622 in camera display region 604. At FIG. 6J, in accordance with determining that the lighting condition corresponding to flash control affordance 614c continues to be met, device 600 displays (e.g., without requiring additional user input) flash control affordance 614c in control region 606. At FIG. 6J, the low-light condition is no longer met (e.g., because flash is on) and, as a result, low-light mode status indicator 602c is no longer displayed in indicator region 602, as described in more detail with respect to FIGS. 18A-18X.
  • device 600 detects, using the touch-sensitive surface, tap gesture 650f at a location that corresponds to additional control affordance 614.
  • device 600 shifts up a border of camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby decreasing the height of indicator region 602 and increasing the height of control region 606.
• device 600 ceases to display flash indicator 602a in indicator region 602.
  • device 600 replaces display of camera mode affordances 620 with camera setting affordances 626, including a first set of camera setting affordances 626a-626e.
• Camera setting affordances 626a-626e, when activated, change (or initiate processes for changing) camera settings. For example, affordance 626a, when activated, turns the flash on/off and affordance 626d, when activated, initiates a process for setting a self-timer delay (also known as a shutter timer).
  • device 600 detects, using the touch-sensitive surface, tap gesture 650g at a location that corresponds to animated image control affordance 626b (in control region 606).
  • device 600 expands display of animated image control affordance 626b to display adjustable animated image control 664, which includes a plurality of affordances 664a-664b which, when activated (e.g., via a tap), configure whether the device captures single images or a predefined number of images.
  • animated image control off option 664b is emphasized (e.g., bolded) to indicate that activation of shutter affordance 610 will capture a single image, rather than a predefined number of images.
  • device 600 detects, using the touch-sensitive surface, tap gesture 650h at a location that corresponds to animated image control affordance 626b (in control region 606).
• In response to detecting tap gesture 650h, device 600 updates adjustable animated image control 664 to cease to emphasize animated image control off option 664b and, instead, to emphasize animated image control on option 664a (e.g., by bolding "ON"). Further, in response to detecting tap gesture 650h, device 600 configures the camera to capture a predefined number of images when activation (e.g., a tap) of shutter affordance 610 is detected.
• Further in response to detecting tap gesture 650h, and without receiving additional user input, device 600 ceases to display updated adjustable animated image control 664 after a predetermined period of time after detecting tap gesture 650h and transitions to the user interface illustrated in FIG. 6N.
• In response to detecting swipe-down gesture 650i at a location that corresponds to live preview 630 in camera display region 604, device 600 transitions to display the user interface illustrated in FIG. 6N.
  • device 600 shifts down a border of camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby increasing the height of indicator region 602 and reducing the height of control region 606 (as compared to the user interface of FIG. 6M).
• device 600 re-displays flash indicator 602a, which indicates that the flash is enabled, and further displays animated image status indicator 602d, which indicates that the camera is configured to capture a predefined number of images (as described above), in indicator region 602.
  • device 600 replaces display of adjustable animated image control 664 with camera mode affordances 620.
• device 600 re-displays zoom affordance 622 in camera display region 604.
  • device 600 displays (e.g., without requiring additional user input) flash control affordance 614c in control region 606.
  • device 600 detects, using the touch-sensitive surface, tap gesture 650j at a location that corresponds to shutter affordance 610.
  • device 600 captures media (e.g., a predefined number of images) based on the current state of live preview 630 and the camera settings. The captured media is stored locally at device 600 and/or transmitted to a remote server for storage.
  • device 600 displays (e.g., by partially or fully replacing display of additional control affordance 614) media collection 624, which includes a representation of the newly captured media on top of the collection.
• media collection 624 includes only the representation of the newly captured media, and does not include representations of other, previously captured media.
  • the newly captured media was captured with flash.
• Because animated image control was enabled when shutter affordance 610 was activated, the newly captured media includes a predefined number of images (e.g., a still image and a video).
  • device 600 detects, using the touch-sensitive surface, tap gesture 650k at a location that corresponds to media collection 624.
  • device 600 ceases to display live preview 630 and, instead, displays a photo viewer user interface that includes a representation 642 of the newly captured media. Because the captured media was captured with flash enabled, representation 642 of the newly captured media is brighter than the view of live preview 630 displayed when shutter affordance 610 was activated (because the flash was activated).
  • the displayed representation 642 of the captured media includes the visual content of live preview 630 that was displayed in the camera display region 604 when the image was taken, but does not include visual content of live preview 630 that was displayed in indicator region 602 and control region 606.
  • playback includes visual playback of the visual content of live preview 630 that was displayed in the camera display region 604 when the series of images was captured, but does not include visual content of live preview 630 that was displayed in indicator region 602 and control region 606 (and also does not include recorded visual content that was not displayed in live preview 630 during the recording but that was optionally saved as part of storing the captured media).
• visual content of live preview 630 that was displayed in indicator region 602 and control region 606 during recording of the captured media is stored in the saved media, as further described with respect to FIGS. 10A-10K.
  • device 600 concurrently displays, with representation 642 of the newly captured media, an edit affordance 644a for editing the newly captured media, send affordance 644b for transmitting the newly captured media, favorite affordance 644c for marking the newly captured media as a favorite media, trash affordance 644d for deleting the newly captured media, and back affordance 644e for returning to display of live preview 630.
  • Device 600 determines that the displayed media was captured while animated image control was enabled, and, in response, displays animated image status indicator 644f.
• device 600 detects, using the touch-sensitive surface, tap gesture 650l at a location that corresponds to back affordance 644e.
• device 600 replaces display of the photo viewer user interface that includes representation 642 of the newly captured media with display of the camera user interface that includes live preview 630.
  • device 600 detects, using the touch-sensitive surface, tap gesture 650m at a location that corresponds to camera portrait mode affordance 620d.
  • device 600 displays a revised set of indicators in indicator region 602, an updated live preview 630, and updated control region 606.
  • the revised set of indicators includes previously displayed flash indicator 602a and newly displayed f-stop indicator 602e (e.g., because the newly selected mode is compatible with the features corresponding to flash indicator 602a and f-stop indicator 602e), without displaying previously displayed animated image status indicator 602d (e.g., because the newly selected mode is incompatible with the feature corresponding to animated image status indicator 602d).
  • f-stop indicator 602e provides an indication of an f-stop value (e.g., a numerical value).
• zoom affordance 622 has shifted to the left and lighting effect control 628 (which, when activated, enables changing lighting effects) is displayed in the camera display region 604.
• the size, aspect ratio, and location of camera display region 604 are the same in FIG. 6R as in FIG. 6Q.
  • Updated live preview 630 in FIG. 6R provides different visual effects as compared to live preview 630 in FIG. 6Q. For example, updated live preview 630 provides a bokeh effect and/or lighting effects whereas live preview 630 in FIG. 6Q does not provide the bokeh effect and/or lighting effects.
• In some embodiments, the zoom of objects in live preview 630 changes because of the change in camera mode (photo vs. portrait mode). In some embodiments, the zoom of objects in live preview 630 does not change despite the change in camera mode (photo vs. portrait mode).
• live preview 630 displays subject 640 using the natural light in the subject's environment and does not apply a lighting effect.
  • Lighting effect control 628 can be used to adjust the level (and type) of lighting effect that is used/applied when capturing media. In some embodiments, adjustments to the lighting effect are also reflected in live preview 630.
  • device 600 detects, using the touch-sensitive surface, swipe left gesture 650n at a location that corresponds to lighting effect control 628 to select a studio lighting effect.
  • device 600 updates lighting effect control 628 to indicate that the studio lighting effect is selected and updates display of live preview 630 to include the studio lighting effect, thereby providing the user with a representation of how media captured using the studio lighting effect will appear.
  • Device 600 also displays lighting status indicator 602f in indicator region 602. Lighting status indicator 602f includes an indication of the current value of lighting effect that is used/applied when capturing media.
• In accordance with determining that a light-adjustment condition is met (e.g., a condition that is met when the camera is in portrait mode or is otherwise able to vary lighting effects), device 600 displays (e.g., by expanding additional control affordance 614, without requiring additional user input) lighting control affordance 614d in control region 606.
• At FIG. 6S, device 600 detects, using the touch-sensitive surface, tap gesture 650o at a location that corresponds to lighting control affordance 614d.
• In response to detecting tap gesture 650o, device 600 replaces display of camera mode affordances 620 with adjustable lighting effect control 666 and provides an indication (e.g., in camera display region 604) of the current lighting effect value (e.g., 800 lux).
• display of indicators in indicator region 602 is maintained.
• tap gesture 650o results in ceasing to display indicators in indicator region 602 (such as by shifting a border of camera display region 604 and resizing indicator region 602 and control region 606, as described above).
• While displaying adjustable lighting effect control 666, device 600 detects, using the touch-sensitive surface, swipe gesture 650p at a location that corresponds to adjustable lighting effect control 666, to lower the lighting effect value.
• In response to detecting swipe gesture 650p, device 600 lowers the lighting effect value, which is reflected in live preview 630 becoming darker, updates the indication (e.g., in camera display region 604) to the updated lighting effect value (e.g., 600 lux), and updates lighting status indicator 602f in indicator region 602 to reflect the updated lighting effect value.
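For illustration, the adjustable lighting effect control behavior, a swipe lowering the value from 800 lux to 600 lux with the indication and status indicator updated to match, can be sketched as follows. The value range, the one-point-per-lux step, and the names are assumptions beyond the two example values in the text.

```swift
/// A minimal sketch of adjustable lighting effect control 666; not the
/// disclosed implementation.
struct LightingEffectControl {
    private(set) var valueLux: Double = 800
    let range: ClosedRange<Double> = 100...2000 // assumed range

    mutating func handleSwipe(deltaPoints: Double) -> Double {
        // One point of swipe travel maps to one lux in this sketch.
        valueLux = min(max(valueLux + deltaPoints, range.lowerBound), range.upperBound)
        return valueLux // callers update the indication and status indicator 602f
    }
}
```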
  • device 600 detects, using the touch-sensitive surface, tap gesture 650q at a location that corresponds to additional control affordance 614. As illustrated in FIG. 6V, in response to detecting tap gesture 650q, device 600 replaces display of adjustable lighting effect control 666 with display of camera mode affordances 620.
  • device 600 shifts back down the border of camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby increasing the height of indicator region 602 and reducing the height of control region 606.
  • Device 600 also ceases to display the indication of lighting effect value in camera display region 604, but optionally maintains display of lighting effect control 628.
  • FIGS. 7A-7C are a flow diagram illustrating a method for accessing media controls using an electronic device in accordance with some embodiments.
  • Method 700 is performed at a device (e.g., 100, 300, 500, 600) with a display device and one or more cameras (e.g., one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera)).
  • Some operations in method 700 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
• the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 700 provides an intuitive way for accessing media controls.
  • the method reduces the cognitive burden on a user for accessing media controls, thereby creating a more efficient human-machine interface.
  • the electronic device displays (702), via the display device, a camera user interface.
• the camera user interface includes (704) a camera display region (e.g., 604), the camera display region including a representation (e.g., 630) of a field-of-view of the one or more cameras.
  • the camera user interface also includes (706) a camera control region (e.g., 606), the camera control region including a plurality of control affordances (e.g., 620, 626) (e.g., a selectable user interface object) (e.g., proactive control affordance, a shutter affordance, a camera selection affordance, a plurality of camera mode affordances) for controlling a plurality of camera settings (e.g., flash, timer, filter effects, f-stop, aspect ratio, live photo, etc.) (e.g., changing a camera mode) (e.g., taking a photo) (e.g., activating a different camera (e.g., front- facing to rear-facing)).
• Providing a plurality of control affordances for controlling a plurality of camera settings in the camera control region enables a user to quickly and easily change and/or manage the plurality of camera settings.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• While a first predefined condition and a second predefined condition (e.g., environmental conditions in an environment of the device) are not met, the electronic device (e.g., 600) displays (708) the camera user interface without displaying a first control affordance (e.g., 602b, 602c) (e.g., a selectable user interface object) associated with the first predefined condition and without displaying a second control affordance associated with the second predefined condition.
  • the electronic device While displaying the camera user interface without displaying the first control affordance and without displaying the second control affordance, the electronic device (e.g., 600) detects (710) a change in conditions.
  • the electronic device In response to detecting the change in conditions (712), in accordance with a determination that the first predefined condition (e.g., the electronic device is in a dark environment) is met (e.g., now met), the electronic device (e.g., 600) displays (714) (e.g., automatically, without the need for further user input) the first control affordance (e.g., 614c, a flash setting affordance) (e.g., a control affordance that corresponds to a setting of the camera that is active or enabled as a result of the first predefined condition being met). Displaying the first control affordance in accordance with a determination that the first predefined condition is met provides quick and convenient access to the first control affordance.
  • Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the first predefined condition is met when an amount of light (e.g., amount of brightness (e.g., 20 lux, 5 lux)) in the field-of-view of the one or more cameras is below a first predetermined threshold (e.g., 10 lux), and the first control affordance is an affordance (e.g., a selectable user interface object) for controlling a flash operation.
  • Providing a first control affordance that is an affordance for controlling a flash operation when the amount of light in the field-of-view of the one or more cameras is below a first predetermined threshold provides a user with a quick and easy access to controlling the flash operation when such control is likely to be needed and/or used.
  • Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• the electronic device receives a user input corresponding to the selection of the affordance for controlling the flash operation, and, in response to receiving the user input, the electronic device can change the state of the flash operation (e.g., active (e.g., on), inactive (e.g., off), automatic (e.g., the electronic device determines in real time, based on conditions (e.g., amount of light in the field-of-view of the camera), whether the flash should be changed to inactive or active)) and/or display a user interface to change the state of the flash operation.
  • the first predefined condition is met when the electronic device (e.g., 600) is connected to (e.g., physically connected to) an accessory of a first type (e.g., 601, a stabilizing apparatus (e.g., tripod)), and the first control affordance is an affordance (e.g., 614a) (e.g., a selectable user interface object) for controlling a timer operation (e.g., an image capture timer, a capture delay timer).
  • Providing a first control affordance that is an affordance for controlling a timer operation when the electronic device is connected to an accessory of a first type provides a user with a quick and easy access to controlling the timer operation when such control is likely to be needed and/or used. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• the electronic device receives a user input corresponding to the selection of the affordance (e.g., 630) for controlling a timer operation, and, in response to receiving the user input, the electronic device can change the state (e.g., time of capture after initiating the capture of media) of the timer operation and/or display a user interface to change the state of the timer operation.
  • the first predefined condition is met when an amount of light (e.g., amount of brightness (e.g., 20 lux, 5 lux)) in the field-of-view of the one or more cameras is below a second predetermined threshold (e.g., 20 lux), and the first control affordance is an affordance (e.g., 614b) (e.g., a selectable user interface object) for controlling a low-light capture mode.
  • Providing a first control affordance that is an affordance for controlling a low-light capture mode when an amount of light in the field-of-view of the one or more cameras is below a second predetermined threshold provides a user with a quick and easy access to controlling the low-light capture mode when such control is likely to be needed and/or used. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• the electronic device receives a user input corresponding to the selection of the affordance (e.g., 650d) for controlling a low-light capture mode, and, in response to receiving the user input, the electronic device can change the state (e.g., active (e.g., on), inactive (e.g., off)) of the low-light capture mode and/or display a user interface to change the state of the low-light capture mode.
  • the first predefined condition is met when the electronic device (e.g., 600) is configured to capture images in first capture mode (e.g., a portrait mode) and the first control affordance is an affordance (e.g., 614d) (e.g., a selectable user interface object) for controlling a lighting effect operation (718) (e.g., a media lighting capture control (e.g., a portrait lighting effect control (e.g., a studio lighting, contour lighting, stage lighting))).
  • Providing a first control affordance that is an affordance for controlling a lighting effect operation when the electronic device is configured to capture images in first capture mode provides a user with a quick and easy access to controlling the lighting effect operation when such control is likely to be needed and/or used. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device receives a user input corresponding to the selection of the affordance (e.g., 650o) for controlling a lighting effect operation, and, in response to receiving the user input, the electronic device can change the state (e.g., amount of lighting) of the lighting effect and/or display a user interface to change the state of the lighting effect operation.
• While displaying the affordance (e.g., 614d) for controlling the lighting effect operation, the electronic device (e.g., 600) receives (720) a selection (e.g., a tap) of the affordance (e.g., 614d) for controlling the lighting effect operation.
• In response to receiving the selection of the affordance (e.g., 614d) for controlling the lighting effect operation, the electronic device displays (722) an affordance (e.g., 666) (e.g., a selectable user interface object) for adjusting the lighting effect operation (e.g., a slider) that, when adjusted (e.g., by dragging a slider bar on a slider between values (e.g., tick marks) on the slider), adjusts a lighting effect (e.g., lighting) applied to the representation of the field-of-view of the one or more cameras.
• the lighting effect that is adjusted also applies to the captured media (e.g., lighting associated with a studio light when the first control affordance controls a studio lighting effect operation).
• While displaying the first control affordance, the electronic device (e.g., 600) concurrently displays (724) an indication (e.g., 602f) of a current state of a property (e.g., a setting) of the electronic device (e.g., an effect of a control (e.g., an indication that a flash operation is active)) associated (e.g., showing a property or a status of the first control) with (e.g., that can be controlled by) the first control affordance.
  • Concurrently displaying an indication of a current state of a property of the electronic device while displaying the first control affordance enables a user to quickly and easily view and change the current state of a property using the first control affordance.
  • the indication (e.g., 602a, 602c) is displayed at the top of the user interface (e.g., top of phone).
  • the indication is displayed in response to changing a camera toggle (e.g., toggling between a front camera and a back camera) control.
  • the property has one or more active states and one or more inactive states and displaying the indication is in accordance with a determination that the property is in at least one of the one or more active states.
  • some operations must be activated before an indication associated with the operation is displayed in the camera user interface while some operations do not have to be active before an indication associated with the operation is displayed in the camera user interface.
• in accordance with a determination that the property is in the inactive state (e.g., is changed to being in the inactive state), the indication is not displayed or ceases to be displayed if currently displayed.
  • the property is a first flash operation setting and the current state of the property is that a flash operation is enabled.
  • the flash operation is active when the electronic device (e.g., 600) determines that the amount of light in the field-of-view of the one or more cameras is within a flash range (e.g., a range between 0 and 10 lux).
  • the flash operation being active when the electronic device determines that the amount of light in the field-of-view of the one or more cameras is within a flash range reduces power usage and improves battery life of the device by enabling the user to use the device more efficiently.
• the property is a second flash operation setting and the current state of the property is that a flash operation is disabled.
• when the flash is set to automatic, the flash operation is inactive when the electronic device (e.g., 600) determines that the amount of light in the field-of-view of the one or more cameras is not within a flash range (e.g., a range between 0 and 10 lux).
  • the flash operation being inactive when the electronic device determines that the amount of light in the field-of-view of the one or more cameras is not within a flash range reduces power usage and improves battery life of the device by enabling the user to use the device more efficiently.
• the property is an image capture mode setting and the current state of the property is that the image capture mode is enabled.
  • the electronic device is configured to, in response to an input (e.g., a single input) corresponding to a request to capture media, capture a still image and a video (e.g., a moving image).
  • Capturing a still image and a video when the property is an image capture mode setting and the current state of the property is that the image capture mode is enabled enables a user to quickly and easily capture a still image and a video.
  • Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the property is a second image capture mode setting and the current state of the property is that the second image capture mode is enabled.
• the electronic device is configured to, in response to an input (e.g., a single input) corresponding to a request to capture media, capture media using a high-dynamic-range imaging effect.
• in response to receiving a request to capture media, the electronic device (e.g., 600), via the one or more cameras, captures media that is a high-dynamic-range image.
  • Capturing media using a high-dynamic-range imaging effect when the property is a second image capture mode setting and the current state of the property is that the second image capture mode is enabled enables a user to quickly and easily capture media using the high-dynamic-range imaging effect.
  • Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
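The description does not say how the high-dynamic-range effect is produced; a common approach is to capture bracketed exposures and fuse them. The sketch below uses that assumption, with toy one-dimensional "frames" and a plain per-pixel average standing in for a real fusion algorithm.

    // A toy luminance "frame" stands in for real image data.
    typealias Frame = [Double]

    // Hypothetical bracketed capture: under-, normally, and over-exposed frames.
    func captureBracket() -> [Frame] {
        [[0.1, 0.2], [0.4, 0.5], [0.8, 0.9]]
    }

    // A crude stand-in for exposure fusion: per-pixel average of the bracket.
    func mergeHDR(_ frames: [Frame]) -> Frame {
        let count = Double(frames.count)
        let sums = frames.dropFirst().reduce(frames[0]) { acc, frame in
            zip(acc, frame).map { $0.0 + $0.1 }
        }
        return sums.map { $0 / count }
    }

    // When the HDR capture mode is enabled, one request captures and fuses a
    // bracket; otherwise the normally exposed frame is returned alone.
    func captureMedia(hdrModeEnabled: Bool) -> Frame {
        hdrModeEnabled ? mergeHDR(captureBracket()) : captureBracket()[1]
    }

    print(captureMedia(hdrModeEnabled: true)) // roughly [0.43, 0.53]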
  • the camera control region (e.g., 606) is displayed adjacent to a first side of the display device (e.g., at the bottom of a display region) and the indication is displayed adjacent to a second side of the display device (e.g., a side that is closest to the location of the one or more cameras) that is opposite the first side (e.g., the top of the camera display region).
  • in response to displaying the first control affordance (726), in accordance with a determination that the first control affordance is of a first type (e.g., a type in which a corresponding indication is always shown (e.g., a flash control)), the electronic device (e.g., 600) displays (728) a second indication associated with the first control affordance (e.g., the second indication is displayed irrespective of a state of a property associated with the first control).
  • in response to displaying the first control affordance, in accordance with a determination that the first control affordance is of a second type (e.g., a type in which a corresponding indication is conditionally shown) that is different from the first type and a determination that a second property (e.g., a setting) of the electronic device (e.g., 600) associated with the first control affordance is in an active state, the electronic device displays (730) the second indication associated with the first control.
  • in response to displaying the first control affordance, in accordance with a determination that the first control affordance is of a second type (e.g., a type in which a corresponding indication is conditionally shown) that is different from the first type and a determination that the second property (e.g., a setting) of the electronic device (e.g., 600) associated with the first control affordance is in an inactive state, the electronic device forgoes display of the second indication associated with the first control affordance.
  • some operations associated with a control must be active before an indication associated with the operation is displayed in the camera user interface, while other operations do not have to be active before an indication associated with the operation is displayed in the camera user interface.
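The two-type rule above can be summarized in a few lines: first-type controls (e.g., a flash control) always display an indication, while second-type controls display one only while the associated property is active. A minimal Swift sketch, with illustrative type names that are not from the patent:

    enum AffordanceType {
        case alwaysIndicated         // first type, e.g., a flash control
        case conditionallyIndicated  // second type: indication shown only while active
    }

    // Mirrors the rule above: a first-type control always gets an indication;
    // a second-type control gets one only while its associated property is active.
    func shouldDisplayIndication(for type: AffordanceType, propertyIsActive: Bool) -> Bool {
        switch type {
        case .alwaysIndicated:
            return true
        case .conditionallyIndicated:
            return propertyIsActive
        }
    }

    print(shouldDisplayIndication(for: .alwaysIndicated, propertyIsActive: false))        // true
    print(shouldDisplayIndication(for: .conditionallyIndicated, propertyIsActive: false)) // false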
  • in response to detecting the change in conditions (712), in accordance with a determination that the second predefined condition (e.g., the electronic device is positioned on a tripod) (e.g., a predefined condition that is different from the first predefined condition) is met (e.g., now met), the electronic device (e.g., 600) displays (716) (e.g., automatically, without the need for further user input) the second control affordance (e.g., a timer setting affordance) (e.g., a control affordance that corresponds to a setting of the camera that is active or enabled as a result of the second predefined condition being met).
  • the control affordance has an appearance that represents the camera setting that is associated with the predefined condition (e.g., a lightning bolt to represent a flash setting).
  • a settings interface is displayed for changing a state of the camera setting associated with the predefined condition.
  • the electronic device (e.g., 600) concurrently displays the first control affordance and the second control affordance. Concurrently displaying the first control affordance and the second control affordance in response to detecting the change in conditions and in accordance with a determination that the first and second predefined conditions are met provides the user with quick and convenient access to both the first control affordance and the second control affordance.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • multiple affordances are displayed.
  • the electronic device displays the first control affordance while forgoing display of the second control affordance.
  • Displaying the first control affordance while forgoing display of the second control affordance in response to detecting the change in conditions and in accordance with a determination that the first predefined condition is met and the second predefined condition is not met provides the user with quick and easy access to a control affordance that is likely to be needed and/or used while not providing the user with quick and easy access to a control affordance that is not likely to be needed and/or used.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device displays the second control affordance while forgoing display of the first control affordance. Displaying the second control affordance while forgoing display of the first control affordance in response to detecting the change in conditions and in accordance with a determination that the first predefined condition is not met and the second predefined condition is met provides the user with quick and easy access to a control affordance that is likely to be needed and/or used while not providing the user with quick and easy access to a control affordance that is not likely to be needed and/or used.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device receives selection of an affordance (e.g., 614) for navigating to the plurality of additional control affordances (e.g., an ellipses affordance).
  • in response to receiving selection of the affordance (e.g., 614) for navigating to the plurality of additional control affordances, the electronic device displays at least some of a plurality of control affordances (e.g., 626) in the camera user interface (including the first and/or second control affordances).
  • when a predefined condition is met, the electronic device can display an animation in which the control affordance pops out of the affordance for navigating to the plurality of additional control affordances.
  • the plurality of control affordances includes an affordance (e.g., 618) for navigating to a plurality of additional control affordances (e.g., an affordance for displaying a plurality of camera setting affordances) that includes at least one of the first or second control affordances.
  • in accordance with the determination that the first predefined condition is met, the first affordance is displayed adjacent to (e.g., next to, surrounded by a boundary with the additional control affordance) the affordance for navigating to the plurality of additional control affordances.
  • in accordance with the determination that the second predefined condition is met, the second affordance is displayed adjacent to (e.g., next to, surrounded by a boundary with the additional control affordance) the affordance for navigating to the plurality of additional control affordances.
  • the representation of the field-of-view of the one or more cameras extends across (e.g., over) a portion of the camera user interface that includes the first control affordance and/or the second control affordance.
  • the camera user interface extends across the entirety of the display area of the display device.
  • the representation (e.g., the preview) is displayed under all controls included in the camera user interface (e.g., transparently or translucently displayed so that the buttons are shown over portions of the representation).
  • 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 700.
  • FIGS. 8A-8V illustrate exemplary user interfaces for displaying media controls using an electronic device in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 9A-9C.
  • FIG. 8A illustrates electronic device 600 displaying a live preview 630 that optionally extends from the top of the display to the bottom of the display.
  • Live preview 630 is based on images detected by one or more camera sensors.
  • device 600 captures images using a plurality of camera sensors and combines them to display live preview 630.
  • device 600 captures images using a single camera sensor to display live preview 630.
  • the camera user interface of FIG. 8A includes indicator region 602 and control region 606, which are overlaid on live preview 630 such that indicators and controls can be displayed concurrently with the live preview.
  • Camera display region 604 is substantially not overlaid with indicators or controls.
  • the live preview includes subject 840 and a surrounding environment.
  • indicator region 602 is overlaid onto live preview 630 and optionally includes a colored (e.g., gray; translucent) overlay.
  • Indicator region 602 includes flash indicator 602a and animated image status indicator 602d.
  • Flash indicator 602a indicates whether the flash is in automatic mode, on, off, or in another mode (e.g., red-eye reduction mode).
  • Animated image status indicator 602d indicates whether the camera is configured to capture a single image or a plurality of images (e.g., in response to detecting activation of shutter affordance 610).
  • camera display region 604 includes live preview 630 and zoom affordance 622.
  • control region 606 is overlaid onto live preview 630 and optionally includes a colored (e.g., gray; translucent) overlay.
  • control region 606 includes camera mode affordances 620, a portion of media collection 624, additional control affordance 614, shutter affordance 610, and camera switcher affordance 612.
  • Camera mode affordances 620 indicate which camera mode is currently selected and enable the user to change the camera mode.
  • camera mode affordances 620a-620e are displayed, and ‘Photo’ camera mode 620c is indicated as being the current mode in which the camera is operating by the bolding of the text.
  • Media collection 624 includes representations of media (e.g., photos), such as recently captured photos. Additional control affordance 614 enables the user to access additional camera controls.
  • Shutter affordance 610, when activated, causes device 600 to capture media (e.g., a photo) based on the current state of live preview 630 and the currently selected mode.
  • the captured media is stored locally at the electronic device and/or transmitted to a remote server for storage.
  • Camera switcher affordance 612, when activated, causes device 600 to switch to showing the field-of-view of a different camera in live preview 630, such as by switching between a rear-facing camera sensor and a front-facing camera sensor.
  • device 600 detects, using a touch-sensitive surface, swipe up gesture 850a (a swipe input toward indicator region 602 and away from control region 606) at a location that corresponds to camera display region 604. In response to detecting swipe up gesture 850a, device 600 displays the user interface of FIG. 8B.
  • device 600 detects, using a touch-sensitive surface, tap gesture 850b at a location corresponding to additional control affordance 614. In response to detecting tap gesture 850b, device 600 similarly displays the user interface of FIG. 8B.
  • device 600 shifts up camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby reducing the height of indicator region 602 and increasing the height of control region 606.
  • device 600 ceases to display flash indicator 602a and animated image status indicator 602d.
  • device 600 ceases to display any indicators in indicator region 602 while it is in the reduced height mode.
  • device 600 replaces display of camera mode affordances 620 with camera setting affordances 626, including a first set of camera setting affordances 626a-626e.
  • Camera setting affordances 626a-626e, when activated, change (or initiate processes for changing) camera settings. For example, affordance 626a, when activated, turns the flash on/off and affordance 626d, when activated, initiates a process for setting a shutter timer.
  • device 600 detects, using the touch-sensitive surface, swipe down gesture 850c (a swipe input away from indicator region 602 and toward control region 606) at a location that corresponds to camera display region 604. In response to detecting swipe down gesture 850c, device 600 displays the user interface of FIG. 8C.
  • device 600 detects, using a touch-sensitive surface, tap gesture 850d at a location corresponding to additional control affordance 614. In response to detecting tap gesture 850d, device 600 similarly displays the user interface of FIG. 8C.
  • device 600 shifts down camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby increasing the height of indicator region 602 and decreasing the height of control region 606.
  • device 600 re-displays flash indicator 602a and animated image status indicator 602d.
  • device 600 replaces display of camera setting affordances 626 with camera mode affordances 620.
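The swipe-up/swipe-down behavior of FIGS. 8A-8C amounts to toggling between two layout states: camera mode affordances with a taller indicator region, or camera setting affordances with a taller control region. A hypothetical Swift state model; the heights and the 40-point shift are illustrative values, not from the patent:

    enum ControlRegionContent {
        case modeAffordances, settingAffordances
    }

    struct CameraUILayout {
        var controlContent = ControlRegionContent.modeAffordances
        var indicatorHeight = 100.0  // illustrative heights in points
        var controlHeight = 160.0

        // Swipe up (or tap the additional-control affordance): the camera
        // display region shifts up, the indicator region shrinks, the control
        // region grows, and setting affordances replace mode affordances.
        mutating func expandControls() {
            controlContent = .settingAffordances
            indicatorHeight -= 40
            controlHeight += 40
        }

        // Swipe down: restore mode affordances and the original heights.
        mutating func collapseControls() {
            controlContent = .modeAffordances
            indicatorHeight += 40
            controlHeight -= 40
        }
    }

    var layout = CameraUILayout()
    layout.expandControls()
    print(layout.controlContent, layout.indicatorHeight, layout.controlHeight)
    // settingAffordances 60.0 200.0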
  • device 600 detects, using the touch-sensitive surface, swipe right gesture 850e at a location that corresponds to media collection 624.
  • in response to detecting swipe right gesture 850e, device 600 slides the remainder of media collection 624 onto the display, which covers additional control affordance 614. As a result, device 600 ceases to display additional control affordance 614.
  • device 600 detects, using the touch-sensitive surface, swipe left gesture 850f at a location that corresponds to media collection 624.
  • in response to detecting swipe left gesture 850f, device 600 slides media collection 624 partially off of the display in the left direction, which reveals additional control affordance 614. As a result, device 600 displays additional control affordance 614.
  • device 600 detects, using the touch-sensitive surface, swipe left gesture 850g at a location that corresponds to camera display region 604 (on live preview 630).
  • device 600 transitions among graphical views of FIGS. 8F-8H.
  • device 600 begins the transition among graphical views of FIGS. 8F-8H in response to detecting a start of a swipe left gesture 850g (in FIG. 8E), and the transition continues as the swipe left gesture 850g progresses (without detecting lift-off of the gesture), as shown in FIGS. 8F-8G.
  • device 600 shifts a border of camera display region 604 to the left (the direction of swipe left gesture 850g) without shifting live preview 630.
  • Shifting camera display region 604 causes display of a vertical portion of visual boundary 608 and causes display of a colored (e.g., gray) overlay in the area that camera display region 604 has vacated (e.g., on the right side of the display, thereby indicating to the user that device 600 is detecting swipe left gesture 850g).
  • a portion of visual boundary 608 is shown outside of (to the left of) device 600 to aid the reader's understanding and is not a visual element of the user interface of device 600.
  • device 600 ceases to display indicators 602a and 602d of indicator region 602. Similarly, device 600 updates camera mode affordances 620 to slide affordance 620b to the left and off the display and to slide ‘Pano’ camera mode affordance 620f onto the display from the right. ‘Photo’ camera mode is no longer indicated as being the current mode and, instead, portrait camera mode is indicated as being the current mode (by the bolding of the text of ‘Portrait’ camera mode affordance 620d and/or by being centered on the display).
  • in response to left swipe input 850g, device 600 also optionally provides a tactile output 860 to indicate to the user that the camera mode is changing.
  • device 600 overlays camera display region 604 with a colored (e.g., gray; translucent) overlay and/or device 600 dims live preview 630 and/or device 600 dims the display and/or device 600 blurs the display (including live preview 630).
  • device 600 displays a revised set of indicators in indicator region 602, an updated live preview 630, and updated control region 606.
  • the revised set of indicators includes previously displayed flash indicator 602a and newly displayed f-stop indicator 602e (e.g., because the newly selected mode is compatible with the features corresponding to flash indicator 602a and f-stop indicator 602e), without displaying previously displayed animated image status indicator 602d (e.g., because the newly selected mode is incompatible with the feature corresponding to animated image status indicator 602d).
  • f-stop indicator 602e provides an indication of an f-stop value (e.g., a numerical value).
  • zoom affordance 622 has shifted to the left and lighting effect control 628 (which, when activated, enables changing lighting effects) is displayed in camera display region 604.
  • the size, aspect ratio, and location of camera display region 604 is the same in FIG. 8E as in FIG. 8H.
  • Updated live preview 630 in FIG. 8H provides different visual effects as compared to live preview 630 in FIG. 8E.
  • updated live preview 630 provides a bokeh effect and/or lighting effects whereas live preview 630 in FIG. 8E does not provide the bokeh effect and/or lighting effects.
  • the zoom of objects in live preview 630 changes because of the change in camera mode (photo vs. portrait mode).
  • the zoom of objects in live preview 630 does not change despite the change in camera mode (photo vs. portrait mode).
  • device 600 detects, using the touch-sensitive surface, swipe left gesture 850h at a location that corresponds to camera mode affordances 620 (in control region 606), rather than on live preview 630 in camera display region 604.
  • unlike swipe gesture 850g, which causes camera display region 604 to shift while transitioning to the portrait camera mode, swipe left gesture 850h causes the device to transition to the portrait camera mode of FIG. 8H without shifting camera display region 604.
  • the device can receive either input to transition camera modes, but displays different animations during the transitions to the updated camera mode.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850i at a location that corresponds to additional control affordance 614. As illustrated in FIG. 8I, in response to detecting tap gesture 850i, device 600 shifts up camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby reducing the height of indicator region 602 and increasing the height of control region 606. In addition to reducing the height of indicator region 602, device 600 ceases to display flash indicator 602a and f-stop indicator 602e. In some examples, device 600 ceases to display any indicators in indicator region 602 while it is in the reduced height mode for the indicator region.
  • device 600 replaces display of camera mode affordances 620 with camera setting affordances 626, including a second set of camera setting affordances 626a, 626c, 626d-626f.
  • Camera setting affordances 626a, 626c, 626d-626f, when activated, change (or initiate processes for changing) camera settings.
  • the first set of camera setting affordances are different from the second set of camera setting affordances.
  • affordance 626a is displayed for both the photo camera mode and the portrait camera mode, but affordance 626b for enabling/disabling live photos is not displayed for the portrait camera mode and, instead, affordance 626f is displayed, which, when activated, initiates a process for setting an f-stop value.
  • detecting a swipe up gesture on camera display region 604 in FIG. 8H causes device 600 to similarly display the user interface of FIG. 8I.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850j at a location that corresponds to aspect ratio control affordance 626c (in control region 606) while in the portrait camera mode.
  • device 600 expands display of aspect ratio control affordance 626c to display adjustable aspect ratio control 818, which includes a plurality of affordances 818a-818d which, when activated (e.g., via a tap), change the aspect ratio of camera display region 604.
  • 4:3 aspect ratio affordance 818b is bolded to indicate that the aspect ratio of camera display region 604 is 4:3, a non-square aspect ratio.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850k at a location that corresponds to square aspect ratio affordance 818a.
  • in response to detecting tap gesture 850k, device 600 changes the aspect ratio of camera display region 604 to be square. As a result, device 600 also increases the height of one or both of indicator region 602 and control region 606. As illustrated in FIG. 8K, lighting effect control 628 is now displayed in control region 606 because the height of control region 606 has increased.
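The height changes in FIG. 8K follow from simple geometry: the camera display region spans the full display width at the selected aspect ratio, and the unused vertical space is divided between the indicator and control regions. A Swift sketch with assumed display dimensions; the even split is a simplification, since in the figures the control region takes the larger share:

    // Heights of the indicator and control regions for a full-width camera
    // display region whose height is displayWidth * previewHeightPerWidth.
    func regionHeights(displayWidth: Double, displayHeight: Double,
                       previewHeightPerWidth: Double) -> (indicator: Double, control: Double) {
        let previewHeight = displayWidth * previewHeightPerWidth
        let leftover = max(0, displayHeight - previewHeight)
        return (indicator: leftover / 2, control: leftover / 2)  // simplified even split
    }

    // 4:3 capture on an assumed 390 x 844 pt display: the preview is 520 pt tall.
    print(regionHeights(displayWidth: 390, displayHeight: 844, previewHeightPerWidth: 4.0 / 3.0))
    // Square capture leaves more vertical space, so both regions grow:
    print(regionHeights(displayWidth: 390, displayHeight: 844, previewHeightPerWidth: 1.0))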
  • device 600 detects, using the touch-sensitive surface, tap gesture 850l at a location that corresponds to ‘Photo’ camera mode 620c to change the mode in which the camera is operating.
  • device 600 changes the camera mode from portrait camera mode to photo camera mode. Although the camera mode has changed and the f-stop indicator 602e is no longer displayed, the size, aspect ratio, and location of camera display region 604 is the same in both FIGS. 8K and 8L. The ‘Photo’ camera mode affordance is now bolded to indicate that the photo camera mode is currently active.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850m at a location that corresponds to aspect ratio indicator 602g.
  • device 600 replaces display of camera mode affordances 620 in control region 606 with display of adjustable aspect ratio control 818, including affordances 818a-818d which, when activated (e.g., via a tap), change the aspect ratio of camera display region 604, as discussed above.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850n at a location that corresponds to aspect ratio control affordance 626c.
  • device 600 contracts the display of aspect ratio control affordance 626c to cease display of adjustable aspect ratio control 818.
  • device 600 detects, using the touch-sensitive surface, tap gestures 850o, 850p, and 850q at a location that corresponds to zoom affordance 622.
  • in response to detecting tap gesture 850o, as shown in FIG. 8O, device 600 updates a zoom of live preview 630 (e.g., by switching camera sensors from a first camera sensor to a second camera sensor with a different field-of-view) and updates zoom affordance 622 to indicate the current zoom.
  • in response to detecting tap gesture 850p, as shown in FIG. 8P, device 600 updates a zoom of live preview 630 (e.g., by switching from the second camera sensor to a third camera sensor with a different field-of-view) and updates zoom affordance 622 to indicate the current zoom.
  • in response to detecting tap gesture 850q, device 600 updates a zoom of live preview 630 (e.g., by switching from the third camera sensor to the first camera sensor with a different field-of-view) and updates zoom affordance 622 to indicate the current zoom.
  • the controls in control region 606 have not changed and the indicators in indicator region 602 have not changed.
  • while displaying camera setting affordances 626, device 600 detects, using the touch-sensitive surface, swipe down gesture 850r at a location that corresponds to live preview 630 in camera display region 604. In response to detecting swipe down gesture 850r, device 600 replaces display of camera setting affordances 626 with camera mode affordances 620, as shown in FIG. 8R. In some embodiments, device 600 also shifts down camera display region 604 (while maintaining the same size and aspect ratio) and visual boundary 608, thereby increasing the height of indicator region 602 and decreasing the height of control region 606. In some embodiments, device 600 maintains display of aspect ratio indicator 602g for FIGS. 8K-8S because the square aspect ratio allows indicator region 602 to have a height that more readily accommodates indicators while the camera setting affordances 626 are displayed.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850s at a location that corresponds to shutter affordance 610.
  • device 600 captures media (e.g., a photo, a video) based on the current state of live preview 630.
  • the captured media is stored locally at the electronic device and/or transmitted to a remote server for storage.
  • device 600 replaces display of additional control affordance 614 with media collection 624, which includes a representation of the newly captured media on top of the collection.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850t at a location that corresponds to media collection 624.
  • in response to detecting tap gesture 850t, as shown in FIG. 8T, device 600 ceases to display live preview 630 and, instead, displays a photo viewer user interface that includes a representation 842 of newly captured media (e.g., a photo, a frame of a video).
  • Device 600 concurrently displays, with representation 842 of the newly captured media, edit affordance 644a for editing the newly captured media, send affordance 644b for transmitting the newly captured media, favorite affordance 644c for marking the newly captured media as a favorite media, and trash affordance 644d for deleting the newly captured media.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850u at a location that corresponds to edit affordance 644a.
  • device 600 displays an edit user interface for editing the newly captured media.
  • the edit user interface includes aspect editing affordances 846a-846d, with square aspect editing affordance 846a highlighted to indicate that the media was captured at the square aspect ratio.
  • device 600 detects, using the touch-sensitive surface, tap gesture 850v at a location that corresponds to 4:3 aspect ratio editing affordance 846b.
  • device 600 updates display of the representation of the media from the square aspect ratio to a 4:3 aspect ratio while maintaining the visual content of the media as displayed in the square aspect ratio and adding visual content, captured in response to tap gesture 850s on shutter affordance 610, that extends beyond the square aspect ratio visual content.
  • 4:3 aspect editing affordance 846b is highlighted to indicate that the media is being shown at the expanded 4:3 aspect ratio.
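The edit flow of FIGS. 8T-8V implies that capture retains more visual content than the selected aspect ratio displays, so a later re-crop is nondestructive. One way to model that in Swift, with hypothetical types and a centered crop; the pixel dimensions are illustrative:

    struct StoredCapture {
        let fullWidth: Double   // the full captured content, e.g., 3:4 sensor output
        let fullHeight: Double
    }

    // Centered crop of the stored content for a requested width:height aspect
    // ratio. The stored pixels are never discarded, so switching from square
    // back to 4:3 recovers the extra content outside the square crop.
    func cropRect(for capture: StoredCapture, aspectWidth: Double, aspectHeight: Double)
        -> (x: Double, y: Double, width: Double, height: Double) {
        let targetHeight = capture.fullWidth * aspectHeight / aspectWidth
        if targetHeight <= capture.fullHeight {
            return (0, (capture.fullHeight - targetHeight) / 2, capture.fullWidth, targetHeight)
        } else {
            let targetWidth = capture.fullHeight * aspectWidth / aspectHeight
            return ((capture.fullWidth - targetWidth) / 2, 0, targetWidth, capture.fullHeight)
        }
    }

    let capture = StoredCapture(fullWidth: 3024, fullHeight: 4032) // portrait 3:4 content
    print(cropRect(for: capture, aspectWidth: 1, aspectHeight: 1)) // square: (0, 504, 3024, 3024)
    print(cropRect(for: capture, aspectWidth: 3, aspectHeight: 4)) // full:   (0, 0, 3024, 4032)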
  • FIGS. 9A-9C are a flow diagram illustrating a method for displaying media controls using an electronic device in accordance with some embodiments.
  • Method 900 is performed at a device (e.g., 100, 300, 500, 600) with a display device and one or more cameras (e.g., one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera)).
  • Some operations in method 900 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
  • the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 900 provides an intuitive way for displaying media controls.
  • the method reduces the cognitive burden on a user for displaying media controls, thereby creating a more efficient human-machine interface.
  • the electronic device (e.g., 600) displays (902), via the display device, a camera user interface.
  • the camera user interface includes (e.g., the electronic device displays concurrently, in the camera user interface) a camera display region, the camera display region including a representation (e.g., 630) of a field-of-view of the one or more cameras (904).
  • the camera user interface includes (e.g., the electronic device displays concurrently, in the camera user interface) a camera control region (e.g., 606) the camera control region including a plurality of camera mode affordances (e.g., 620) (e.g., a selectable user interface object) (e.g., affordances for selecting different camera modes (e.g., slow motion, video, photo, portrait, square, panoramic, etc.)) at a first location (906) (e.g., a location above an image capture affordance (e.g., a shutter affordance that, when activated, captures an image of the content displayed in the camera display region)).
  • each camera mode (e.g., video, photo/still, portrait, slow-motion, panoramic modes) has a plurality of settings (e.g., for a portrait camera mode: a studio lighting setting, a contour lighting setting, a stage lighting setting) with multiple values (e.g., levels of light for each setting) of the mode (e.g., portrait mode) that a camera (e.g., a camera sensor) is operating in to capture media (including post-processing performed automatically after capture).
  • camera modes are different from modes which do not affect how the camera operates when capturing media or do not include a plurality of settings (e.g., a flash mode having one setting with multiple values (e.g., inactive, active, auto)).
  • camera modes allow a user to capture different types of media (e.g., photos or video) and the settings for each mode can be optimized to capture a particular type of media corresponding to a particular mode (e.g., via post processing) that has specific properties (e.g., shape (e.g., square, rectangle), speed (e.g., slow motion, time elapse), audio, video).
  • when the electronic device is configured to operate in a still photo mode, the one or more cameras of the electronic device, when activated, capture media of a first type (e.g., rectangular photos) with particular settings (e.g., flash setting, one or more filter settings); when the electronic device is configured to operate in a square mode, the one or more cameras of the electronic device, when activated, capture media of a second type (e.g., square photos) with particular settings (e.g., flash setting and one or more filters); when the electronic device is configured to operate in a slow motion mode, the one or more cameras of the electronic device, when activated, capture media of a third type (e.g., slow motion videos) with particular settings (e.g., flash setting, frames per second capture speed); when the electronic device is configured to operate in a portrait mode, the one or more cameras of the electronic device capture media of a fifth type (e.g., portrait photos (e.g., photos with blurred backgrounds)) with particular settings.
  • the display of the representation (e.g., 630) of the field-of-view changes to correspond to the type of media that will be captured by the mode (e.g., the representation is rectangular while the electronic device (e.g., 600) is operating in a still photo mode and the representation is square while the electronic device is operating in a square mode).
  • the plurality of camera setting affordances include an affordance (e.g., 618a-618d) (e.g., a selectable user interface object) for configuring the electronic device (e.g., 600) to capture media that, when displayed, is displayed with a first aspect ratio (e.g., 4 by 3, 16 by 9) in response to a first request to capture media.
  • the electronic device receives selection of the affordance (e.g., 618a-618d) and, in response, the electronic device displays a control (e.g., a boundary box 608) that can be moved to change the first aspect ratio to a second aspect ratio.
  • the representation (e.g., 630) of the field-of-view of the one or more cameras is displayed at a first zoom level (e.g., 1x zoom) (908).
  • the electronic device (e.g., 600) receives (910) a first request (e.g., a tap on the display device) to change the zoom level of the representation.
  • in response to receiving the first request to change the zoom level of the representation (e.g., 630) (912), in accordance with a determination that the request to change the zoom level of the representation corresponds to a request to increase the zoom level of the representation, the electronic device (e.g., 600) displays (914) a second representation of the field-of-view of the one or more cameras at a second zoom level (e.g., 2x zoom) larger than the first zoom level.
  • in response to receiving the first request to change the zoom level of the representation (912), in accordance with a determination that the request to change the zoom level of the representation corresponds to a request to decrease the zoom level of the representation (e.g., 630), the electronic device (e.g., 600) displays (916) a third representation of the field-of-view of the one or more cameras at a third zoom level (e.g., 0.5x zoom) smaller than the first zoom level.
  • the difference between the magnification of the zoom levels is uneven (e.g., between 0.5x and 1x (e.g., 0.5x difference) and between 1x and 2x (e.g., 1x difference)).
  • while displaying the representation (e.g., 630) of the field-of-view of the one or more cameras at a fourth zoom level (e.g., a current zoom level (e.g., 0.5x, 1x, or 2x zoom)), the electronic device (e.g., 600) receives (918) a second request (e.g., a tap on the display device) to change the zoom level of the representation.
  • in response to receiving the second request to change the zoom level of the representation (920), in accordance with a determination that the fourth zoom level is the second zoom level (e.g., 2x zoom) (and, in some embodiments, the second request to change the zoom level of the representation corresponds to a second request to increase the zoom level of the representation), the electronic device (e.g., 600) displays (922) a fourth representation of the field-of-view of the one or more cameras at the third zoom level (e.g., 0.5x zoom).
  • in response to receiving the second request to change the zoom level of the representation (920), in accordance with a determination that the fourth zoom level is the third zoom level (e.g., 0.5x) (and, in some embodiments, the second request to change the zoom level of the representation corresponds to a second request to increase the zoom level of the representation), the electronic device (e.g., 600) displays (924) a fifth representation of the field-of-view of the one or more cameras at the first zoom level (e.g., 1x zoom).
  • in response to receiving the second request to change the zoom level of the representation (920), in accordance with a determination that the fourth zoom level is the first zoom level (e.g., 1x) (and, in some embodiments, the second request to change the zoom level of the representation corresponds to a second request to increase the zoom level of the representation), the electronic device (e.g., 600) displays (926) a sixth representation of the field-of-view of the one or more cameras at the second zoom level (e.g., 2x).
  • the camera user interface includes an affordance (e.g., 622) that, when selected, cycles through a set of predetermined zoom values (e.g., cycles from 0.5x, to 1x, to 2x, and then back to 0.5x, or cycles from 2x to 1x to 0.5x, and then back to 2x).
  • Providing an affordance that, when selected, cycles through a set of predetermined zoom values provides visual feedback to a user of the selectable predetermined zoom values.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • when the zoom level is an upper limit zoom level (e.g., 2x) and in response to a request to increase zoom, the electronic device (e.g., 600) changes the zoom level to 0.5x.
  • when the zoom level is a lower limit zoom level (e.g., 0.5x) and in response to a request to decrease zoom, the electronic device (e.g., 600) changes the zoom level to 2x.
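Taken together, the bullets above describe a zoom affordance that steps through a fixed list of zoom levels and wraps at both ends. A minimal Swift sketch using the 0.5x/1x/2x example values; as noted above, each step may correspond to switching camera sensors:

    let zoomLevels = [0.5, 1.0, 2.0]

    // Advances to the next predetermined zoom level, wrapping from 2x back to 0.5x.
    func nextZoomLevel(after current: Double) -> Double {
        guard let index = zoomLevels.firstIndex(of: current) else { return 1.0 }
        return zoomLevels[(index + 1) % zoomLevels.count]
    }

    var zoom = 1.0
    for _ in 0..<4 {
        zoom = nextZoomLevel(after: zoom)
        print(zoom) // 2.0, 0.5, 1.0, 2.0
    }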
  • while displaying the camera user interface, the electronic device (e.g., 600) detects (928) a first gesture (e.g., 850g, 850h; a touch gesture (e.g., a swipe)) on the camera user interface.
  • the electronic device modifies (930) an appearance of the camera control region (e.g., 606), including, in accordance with a determination that the first gesture is a gesture of a first type (e.g., a swipe gesture on the camera mode affordances) (e.g., a gesture at the first location), displaying (932) one or more additional camera mode affordances (e.g., 620f) (e.g., a selectable user interface object) at the first location (e.g., scrolling the plurality of camera mode affordances such that one or more displayed camera mode affordances are no longer displayed, and one or more additional camera mode affordances are displayed at the first location).
  • Displaying one or more additional camera mode affordances in accordance with a determination that the first gesture is a gesture of a first type enables a user to quickly and easily access other camera mode affordances.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the gesture of the first type is movement of a contact (e.g., 850h, a swipe on display device) on at least one of the plurality of camera mode affordances (e.g., 620) (e.g., swipe across two or more camera mode affordances or a portion of a region associated with the plurality of camera affordances).
  • the first gesture is of the first type and detecting the first gesture includes detecting a first portion (e.g., an initial portion, a contact followed by a first amount of movement) of the first gesture and a second portion (a subsequent portion, a continuation of the movement of the contact) of the first gesture.
  • in response to detecting the first portion of the first gesture, the electronic device displays, via the display device, a boundary (e.g., 608) that includes one or more discrete boundary elements (e.g., a single, continuous boundary or a boundary made up of discrete elements at each corner) enclosing (e.g., surrounding, bounding in) at least a portion of the representation of the field-of-view of the one or more cameras (e.g., a boundary (e.g., frame) displayed around the representation (e.g., camera preview) of the field-of-view of the one or more cameras).
  • Displaying a boundary that includes one or more discrete boundary elements enclosing at least a portion of the representation of the field-of-view of the one or more cameras in response to detecting the first portion of the first gesture provides visual feedback to a user that the first portion of the first gesture has been detected.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • in response to detecting the second portion of the first gesture, the electronic device translates (e.g., moves, slides, transitions) the boundary (e.g., 608 in FIG. 8F) in a first direction across the display of the display device until at least a portion of the boundary is translated off the display (e.g., off a first edge of the display device) and ceases to be displayed.
  • Translating the boundary in a first direction across the display of the display device until at least a portion of the boundary is translated off the display and ceases to be displayed in response to detecting the second portion of the first gesture provides visual feedback to a user that the first gesture has been (e.g., fully) detected.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • detecting the second portion of the first gesture includes detecting a second contact moving in the first direction.
  • the second contact is detected on the representation of the field-of-view (e.g., on a portion of the representation) of the one or more cameras.
  • a rate at which the boundary is translated is proportional to a rate of movement of the second contact in the first direction (e.g., the boundary moves as the contact moves). The rate of translation being proportional to the rate of movement of the second contact in the first direction provides visual feedback to a user that the translation of the boundary corresponds to the movement of the second contact.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • translating the boundary includes altering a visual appearance (e.g., dimming, as in FIG. 8G) of the at least a portion of the representation (e.g., 630) of the field-of-view of the one or more cameras enclosed by the boundary.
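The boundary behavior above has two parts: the boundary's horizontal offset tracks the contact's movement proportionally, and the enclosed preview's appearance is altered (e.g., dimmed) while the translation is in progress. A hypothetical Swift sketch; the state type, the rate parameter, and the sample deltas are illustrative:

    struct BoundaryState {
        var offsetX = 0.0          // horizontal translation of the boundary
        var previewDimmed = false  // altered appearance while translating
    }

    // Applies an in-progress swipe to the boundary. The translation is
    // proportional to the contact's movement; rate 1.0 moves it with the finger.
    func update(_ state: inout BoundaryState, contactDeltaX: Double, rate: Double = 1.0) {
        state.offsetX += contactDeltaX * rate
        state.previewDimmed = true
    }

    var boundary = BoundaryState()
    update(&boundary, contactDeltaX: -24)  // leftward movement of the contact
    update(&boundary, contactDeltaX: -16)
    print(boundary.offsetX, boundary.previewDimmed) // -40.0 true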
  • the electronic device (e.g., 600) modifies (930) an appearance of the camera control region (e.g., 606), including, in accordance with a determination that the first gesture is a gesture of a second type different from the first type (e.g., a selection of an affordance in the camera control region other than one of the camera mode affordances) (e.g., a gesture at a location other than the first location (e.g., a swipe up on the representation of the field-of-view of the camera)), ceasing to display (934) the plurality of camera mode affordances (e.g., 620) (e.g., a selectable user interface object) and displaying a plurality of camera setting affordances (e.g., 626; affordances that control a camera operation) (e.g., a selectable user interface object) (e.g., affordances for selecting or changing a camera setting (e.g., flash, timer, filter)) at the first location.
  • the gesture of the second type is movement of a contact (e.g., a swipe on the display device) in the camera display region.
  • the camera control region (e.g., 606) further includes an affordance (e.g., a selectable user interface object) for displaying a plurality of camera setting affordances.
  • the gesture of the second type is a selection (e.g., tap) of the affordance for displaying the plurality of camera setting affordances.
  • the electronic device receives a selection of the affordance for displaying one or more camera settings.
  • in response to receiving the request, the electronic device (e.g., 600) ceases to display the one or more camera mode affordances (e.g., 620) or one or more camera setting affordances.
  • displaying the camera user interface further includes displaying an affordance (e.g., 602a) (e.g., a selectable user interface object) that includes a graphical indication of a status of a capture setting (e.g., a flash status indicator).
  • Displaying an affordance that includes a graphical indication of a status of a capture setting enables a user to quickly and easily recognize the status of the capture setting.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the gesture of the second type corresponds to a selection of the indication.
  • the electronic device detects a second gesture on the camera user interface corresponding to a request to display a first representation of previously captured media (e.g., 624, captured before now) (e.g., swipe (e.g., swipe from an edge of the display screen)).
  • in response to detecting the second gesture, the electronic device displays a first representation (e.g., 624) of the previously captured media (e.g., one or more representations of media that are displayed stacked on top of each other). Displaying a first representation of the previously captured media in response to detecting the second gesture enables a user to quickly and easily view the first representation of the previously captured media.
  • the first representation is displayed in the camera control region (e.g., 606).
  • displaying the plurality of camera setting affordances at the first location includes, in accordance with a determination that the electronic device (e.g., 600) is configured to capture media in a first camera mode (e.g., a portrait mode) while the gesture of the second type was detected, displaying a first set of camera setting affordances (e.g., a selectable user interface object) (e.g., lighting effect affordances) at the first location.
  • Displaying a first set of camera setting affordances at the first location in accordance with a determination that the electronic device is configured to capture media in a first camera mode while the gesture of the second type was detected provides a user with a quick and convenient access to the first set of camera setting affordances.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • displaying the plurality of camera setting affordances (e.g., 626) at the first location includes, in accordance with a determination that the electronic device (e.g., 600) is configured to capture media in a second camera mode (e.g., a video mode) that is different than the first camera mode while the gesture of the second type was detected, displaying a second set of camera setting affordances (e.g., a selectable user interface object) (e.g., video effect affordances) at the first location that is different than the first plurality of camera settings.
  • the first set of camera setting affordances includes a first camera setting affordance (e.g., 626a) and the second set of camera setting affordances includes the first camera setting affordance (e.g., 626a, a flash affordance that is included for both portrait mode and video mode).
  • the first camera mode is a still photo capture mode and the first set of camera setting affordances includes one or more affordances selected from the group consisting of: an affordance (e.g., a selectable user interface object) that includes an indication (e.g., a visual indication) corresponding to a flash setting, an affordance (e.g., a selectable user interface object) that includes an indication corresponding to a live setting (e.g., a setting that, when on, creates a moving image (e.g., an image with the file extension of a GIF)).
  • the electronic device receives a selection of the affordance that includes the indication corresponding to the live setting.
  • in some embodiments, in response to receiving selection of the indication, the electronic device turns on/off the live setting. The first set also includes an affordance (e.g., a selectable user interface object) that includes an indication corresponding to an aspect ratio setting.
  • the electronic device receives a selection of the affordance that includes the indication corresponding to the aspect ratio setting; in some embodiments, in response to receiving selection of the indication, the electronic device turns on/off the aspect ratio setting and/or displays an adjustable control to adjust the aspect ratio of a representation (e.g., image, video) displayed on the display device. The first set also includes an affordance (e.g., a selectable user interface object) that includes an indication corresponding to a timer setting.
  • the electronic device receives a selection of the affordance that includes the indication corresponding to the timer setting; in some embodiments, in response to receiving selection of the indication, the electronic device turns on/off the timer setting and/or displays an adjustable control to adjust the time before the image is captured after capture is initiated. The first set also includes an affordance (e.g., a selectable user interface object) that includes an indication corresponding to a filter setting.
  • the electronic device receives a selection of the affordance that includes the indication corresponding to the filter setting; in some embodiments, in response to receiving selection of the indication, the electronic device turns on/off the filter setting and/or displays an adjustable control to adjust the filter that the electronic device uses when capturing an image.
  • selection of the affordance will cause the electronic device (e.g., 600) to set a setting corresponding to the affordance or display a user interface (e.g., options (e.g., slider, affordances)) for setting the setting.
  • the first camera mode is a portrait mode and the first set of camera setting affordances (e.g., 626) includes one or more affordances selected from the group consisting of: an affordance (e.g., a selectable user interface object) that includes an indication corresponding to a depth control setting (in some embodiments, the electronic device receives a selection of the affordance that includes the indication corresponding to the depth control setting; in some embodiments, in response to receiving selection of the indication, the electronic device turns on/off the depth control setting and/or displays an adjustable control to adjust the depth of field to blur the background), an affordance (e.g., a selectable user interface object) that includes a visual indication corresponding to a flash setting (in some embodiments, the electronic device receives a selection of the affordance that includes the indication corresponding to the flash setting; in some embodiments, in response to receiving selection of the indication, the electronic device displays selectable user interface elements to configure a flash setting of the electronic device (e.g., set the flash to automatic, on, or off)), and an affordance (e.g., a selectable user interface object) that includes an indication corresponding to a filter setting.
  • the electronic device receives a selection of the affordance that includes the indication corresponding to the filter setting; in some embodiments, in response to receiving selection of the indication, the electronic device turns on/off the filter setting and/or displays an adjustable control to adjust the filter that the electronic device uses when capturing an image), and an affordance (e.g., a selectable user interface object) that includes an indication corresponding to a lighting setting (in some embodiments, the electronic device receives a selection of the affordance that includes the indication corresponding to the lighting setting; in some embodiments, in response to receiving selection of the indication, the electronic device turns on/off the lighting setting and/or displays an adjustable control to adjust (e.g., increase/decrease the amount of light) a particular light setting (e.g., studio light setting, a stage lighting setting) that the electronic device uses when capturing an image).
  • a particular light setting e.g., studio light setting, a stage lighting setting
  • selection of the affordance will cause the electronic device (e.g., 600) to set a setting corresponding to the affordance or display a user interface (e.g., options (e.g., slider, affordances)) for setting the setting.
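As a concrete illustration of the affordance behavior in the bullets above, here is a minimal Swift sketch in which selecting a camera setting affordance either toggles the setting on/off or surfaces an adjustable control, as described for the aspect ratio, timer, filter, and lighting settings. All type and property names are hypothetical, not from the patent.

```swift
import Foundation

// Hypothetical model: a camera setting affordance whose selection either
// toggles the setting or reveals an adjustable control (e.g., a slider).
enum AffordanceAction {
    case toggled(isOn: Bool)
    case showAdjustableControl(range: ClosedRange<Double>, current: Double)
}

struct CameraSettingAffordance {
    let name: String
    var isOn: Bool
    var adjustableRange: ClosedRange<Double>?  // nil => a simple on/off setting
    var currentValue: Double

    // Receiving a selection of the affordance: flip the setting, or surface
    // an adjustable control for fine-grained values.
    mutating func select() -> AffordanceAction {
        if let range = adjustableRange {
            return .showAdjustableControl(range: range, current: currentValue)
        }
        isOn.toggle()
        return .toggled(isOn: isOn)
    }
}

// Example: a timer affordance reveals a slider for the capture delay, while a
// filter affordance simply toggles.
var timer = CameraSettingAffordance(name: "Timer", isOn: false,
                                    adjustableRange: 0...10, currentValue: 3)
var filter = CameraSettingAffordance(name: "Filter", isOn: false,
                                     adjustableRange: nil, currentValue: 0)
print(timer.select())   // adjustable-control case (range 0...10, current 3)
print(filter.select())  // toggled(isOn: true)
```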
• In some embodiments, while not displaying a representation (e.g., any representation) of previously captured media, the electronic device (e.g., 600) detects (936) capture of first media (e.g., capture of a photo or video) using the one or more cameras. In some embodiments, the capture occurs in response to a tap on a camera activation affordance or a media capturing affordance (e.g., a shutter button). In some embodiments, in response to detecting the capture of the first media, the electronic device (e.g., 600) displays (938) one or more representations of captured media, including a representation of the first media.
• the representation of the media corresponding to the representation of the field-of-view of the one or more cameras is displayed on top of the plurality of representations of the previously captured media. Displaying the representation of the media corresponding to the representation of the field-of-view of the one or more cameras on top of the plurality of representations of the previously captured media enables a user to at least partially view and/or recognize previously captured media while viewing the representation of the media corresponding to the representation of the field-of-view of the one or more cameras.
  • the plurality of representations of the previously captured media are displayed as a plurality of representations that are stacked on top of each other.
• While the electronic device (e.g., 600) is configured to capture media that, when displayed, is displayed with the first aspect ratio, the electronic device receives (940) a third request to capture media. In some embodiments, in response to receiving the third request to capture media, the electronic device (e.g., 600) displays (942) a representation of the captured media with the first aspect ratio. In some embodiments, the electronic device (e.g., 600) receives (944) a request to change the representation of the captured media with the first aspect ratio to a representation of the captured media with a second aspect ratio. In some embodiments, in response to receiving the request, the electronic device (e.g., 600) displays (946) the representation of the captured media with the second aspect ratio. In some embodiments, adjusting the aspect ratio is nondestructive (e.g., the aspect ratio of the captured media can be changed (increased or decreased) after capturing the photo).
  • the representation of the captured media with the second aspect ratio includes visual content (e.g., image content; additional image content within the field-of-view of the one or more cameras at the time of capture that was not included in the representation at the first aspect ratio) not present in the representation of the captured media with the first aspect ratio.
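A rough Swift sketch of the nondestructive aspect-ratio behavior described above: the full captured pixels are retained and the displayed aspect ratio is stored as a crop, so a later change can reveal visual content that was absent from the first representation. Types and sizes are assumed for illustration.

```swift
import CoreGraphics

// Hypothetical types: the device keeps everything the sensor captured and
// stores the displayed aspect ratio as a crop rectangle (metadata).
struct CapturedMedia {
    let fullContentSize: CGSize   // all captured pixels
    var cropRect: CGRect          // the portion currently represented
}

// Re-derive a centered crop for a new aspect ratio from the *full* content,
// not from the previous crop, so the adjustment is nondestructive and can
// reveal content that the first aspect ratio excluded.
func recrop(_ media: inout CapturedMedia, toAspectRatio ratio: CGFloat) {
    let full = media.fullContentSize
    var width = full.width
    var height = width / ratio
    if height > full.height {
        height = full.height
        width = height * ratio
    }
    media.cropRect = CGRect(x: (full.width - width) / 2,
                            y: (full.height - height) / 2,
                            width: width, height: height)
}

// Example: a square crop widened to 4:3 brings previously hidden pixels back.
var photo = CapturedMedia(fullContentSize: CGSize(width: 4032, height: 3024),
                          cropRect: CGRect(x: 504, y: 0, width: 3024, height: 3024))
recrop(&photo, toAspectRatio: 4.0 / 3.0)
print(photo.cropRect)   // (0.0, 0.0, 4032.0, 3024.0)
```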
• In some embodiments, while the electronic device (e.g., 600) is configured to capture media in a third camera mode (e.g., portrait mode), the electronic device detects a second request to capture media. In some embodiments, in response to receiving the second request to capture media, the electronic device (e.g., 600) captures media using the one or more cameras based on settings corresponding to the third camera mode and at least one setting corresponding to an affordance (e.g., a selectable user interface object) (e.g., a lighting effect affordance) of the plurality of camera setting affordances (e.g., 626).
  • Capturing media using the one or more cameras based on settings corresponding to the third camera mode and at least one setting corresponding to an affordance in response to receiving the request while the electronic device is configured to capture media in a third camera mode provides a user with easier control of the camera mode applied to captured media.
  • Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• Methods 2700, 2800, 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 900. For brevity, these details are not repeated below.
  • FIGS. 10A-10K illustrate exemplary user interfaces for displaying a camera field-of- view using an electronic device in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 11A-11C.
  • FIG. 10A illustrates electronic device 600 displaying a live preview 630 that optionally extends from the top of the display to the bottom of the display.
  • Live preview 630 is based on images detected by one or more camera sensors.
  • device 600 captures images using a plurality of camera sensors and combines them to display live preview 630.
  • device 600 captures images using a single camera sensor to display live preview 630.
  • the camera user interface of FIG. 10A includes indicator region 602 and control region 606, which are overlaid on live preview 630 such that indicators and controls can be displayed concurrently with the live preview.
  • Camera display region 604 is substantially not overlaid with indicators or controls.
  • live preview 630 includes a water view 1040 with surrounding environment.
  • Water view 1040 includes a horizon line 1040a that is displayed at an offset by an angle from device 600 because of how the user has oriented device 600.
  • FIGS. 10A-10K include graphical illustration 1060 that provides details about the orientation of device 600 with respect to the horizon line in the corresponding figure.
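As an illustration of the orientation information that graphical illustration 1060 depicts, the following Swift sketch estimates the device's angular offset from the horizon using CoreMotion's gravity vector. This is one plausible way to obtain such an offset, not necessarily the disclosed one.

```swift
import Foundation
import CoreMotion

// Assumed approach: read the gravity vector from CoreMotion and derive the
// roll of the device relative to level. A zero offset means the device's
// edges are parallel to the horizon.
final class HorizonOffsetTracker {
    private let motion = CMMotionManager()
    private(set) var offsetRadians: Double = 0

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 30.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let gravity = data?.gravity else { return }
            // Held upright, gravity is (0, -1, 0) and the offset is 0; tilting
            // away from level shows up as a nonzero x component.
            self?.offsetRadians = atan2(gravity.x, -gravity.y)
        }
    }

    func stop() {
        motion.stopDeviceMotionUpdates()
    }
}
```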
  • the camera user interface of FIG. 10A includes visual boundary 608 that indicates the boundary between indicator region 602 and camera display region 604 and the boundary between camera display region 604 and control region 606.
  • indicator region 602 is overlaid onto live preview 630 and optionally includes a colored (e.g., gray; translucent) overlay.
  • Indicator region 602 includes animated image status indicator 602d, which indicates whether the camera is configured to capture a single image or a plurality of images (e.g., in response to detecting activation of shutter affordance 610).
  • camera display region 604 includes live preview 630 and zoom affordance 622.
  • control region 606 is overlaid onto live preview 630 and optionally includes a colored (e.g., gray; translucent) overlay.
  • control region 606 includes camera mode affordances 620, additional control affordance 614, shutter affordance 610, and camera switcher affordance 612.
• Camera mode affordances 620 indicate which camera mode is currently selected and enable the user to change the camera mode.
  • camera modes 620a-620e are displayed, and ‘Photo’ camera mode 620c is indicated as being the current mode in which the camera is operating by the bolding of the text.
  • Additional control affordance 614 enables the user to access additional camera controls.
• Shutter affordance 610, when activated, causes device 600 to capture media (e.g., a photo) based on the current state of live preview 630.
• the captured media is stored locally at the electronic device and/or transmitted to a remote server for storage.
• Camera switcher affordance 612, when activated, causes the device to switch to showing the field-of-view of a different camera in live preview 630, such as by switching between a rear-facing camera sensor and a front-facing camera sensor.
  • device 600 detects, using a touch-sensitive surface, tap gesture 1050a at a location that corresponds to video camera mode affordance 620b. In response to detecting tap gesture 1050a, device 600 displays the user interface of FIG. 10B.
  • device 600 detects, using the touch-sensitive surface, swipe right gesture 1050b at a location corresponding to live preview 630 in the camera display region 604. In response to detecting swipe right gesture 1050b, device 600 similarly displays the user interface of FIG. 10B.
• The transitions between FIGS. 10A and 10B are described in further detail above with respect to FIGS. 8E-8H.
• In response to detecting tap gesture 1050a or swipe right gesture 1050b, device 600 transitions from the photo camera mode to the video camera mode.
  • Device 600 displays a revised set of indicators in indicator region 602, an (optionally) updated live preview 630, and updated camera mode affordances 620.
  • the revised set of indicators in indicator region 602 includes newly displayed video quality indicator 602h (e.g., because the newly selected mode (video (record) mode) is compatible with the features corresponding to video quality indicator 602h) and newly displayed record time indicator 602i, without displaying previously displayed animated image status indicator 602d (e.g., because the newly selected mode is incompatible with the feature corresponding to live animated image status indicator 602d).
  • Video quality indicator 602h provides an indication of a video quality (e.g., resolution) at which videos will be recorded (e.g., when shutter affordance 610 is activated).
  • video quality indicator 602h indicates that the device is in 4K video quality recording mode and, as a result, when recording is activated the video will be recorded at the 4K video quality.
• record time indicator 602i indicates the amount of time (e.g., in seconds, minutes, and/or hours) of a current ongoing video recording. In FIG. 10B, record time indicator 602i indicates 00:00:00 because no video is currently being recorded.
• In some embodiments, the zoom of objects in live preview 630 changes because of the change in camera mode (photo vs. video mode). In some embodiments, the zoom of objects in live preview 630 does not change despite the change in camera mode (photo vs. video mode). Note that the orientation 1060 of device 600 continues to be offset from the horizon and, as a result, horizon line 1040a continues to be displayed at an offset by an angle from device 600.
  • live preview 630 is updated to no longer be displayed in indicator region 602 and control region 606, while continuing to be displayed in camera display region 604.
  • the backgrounds of indicator region 602 and control region 606 are also updated to be black. As a result, the user can no longer see live preview 630 in indicator region 602 and control region 606.
  • device 600 detects, using the touch-sensitive surface, tap gesture 1050c at a location that corresponds to video quality indicator 602h (in indicator region 602).
• In response to detecting tap gesture 1050c, device 600 displays adjustable video quality control 1018, which includes 720p video quality affordance 1018a, HD video quality affordance 1018b, and 4K video quality affordance 1018c (bolded to indicate 4K video quality recording mode is currently active).
  • device 600 detects, using the touch-sensitive surface, tap gesture 1050d at a location that corresponds to HD video quality affordance 1018b.
• In response to detecting tap gesture 1050d, device 600 transitions (while not actively recording video) from the 4K video quality recording mode to the HD video quality recording mode.
• Device 600 updates video quality indicator 602h (e.g., to say "HD") to indicate that the device is in the HD video quality recording mode.
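A hedged Swift sketch of the adjustable video quality control described above, mapping the 720p/HD/4K affordances to AVCaptureSession presets and refusing changes during an active recording (consistent with the indicator being unavailable while recording). The enum and function names are assumptions.

```swift
import AVFoundation

// Hypothetical mapping from the three affordances of the adjustable video
// quality control to capture-session presets.
enum VideoQuality: String, CaseIterable {
    case q720p = "720p"
    case hd = "HD"
    case fourK = "4K"

    var preset: AVCaptureSession.Preset {
        switch self {
        case .q720p: return .hd1280x720
        case .hd:    return .hd1920x1080
        case .fourK: return .hd4K3840x2160
        }
    }
}

// Applying a selection (e.g., a tap on HD affordance 1018b). The change is
// refused during an active recording; on success, the indicator text would
// update to quality.rawValue (e.g., "HD").
func apply(_ quality: VideoQuality,
           to session: AVCaptureSession,
           isRecording: Bool) -> Bool {
    guard !isRecording, session.canSetSessionPreset(quality.preset) else {
        return false
    }
    session.beginConfiguration()
    session.sessionPreset = quality.preset
    session.commitConfiguration()
    return true
}
```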
  • device 600 displays live preview 630 in indicator region 602, camera display region 604, and control region 606 (similar to FIG. 10A). This indicates to the user that visual content (beyond the visual content displayed in camera display region 604 and, optionally also, beyond visual content displayed in indicator region 602 and control region 606) will be stored as part of a video recording.
• While device 600 is in the HD video quality recording mode and orientation 1060 of device 600 continues to be offset from the horizon (such that horizon line 1040a continues to be displayed at an offset by an angle from device 600), device 600 detects, using the touch-sensitive surface, tap gesture 1050e at a location that corresponds to shutter affordance 610.
• In response to detecting tap gesture 1050e, device 600 begins recording video in the HD video quality recording mode.
  • the content of live preview 630 continues to update as the scene in the field-of-view of the camera(s) changes.
  • Visual elements of shutter affordance 610 have been updated to indicate that the device is recording a video and that re-activating shutter affordance 610 will end the recording.
• Record time indicator 602i has progressed in FIG. 10E to indicate that 5 seconds of video have been recorded thus far.
  • Video quality indicator 602h is no longer displayed, thereby providing the user with a more complete view of live preview 630 and, optionally, because the video quality recording mode cannot be changed while recording video.
  • the orientation 1060 of device 600 continues to be offset from the horizon and, as a result, horizon line 1040a continues to be displayed at an offset by an angle from device 600.
  • orientation 1060 of device 600 varies during the video recording such that horizon line 1040a is recorded with varying degrees of offset from device 600.
• device 600 detects, using the touch-sensitive surface, tap gesture 1050f at a location that corresponds to shutter affordance 610. In response to tap gesture 1050f, device 600 stops the recording.
  • the recording is stored in memory of device 600 for later retrieval, editing, and playback.
  • the stored recording includes visual content of live preview 630 as was displayed in indicator region 602, camera display region 604, and control region 606. Further, the stored recording also includes visual content captured during the video recording by the camera(s) of device 600 that were not displayed as part of live preview 630.
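A minimal sketch of the storage model implied above: the stored recording retains the full captured field-of-view, while default playback is limited to the content that camera display region 604 displayed. Types are hypothetical.

```swift
import CoreGraphics

// Hypothetical model: the recording keeps the full captured field-of-view,
// while default playback shows only what camera display region 604 showed.
struct StoredRecording {
    let fullFrameSize: CGSize     // includes content shown in regions 602/606
                                  // and content never shown in live preview 630
    let displayRegionRect: CGRect // the portion shown in camera display region 604

    // Default playback crop: only the camera-display-region content.
    var playbackRect: CGRect { displayRegionRect }

    // Margin available to later edits such as horizon correction.
    var hiddenMargin: (horizontal: CGFloat, vertical: CGFloat) {
        (fullFrameSize.width - displayRegionRect.width,
         fullFrameSize.height - displayRegionRect.height)
    }
}
```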
• device 600 receives one or more user inputs to access the video recording. As illustrated in FIG. 10F, device 600 displays a frame of video recording 1032, which is available for playback, editing, and deleting.
  • the displayed frame of video recording 1032 includes the visual content of live preview 630 that was displayed in the camera display region 604 during recording, but does not include visual content of live preview 630 that was displayed in indicator region 602 and control region 606.
  • Device 600 overlays playback affordance 1038 onto the displayed frame of video recording 1032.
• Activation of (e.g., a tap on) playback affordance 1038 causes playback affordance 1038 to cease to be displayed and playback of video recording 1032 to occur, which includes visual playback of the visual content of live preview 630 that was displayed in the camera display region 604 during recording, but does not include visual content of live preview 630 that was displayed in indicator region 602 and control region 606 (and also does not include recorded visual content that was not displayed in live preview 630 during the recording).
  • the user interface of FIG. 10F also includes edit affordance 644a (for initiating a process for editing the video recording) and auto adjust affordance 1036b (for automatically editing the video recording).
• device 600 detects, using the touch-sensitive surface, tap gesture 1050g at a location corresponding to edit affordance 644a. As illustrated in FIG. 10G, in response to detecting tap gesture 1050g, device 600 displays video editing options 1060, including affordance 1060a (for cropping and simultaneously rotating the video recording), adjust horizon affordance 1060b (for adjusting the horizon of the recording), affordance 1060c (for cropping the video recording), and affordance 1060d (for rotating the video recording). In some embodiments, cropping the recording merely reduces the visual content for playback (as compared to FIG. 10F) by, for example, further excluding portions of live preview 630 that would otherwise be displayed by activating playback affordance 1038 in FIG. 10F.
  • FIG. 10G also includes representations of visual content that was recorded and stored as part of the video recording but was not displayed as part of the camera display region 604 during the recording. These representations shown outside of device 600 are not part of the user interface of device 600, but are provided for improved understanding.
  • FIG. 10G illustrates that visual content of live preview 630 that was displayed in indicator region 602 and control region 606 is stored as part of the video recording and that some visual content that was not displayed in live preview 630 during the recording is also stored as part of video recording 1032, all of which is available to device 600 for rotating video recording 1032 to correct the offset of the horizon line.
• While displaying video editing options 1060, device 600 detects, using the touch-sensitive surface, tap gesture 1050i at a location corresponding to adjust horizon affordance 1060b. As illustrated in FIG. 10H, in response to detecting tap gesture 1050i, device 600 modifies video recording 1032 such that horizon line 1040a is not displayed at an offset (e.g., is parallel to the top (or bottom) of the display of device 600) by using (e.g., bringing in) visual content that was not displayed in camera display region 604 during video recording and/or was not displayed in live preview 630 during video recording. Activation of done affordance 1036c preserves the modifications made to video recording 1032, while activation of cancel affordance 1036d reverts the modifications made to video recording 1032.
  • device 600 detects, using the touch-sensitive surface, tap gesture 1050h at a location corresponding to auto adjust affordance 1036b.
  • device 600 automatically (and without requiring further user input) modifies video recording 1032 such that horizon line 1040a is not displayed at an offset (e.g., is parallel to the top (or bottom) of the display of device 600) by bringing in visual content that was not displayed in camera display region 604 during video recording and/or was not displayed in live preview 630 during video recording, as shown in FIG. 10H.
  • auto adjustment includes additional adjustments, beyond horizon line correction (e.g., sharpening, exposure correction) that can use visual content that was not displayed in camera display region 604 during video recording and/or was not displayed in live preview 630 during video recording.
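As one way to picture the horizon correction described above, a Swift sketch that counter-rotates the visible crop inside the larger stored frame, succeeding only when the margins captured outside live preview 630 can supply the pixels the rotation needs. The geometry is assumed, not the disclosed algorithm.

```swift
import CoreGraphics

// Assumed geometry: counter-rotate the visible crop about its center inside
// the larger stored frame. The correction succeeds without shrinking the
// crop only if the stored margins can supply the pixels the rotation needs.
func horizonCorrectedCrop(visibleCrop: CGRect,
                          fullFrame: CGRect,
                          offsetRadians: CGFloat) -> CGRect? {
    let center = CGPoint(x: visibleCrop.midX, y: visibleCrop.midY)
    let transform = CGAffineTransform(translationX: center.x, y: center.y)
        .rotated(by: -offsetRadians)            // undo the recorded tilt
        .translatedBy(x: -center.x, y: -center.y)
    // applying(_:) yields the bounding box of the rotated crop.
    let rotatedBounds = visibleCrop.applying(transform)
    return fullFrame.contains(rotatedBounds) ? rotatedBounds : nil
}
```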
  • various user inputs change the magnification of live preview 630.
• device 600 detects, using the touch-sensitive surface, tap gesture 1050j at a location corresponding to zoom affordance 622 and, in response, updates visual elements of zoom affordance 622 and zooms live preview 630 to a predetermined zoom level (e.g., 2X) that is not based on a magnitude of tap gesture 1050j, as shown in FIG. 10J.
• device 600 detects, using the touch-sensitive surface, tap gesture 1050k at a location corresponding to zoom affordance 622 and, in response, updates visual elements of zoom affordance 622 and zooms live preview 630 to a second predetermined zoom level (e.g., 1X) that is not based on a magnitude of tap gesture 1050k, as shown in FIG. 10K.
• device 600 detects, using the touch-sensitive surface, pinch (or de-pinch) gesture 1050l at a location corresponding to live preview 630 in camera display region 604 and, in response, zooms live preview 630 to a zoom level (e.g., 1.7X) that is based on a magnitude of pinch (or de-pinch) gesture 1050l (and, optionally, updates visual elements of zoom affordance 622).
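A small Swift sketch contrasting the two zoom behaviors above: taps on the zoom affordance jump between predetermined levels irrespective of the tap itself, while a pinch produces a magnitude-based continuous zoom. Names and the preset list are assumptions.

```swift
import CoreGraphics

// Hypothetical controller contrasting the two zoom behaviors.
final class ZoomController {
    private let presets: [CGFloat] = [1.0, 2.0]   // assumed preset levels
    private(set) var zoom: CGFloat = 1.0

    // Tap on the zoom affordance: jump to the next predetermined level
    // (2X, then back to 1X), independent of any property of the tap.
    func handleTapOnZoomAffordance() {
        let nextIndex = presets.firstIndex(of: zoom)
            .map { ($0 + 1) % presets.count } ?? 0
        zoom = presets[nextIndex]
    }

    // Pinch: continuous, magnitude-based zoom (e.g., ending at 1.7X),
    // clamped to an assumed supported range.
    func handlePinch(scale: CGFloat, startZoom: CGFloat) {
        zoom = min(max(startZoom * scale, 1.0), 10.0)
    }
}
```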
• FIGS. 11A-11C are a flow diagram illustrating a method for displaying a camera field-of-view using an electronic device in accordance with some embodiments.
  • Method 1100 is performed at a device (e.g., 100, 300, 500, 600) with a display device and one or more cameras (e.g., one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera)).
  • Some operations in method 1100 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
• the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 1100 provides an intuitive way for displaying a camera field-of-view.
  • the method reduces the cognitive burden on a user for displaying a camera field- of-view, thereby creating a more efficient human-machine interface.
• criteria can include a criterion that is satisfied when the device is configured to capture certain media.
  • the camera user interface includes (1108) a first region (e.g., 604) (e.g., a camera display region), the first region including a representation of a first portion of a field-of-view (e.g., 630) of the one or more cameras.
  • the camera user interface includes (1110) a second region (e.g., 606) (e.g., a camera control region), the second region including a representation of a second portion of the field-of- view (e.g., 630) of the one or more cameras.
  • the second portion of the field-of-view of the one or more cameras is visually distinguished (e.g., having a dimmed appearance) (e.g., having a semi-transparent overlay on the second portion of the field-of-view of the one or more cameras) from the first portion.
  • the representation of the second portion of the field-of-view of the one or more cameras has a dimmed appearance when compared to the representation of the first portion of the field-of-view of the one or more cameras.
  • the representation of the second portion of the field-of-view of the one or more cameras is positioned above and/or below the camera display region (e.g., 604) in the camera user interface.
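A rough UIKit sketch of the two-region layout above: the representation of the field-of-view fills the view, and the second portion is visually distinguished with a semi-transparent dimming overlay above and below the first region. Region proportions are invented for illustration.

```swift
import UIKit

// Assumed layout: the field-of-view representation fills the view; the
// second portion (indicator and control regions) is dimmed with
// semi-transparent overlays, visually distinguishing it from the first
// region (the camera display region).
final class CameraRegionsView: UIView {
    private let indicatorOverlay = UIView()   // top second-region overlay
    private let controlOverlay = UIView()     // bottom second-region overlay

    override init(frame: CGRect) {
        super.init(frame: frame)
        for overlay in [indicatorOverlay, controlOverlay] {
            overlay.backgroundColor = UIColor.black.withAlphaComponent(0.4)
            addSubview(overlay)
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is unused here") }

    override func layoutSubviews() {
        super.layoutSubviews()
        let regionHeight = bounds.height * 0.15   // invented proportion
        indicatorOverlay.frame = CGRect(x: 0, y: 0,
                                        width: bounds.width, height: regionHeight)
        controlOverlay.frame = CGRect(x: 0, y: bounds.height - regionHeight,
                                      width: bounds.width, height: regionHeight)
    }
}
```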
• By displaying the camera user interface in response to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are not satisfied, where the camera user interface includes the first region and the second region, the electronic device performs an operation when a set of conditions has been met without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device detects (1112) an input corresponding to a request to capture media (e.g., image data (e.g., still images, video)) with the one or more cameras (e.g., a selection of an image capture affordance (e.g., a selectable user interface object) (e.g., a shutter affordance that, when activated, captures an image of the content displayed in the first region)).
• In response to detecting the input corresponding to a request to capture media (e.g., video, photo) with the one or more cameras, the electronic device (e.g., 600) captures (1114), with the one or more cameras, a media item (e.g., video, photo) that includes visual content corresponding to (e.g., from) the first portion of the field-of-view (e.g., 630) of the one or more cameras and visual content corresponding to (e.g., from) the second portion of the field-of-view of the one or more cameras.
• After capturing the media item, the electronic device (e.g., 600) receives (1116) a request to display the media item.
  • an object tracking (e.g., object identification) operation uses at least a third portion of the visual content from the second portion of the field-of-view of the one or more cameras.
• Performing an object tracking operation (e.g., automatically, without user input) using at least a third portion of the visual content from the second portion of the field-of-view of the one or more cameras after capturing the media item reduces the number of inputs needed to perform an operation, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• In response to receiving the request to display the media item, the electronic device (e.g., 600) displays (1120) a first representation of the visual content corresponding to the first portion of the field-of-view (e.g., 630) of the one or more cameras without displaying a representation of at least a portion of (or all of) the visual content corresponding to the second portion of the field-of-view of the one or more cameras.
  • the captured image data includes the representations of both the first and second portions of the field-of-view (e.g., 630) of the one or more cameras.
  • the representation of the second portion is omitted from the displayed representation of the captured image data, but can be used to modify the displayed representation of the captured image data.
  • the second portion can be used for camera stabilization, object tracking, changing a camera perspective (e.g., without zooming), changing camera orientation (e.g., without zooming), and/or to provide additional image data that can be incorporated into the displayed representation of the captured image data.
• While displaying the first representation of the visual content, the electronic device (e.g., 600) detects (1122) a set of one or more inputs corresponding to a request to modify (e.g., edit) the representation of the visual content. In some embodiments, in response to detecting the set of one or more inputs, the electronic device (e.g., 600) displays (1124) a second (e.g., a modified or edited) representation of the visual content.
• the second representation of the visual content includes visual content from at least a portion of the first portion of the field-of-view of the one or more cameras and visual content based on (e.g., from) at least a portion of the visual content from the second portion of the field-of-view of the one or more cameras that was not included in the first representation of the visual content.
• Displaying the second representation of the visual content in response to detecting the set of one or more inputs enables a user to access visual content from at least the portion of the first portion of the field-of-view of the one or more cameras and visual content based on at least the portion of the visual content from the second portion of the field-of-view of the one or more cameras that was not included in the first representation of the visual content, thus enabling the user to access more of the visual content and/or different portions of the visual content.
  • a second representation of the visual content is generated and displayed in response to an edit operation.
  • the second representation includes at least a portion of the captured visual content that was not included in the first representation.
  • the first representation of the visual content is a representation from a first visual perspective (e.g., visual perspective of one or more cameras at the time the media item was captured, an original perspective, an unmodified perspective).
  • the second representation of the visual content is a representation from a second visual perspective different from the first visual perspective that was generated based on the at least a portion of the visual content from the second portion of the field-of-view of the one or more cameras that was not included in the first representation of the visual content (e.g., changing the representation from the first to the second visual perspective adds or, in the alternative, removes some of visual content corresponding to the second portion).
  • Providing the second representation of the visual content that is a representation from a second visual perspective different from the first visual perspective that was generated based on the at least a portion of the visual content from the second portion of the field-of-view of the one or more cameras that was not included in the first representation of the visual content provides a user with access to and enables the user to view additional visual content.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the first representation of the visual content is a representation in a first orientation (e.g., visual perspective of one or more cameras at the time the media item was captured, an original perspective, an unmodified perspective).
  • the second representation of the visual content is a representation in a second orientation different from the first orientation that was generated based on the at least a portion of the visual content from the second portion of the field-of-view of the one or more cameras that was not included in the first representation of the visual content (e.g., changing the representation from the first to the second orientation (e.g., horizon, portrait, landscape) adds or, in the alternative, removes some of visual content corresponding to the second portion).
  • Providing the second representation of the visual content that is a representation in a second orientation different from the first orientation that was generated based on the at least a portion of the visual content from the second portion of the field-of-view of the one or more cameras that was not included in the first representation of the visual content provides a user with access to and enables the user to view additional visual content.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the first representation is displayed at a first zoom level.
• the first representation of the visual content is a representation at a first zoom level (e.g., visual perspective of one or more cameras at the time the media item was captured, an original perspective, an unmodified perspective).
• the second representation of the visual content is a representation at a second zoom level different from the first zoom level that was generated based on the at least a portion of the visual content from the second portion of the field-of-view of the one or more cameras that was not included in the first representation of the visual content (e.g., changing the representation from the first to the second zoom level adds or, in the alternative, removes some of the visual content corresponding to the second portion).
• the request to change the first zoom level to the second zoom level, received while the device is operating in a portrait capturing mode, corresponds to a selection of a zoom option affordance that is displayed while the device is configured to operate in portrait mode.
• the first representation of the visual content is generated based at least in part on a digital image stabilization operation using at least a second portion of the visual content from the second portion of the field-of-view of the one or more cameras (e.g., using pixels from the visual content corresponding to the second portion in order to stabilize the capture).
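To illustrate the stabilization use of second-portion pixels, a Swift sketch that shifts the visible crop to cancel motion, clamped so the crop never leaves the stored frame; the second portion of the field-of-view supplies the slack. This is an assumed simplification of digital stabilization, not the disclosed implementation.

```swift
import CoreGraphics

// Assumed simplification: shift the visible crop to cancel frame-to-frame
// motion, clamped so the crop never leaves the stored frame. The pixels of
// the second portion of the field-of-view provide the slack.
func stabilizedOrigin(desiredShift: CGVector,
                      visibleCrop: CGRect,
                      fullFrame: CGRect) -> CGPoint {
    let minX = fullFrame.minX
    let maxX = fullFrame.maxX - visibleCrop.width
    let minY = fullFrame.minY
    let maxY = fullFrame.maxY - visibleCrop.height
    return CGPoint(
        x: min(max(visibleCrop.origin.x + desiredShift.dx, minX), maxX),
        y: min(max(visibleCrop.origin.y + desiredShift.dy, minY), maxY)
    )
}
```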
  • the request to display the media item is a first request to display the media item (1126).
• the electronic device receives (1128) a second request to display the media item (e.g., a request to edit the media item; in some embodiments, receiving the second request includes detecting one or more inputs corresponding to a request to display the media item).
• In response to receiving the second request to display the media item (e.g., a request to edit the media item), the electronic device (e.g., 600) displays (1130) the first representation of the visual content corresponding to the first portion of the field-of-view (e.g., 630) of the one or more cameras and the representation of the visual content corresponding to the second portion of the field-of-view of the one or more cameras.
  • the representation of the second portion of the field-of-view (e.g., 630) of the one or more cameras has a dimmed appearance when compared to the representation of the first portion of the field-of-view of the one or more cameras in the displayed media.
• the displayed media has a first region that includes the first representation and a second region that includes the representation of the visual content corresponding to the second portion of the field-of-view (e.g., 630) of the one or more cameras.
• In response to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are satisfied, the electronic device displays (1132), via the display device, a second camera user interface, the second camera user interface including the representation of the first portion of the field-of-view of the one or more cameras without including the representation of the second portion of the field-of-view of the one or more cameras.
• By displaying a second camera user interface that includes the representation of the first portion of the field-of-view of the one or more cameras without including the representation of the second portion of the field-of-view of the one or more cameras in response to receiving the request to display the camera user interface and in accordance with a determination that respective criteria are satisfied, the electronic device performs an operation when a set of conditions has been met without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• In response to detecting input corresponding to a request to capture media, the electronic device (e.g., 600) captures a media item that includes visual content corresponding to the first portion of the field-of-view of the one or more cameras without capturing media corresponding to the second portion of the field-of-view of the one or more cameras.
  • the electronic device receives (1134) a request to display a previously captured media item (e.g., a request to edit the media item).
• In response to receiving the request to display the previously captured media item (1136) (e.g., a request to edit the media item), in accordance with a determination that the previously captured media item was captured when the respective criteria were not satisfied, the electronic device (e.g., 600) displays an indication of additional content (e.g., the indication includes an alert that the media item includes additional content that can be used; when a media item is captured that does include additional content, the indication is displayed).
• By displaying an indication of additional content in response to receiving the request to display the previously captured media item and in accordance with a determination that the previously captured media item was captured when the respective criteria were not satisfied, the electronic device provides a user with additional control options (e.g., for editing the media item), which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• In response to receiving the request to display the previously captured media item (1136) (e.g., a request to edit the media item), in accordance with a determination that the previously captured media item was captured when the respective criteria were satisfied, the electronic device (e.g., 600) forgoes display of (1140) an indication of additional content (e.g., when a media item is captured that does not include additional content, the indication is not displayed).
• the respective criteria include a criterion that is satisfied when the electronic device (e.g., 600) is configured to capture a media item with a resolution of four thousand horizontal pixels or greater.
• the respective criteria include a criterion that is satisfied when the electronic device (e.g., 600) is configured to operate in a portrait mode at a predetermined zoom level (e.g., portrait mode does not include additional content while going between zoom levels (e.g., 0.5x, 1x, 2x zooms)).
• the respective criteria include a criterion that is satisfied when at least one camera (e.g., a peripheral camera) of the one or more cameras cannot maintain a focus (e.g., on one or more objects in the field-of-view) for a predetermined period of time.
  • the input corresponding to the request to capture media with the one or more cameras is a first input corresponding to the request to capture media with the one or more cameras.
  • the electronic device detects a second input corresponding to a request to capture media with the one or more cameras.
• In response to detecting the second input corresponding to the request to capture media with the one or more cameras and in accordance with a determination that the electronic device is configured to capture visual content corresponding to the second portion of the field-of-view of the one or more cameras based on an additional content setting (e.g., 3702a, 3702a2, 3702a3), the electronic device captures the first representation (e.g., displayed in region 604) of the visual content corresponding to the first portion of the field-of-view of the one or more cameras and captures the representation (e.g., displayed in regions 602 and/or 606) of at least the portion of the visual content corresponding to the second portion of the field-of-view of the one or more cameras.
• the electronic device displays a settings user interface that includes an additional content capture setting affordance that, when selected, causes the electronic device to change into or out of a state in which the electronic device automatically, without additional user input, captures the second content in response to a request to capture media.
  • the additional content capture setting is user configurable.
• In response to detecting the second input corresponding to the request to capture media with the one or more cameras and in accordance with a determination that the electronic device is not configured to capture visual content corresponding to the second portion of the field-of-view of the one or more cameras based on the additional content setting, the electronic device captures the first representation of the visual content corresponding to the first portion of the field-of-view of the one or more cameras without capturing the representation of at least the portion of the visual content corresponding to the second portion of the field-of-view of the one or more cameras. In some embodiments, the electronic device forgoes capturing the second portion of the field-of-view of the one or more cameras.
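A hedged Swift sketch of the capture branch governed by the additional content setting: when the setting is enabled, the second-portion content is stored alongside the first; when disabled, only the first portion is kept. The storage key and types are hypothetical.

```swift
import Foundation

// Hypothetical setting and capture types; the storage key is invented.
enum AdditionalContentSetting {
    static let key = "captureAdditionalContent"
    static var isEnabled: Bool {
        get { UserDefaults.standard.bool(forKey: key) }
        set { UserDefaults.standard.set(newValue, forKey: key) }
    }
}

struct Capture {
    let firstPortion: Data        // content shown in region 604
    let secondPortion: Data?      // content from regions 602/606, if kept
}

// Branch on the additional content setting: keep or discard the
// second-portion content when a capture request arrives.
func performCapture(firstPortion: Data, secondPortion: Data) -> Capture {
    if AdditionalContentSetting.isEnabled {
        return Capture(firstPortion: firstPortion, secondPortion: secondPortion)
    }
    return Capture(firstPortion: firstPortion, secondPortion: nil)
}
```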
  • methods 700, 900, 1300, 1500, 1700, 1900, 2000, 2100, 2300, 2500, 2700, 2800, 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 1100. For brevity, these details are not repeated below.
  • FIGS. 12A-12I illustrate exemplary user interfaces for accessing media items using an electronic device in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 13A-13B.
  • device 600 displays home user interface screen 1200 that includes camera launch icon 1202. While displaying home user interface 1200, device 600 detects input 1295a on camera launch icon 1202.
• In response to detecting input 1295a, device 600 displays a user interface that includes indicator region 602, camera display region 604, and control region 606, as seen in FIG. 12B.
  • Indicator region 602 includes a flash indicator 602a and an animated image status indicator 602d that shows that device 600 is currently configured to capture animated images (e.g., capture a predefined number of images in response to a request to capture media).
  • Camera display region 604 includes live preview 630. Live preview 630 is a representation of the field- of-view of one or more cameras of device 600 (e.g., a rear-facing camera).
  • Control region 606 includes media collection 624. Device 600 displays media collection 624 as being stacked and close to device edge 1214.
  • Media collection 624 includes first portion of media collection 1212a (e.g., left half of media collection 624) and second portion of media collection 1212b (e.g., the top representations in the stack of media collection 624).
• When the camera user interface is launched, device 600 automatically, without user input, displays an animation of media collection 624 sliding in from device edge 1214 towards the center of device 600.
• first portion of media collection 1212a is not initially displayed when the animation begins (e.g., only the top representation is initially visible).
• camera control region 606 includes shutter affordance 610.
  • device 600 detects a tap input 1295b on shutter affordance 610 while live preview 630 shows a woman walking across a crosswalk.
  • FIGS. 12C-12F illustrate the capture of animated media in response to input 1295b.
  • live preview 630 shows the woman moving further across the crosswalk and a man having entered the crosswalk.
  • Control region 606 does not include media collection 624, which is not shown while media is being captured.
  • media collection 624 is displayed while capturing media.
  • media collection 624 is displayed with only a single representation (e.g., the top representation of the stack) while capturing media.
  • live preview 630 shows the woman beginning to exit the crosswalk while the man moves further into the crosswalk.
  • Media collection 624 is shown and includes a representation of a first image of the plurality of images captured during the ongoing capture of animated media (e.g., an image captured 0.5 seconds after input 1295b was detected).
  • live preview 630 shows the woman having partially exited the crosswalk and the man in the middle of the crosswalk.
  • Media collection 624 is shown and includes a representation of a second image of the plurality of images captured during the ongoing capture of animated media (e.g., an image captured 1 second after input 1295b was detected).
  • the second image is overlaid over the representation shown in FIG. 12D (e.g., as a stack).
• In FIG. 12F, device 600 has completed capture of the animated media.
  • Media collection 624 now includes, at the top of the stack, a single representation of the captured animated media (e.g., a single representation that is representative of the predefined plurality of captured images) overlaid over other previously captured media (e.g., media other than that captured during the animated media capture operation).
• In response to detecting that media collection 624 has been displayed for a predetermined period of time, device 600 ceases to display first portion of media collection 1212a of media collection 624. As illustrated in FIG. 12G, device 600 maintains display of second portion of media collection 1212b while ceasing to display first portion of media collection 1212a. In some embodiments, ceasing to display first portion of media collection 1212a includes displaying an animation that slides media collection 624 towards device edge 1214. After ceasing to display first portion of media collection 1212a and maintaining display of second portion of media collection 1212b, additional control affordance 614 is displayed in a location previously occupied by media collection 624. In addition, after ceasing to display first portion of media collection 1212a, device 600 detects a swipe input 1295c that moves away from device edge 1214.
• In response to detecting swipe input 1295c, device 600 re-displays first portion of media collection 1212a of media collection 624. After redisplaying first portion of media collection 1212a, device 600 ceases to display additional control affordance 614 because media collection 624 covers the location that additional control affordance 614 occupied. While displaying media collection 624, device 600 detects tap input 1295d on media collection 624.
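A short UIKit sketch of the media collection behavior described in these steps: after a timeout, the collection slides toward the device edge leaving only a sliver (second portion 1212b) visible, and a swipe away from the edge slides it back. Dimensions and durations are assumptions.

```swift
import UIKit

// Assumed geometry and timing: slide the collection toward the edge after a
// timeout so only a sliver (the second portion) stays visible; slide it back
// in response to a swipe away from the edge.
final class MediaCollectionPresenter {
    private let collectionView: UIView
    private let peekWidth: CGFloat = 12   // hypothetical width of the sliver

    init(collectionView: UIView) {
        self.collectionView = collectionView
    }

    // Ceasing to display the first portion: slide toward the edge.
    func slideTowardEdge() {
        let offset = collectionView.bounds.width - peekWidth
        UIView.animate(withDuration: 0.3) {
            self.collectionView.transform =
                CGAffineTransform(translationX: -offset, y: 0)
        }
    }

    // Swipe away from the edge: redisplay the full collection.
    func slideBackIn() {
        UIView.animate(withDuration: 0.3) {
            self.collectionView.transform = .identity
        }
    }
}
```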
• In response to detecting tap input 1295d, device 600 displays enlarged representation 1226 (e.g., a representation of the animated media captured in FIGS. 12B-12F).
  • Representation 1226 corresponds to the small representation displayed at the top of the stack of media collection 624 of FIG. 12H.
• In response to a contact on representation 1226 with a characteristic intensity greater than a threshold intensity or a duration longer than a threshold duration, device 600 plays back the animated media corresponding to representation 1226. While displaying enlarged representation 1226, device 600 detects input 1295e on back affordance 1236.
• In response to detecting input 1295e, device 600 exits the enlarged representation 1226 of the media and displays media collection 624 near device edge 1214. While displaying media collection 624, device 600 detects input 1295f, which is a swipe gesture that moves towards device edge 1214.
• In response to detecting swipe input 1295f, device 600 ceases to display first portion of media collection 1212a of media collection 624 and redisplays additional control affordance 614.
  • FIGS. 13A-13B are a flow diagram illustrating a method for accessing media items using an electronic device in accordance with some embodiments.
  • Method 1300 is performed at a device (e.g., 100, 300, 500, 600) with a display device and one or more cameras (e.g., one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera)).
  • Some operations in method 1300 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
• the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 1300 provides an intuitive way for accessing media items.
  • the method reduces the cognitive burden on a user for accessing media items, thereby creating a more efficient human-machine interface.
  • enabling a user to access media items faster and more efficiently conserves power and increases the time between battery charges.
  • the electronic device displays (1302), via the display device, a camera user interface, the camera user interface including (e.g., displaying concurrently) a camera display region (e.g., 604), the camera display region including a representation (e.g., 630) of a field-of-view of the one or more cameras.
• While displaying the camera user interface, the electronic device (e.g., 600) detects (1304) a request to capture media corresponding to the field-of-view (e.g., 630) of the one or more cameras (e.g., activation of a capture affordance such as a physical camera shutter button or a virtual camera shutter button).
• In response to detecting the request to capture media corresponding to the field-of-view (e.g., 630) of the one or more cameras, the electronic device (e.g., 600) captures (1306) media corresponding to the field-of-view of the one or more cameras and displays a representation of the captured media. The electronic device (e.g., 600) detects (1308) that the representation of the captured media has been displayed for a predetermined period of time.
  • the predetermined amount of time is initiated in response to an event (e.g., capturing an image, launching the camera application, etc.).
• the length of the predetermined amount of time is determined based on the detected event. For example, if the event is capturing image data of a first type (e.g., a still image), the predetermined amount of time is a fixed amount of time (e.g., 0.5 seconds), and if the event is capturing image data of a second type (e.g., a video), the predetermined amount of time corresponds to the amount of image data captured (e.g., the length of the captured video).
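The duration rule above translates directly into a small Swift helper: a fixed preview time for a still capture and a preview time equal to the captured video's length otherwise. The 0.5-second value comes from the example in the text; the names are hypothetical.

```swift
import Foundation

// The rule: fixed display time for a still, captured length for a video.
enum CaptureEvent {
    case stillImage
    case video(duration: TimeInterval)
}

func previewDisplayDuration(for event: CaptureEvent) -> TimeInterval {
    switch event {
    case .stillImage:
        return 0.5                  // fixed display time for a still capture
    case .video(let duration):
        return duration             // matches the captured video's length
    }
}
```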
• the electronic device detects (1310) user input corresponding to a request to display an enlarged representation of the captured media (e.g., user input corresponding to a selection (e.g., a tap) on the representation of the captured media).
• In response to detecting the user input corresponding to the selection of the representation of the captured media, the electronic device (e.g., 600) displays (1312), via the display device, an enlarged representation of the captured media.
  • the representation of the captured media is displayed at a fifth location on the display.
• In some embodiments, after ceasing to display at least a portion of the representation of the captured media while maintaining display of the camera user interface, the electronic device (e.g., 600) displays an affordance (e.g., a selectable user interface object) for controlling a plurality of camera settings.
  • Displaying an affordance for controlling a plurality of camera settings after ceasing to display at least a portion of the representation of the captured media while maintaining display of the camera user interface provides a user with easily accessible and usable control options.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• capturing media (e.g., a video, a moving image (e.g., a live photo)) corresponding to the field-of-view (e.g., 630) of the one or more cameras includes capturing a sequence of images.
• By capturing (e.g., automatically, without additional user input) a sequence of images when capturing media corresponding to the field-of-view of the one or more cameras, the electronic device provides improved feedback, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • displaying the representation of the captured media includes playing at least a portion of the captured sequence of images that includes at least two images (e.g., video, photo).
  • the captured video is looped for a predetermined period of time.
  • the predetermined period of time is based on (e.g., equal to) a duration of the captured sequence of images.
  • the representation of the captured media ceases to be displayed after playback of the video media is completed.
• In response to detecting that the representation (e.g., 1224) of the captured media has been displayed for the predetermined period of time, the electronic device (e.g., 600) ceases to display (1314) at least a portion of the representation of the captured media while maintaining display of the camera user interface. Ceasing to display at least a portion of the representation of the captured media while maintaining display of the camera user interface in response to detecting that the representation of the captured media has been displayed for the predetermined period of time reduces the number of inputs needed to perform an operation, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• ceasing to display the representation of the captured media includes displaying an animation of the representation of the captured media moving off the camera control region (e.g., once the predetermined amount of time expires, the image preview slides off-screen (e.g., to the left) in an animation).
• the portion of the representation of the captured media is a first portion of the representation of the captured media.
• ceasing to display at least the first portion of the representation of the captured media while maintaining display of the camera user interface further includes maintaining display of at least a second portion of the representation of the captured media (e.g., an edge of the representation sticks out near an edge of the user interface (e.g., an edge of the display device or of the screen on the display device)).
  • the representation of the captured media is displayed at a first location on the display.
• ceasing to display at least the first portion of the representation of the captured media while maintaining display of the camera user interface further includes displaying an animation that moves (e.g., slides) the representation of the captured media from the first location on the display towards a second location on the display that corresponds to an edge of the display device (e.g., the animation shows the representation sliding towards the edge of the camera user interface). Displaying an animation that moves the representation of the captured media from the first location on the display towards a second location on the display that corresponds to an edge of the display device when ceasing to display at least the first portion of the representation of the captured media while maintaining display of the camera user interface provides a user with visual feedback that at least the first portion of the representation is being removed from display.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
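A minimal UIKit sketch of the slide-toward-the-edge animation described above, keeping a small second portion of the preview visible at the edge; `previewView`, the 0.3-second duration, and the 8-point peek width are assumptions for illustration:

```swift
import UIKit

// Slides the preview thumbnail toward the left edge of the screen,
// leaving a small portion visible, per the behavior described above.
func slidePreviewTowardEdge(_ previewView: UIView) {
    let peekWidth: CGFloat = 8 // second portion that remains visible
    let targetX = -previewView.bounds.width + peekWidth
    UIView.animate(withDuration: 0.3) {
        previewView.frame.origin.x = targetX
    }
}
```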
  • the representation of the captured media is displayed at a third location on the display.
• the electronic device (e.g., 600) detects user input (e.g., a swipe gesture towards the edge of the display device) corresponding to a request to cease display of at least a portion of the second representation of the captured media while maintaining display of the camera user interface.
• in response to detecting the request to cease display of at least a portion of the second representation, the electronic device ceases to display at least a portion of the second representation of the captured media while maintaining display of the camera user interface.
• after ceasing to display the first portion of the representation, the electronic device (e.g., 600) receives (1316) user input corresponding to movement of a contact from a fourth location on the display that corresponds to an edge of the display device to a fifth location on the display that is different from the fourth location (e.g., a swipe in from the edge of the display) (e.g., user input corresponding to a request to display (or redisplay) the representation (or preview)).
• in response to receiving user input corresponding to movement of the contact from the fourth location on the display that corresponds to the edge of the display device to the fifth location on the display, the electronic device (e.g., 600) re-displays (1318) the first portion of the representation.
  • Re-displaying the first portion of the representation in response to receiving user input corresponding to movement of the contact from the fourth location on the display that corresponds to the edge of the display device to the fifth location on the display enables a user to quickly and easily cause the electronic device to re-display the first portion of the representation.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
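One plausible way to detect the edge swipe described above is UIKit's screen-edge pan gesture recognizer; the sketch below is an assumption about how this could be wired up, with `redisplayPreview()` as a hypothetical helper:

```swift
import UIKit

final class PreviewEdgeSwipeHandler: NSObject {
    // Attaches a left-screen-edge pan recognizer to the camera UI's view.
    func attach(to view: UIView) {
        let edgePan = UIScreenEdgePanGestureRecognizer(
            target: self, action: #selector(handleEdgePan(_:)))
        edgePan.edges = .left // the edge the preview slid toward
        view.addGestureRecognizer(edgePan)
    }

    @objc private func handleEdgePan(_ gesture: UIScreenEdgePanGestureRecognizer) {
        if gesture.state == .ended {
            redisplayPreview()
        }
    }

    // Hypothetical helper: animates the first portion of the
    // representation back on screen from the edge.
    private func redisplayPreview() { /* ... */ }
}
```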
  • the electronic device receives (1320) a request to redisplay the camera user interface.
• in response to receiving the request to redisplay the camera user interface, the electronic device displays (1322) (e.g., automatically displays) a second instance of the camera user interface that includes (e.g., automatically includes) a second representation of captured media.
• the second representation of captured media is displayed via an animated sequence of the representation translating onto the UI from an edge of the display.
• while displaying the representation of the captured media, the electronic device receives a user input corresponding to a request to display options to share the captured media. In some embodiments, in response to receiving the user input corresponding to the request to display options to share the captured media, the electronic device displays a user interface for sharing the captured media. In some embodiments, the user interface for sharing the captured media includes a plurality of options to share the captured media. [486] Note that details of the processes described above with respect to method 1300 (e.g., FIGS. 13A-13B) are also applicable in an analogous manner to the methods described above and below.
  • methods 700, 900, 1100, 1500, 1700, 1900, 2000, 2100, 2300, 2500, 2700, 2800, 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 1300. For brevity, these details are not repeated below.
  • FIGS. 14A-14U illustrate exemplary user interfaces for modifying media items using an electronic device in accordance with some embodiments.
• the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 15A-15C.
  • FIGS. 14A-14D illustrate the process by which device 600 is configured to capture media using different aspect ratios.
• device 600 displays live preview 630, which is a representation of the field-of-view of one or more cameras.
  • Live preview 630 includes visual portion 1404 and dimmed portion 1406.
• Visual boundary 608 is between visual portion 1404 and dimmed portion 1406 and is visually displayed on device 600.
• Visual boundary 608 includes predefined input locations 1410A-1410D at the corners of visual boundary 608.
  • Visual portion 1404 is a visual indication of media that will be captured and displayed to the user in response to a request to capture media.
  • visual portion 1404 is a visual indication of the portion of the representation of media that is typically displayed when media is captured and represented.
  • Dimmed portion 1406 is a visual indication of the portion of the media that is not typically displayed after media is captured and represented.
  • Visual portion 1404 is visually distinguished from dimmed portion 1406. Specifically, visual portion 1404 is not shaded while dimmed portion 1406 is shaded.
  • device 600 displays zoom affordance 622.
  • FIGS. 14A-14D show various portions of an overall input 1495A.
  • Overall input 1495A changes the aspect ratio corresponding to visual portion 1404 from four-by-three aspect ratio 1400 (e.g., a 4:3 aspect ratio corresponding to visual portion 1404) to a new aspect ratio.
  • Overall input 1495A includes input portion 1495A1 and input portion 1495A2.
• Input portion 1495A1, corresponding to a stationary component of the input, is the first portion of overall input 1495A, and input portion 1495A2, corresponding to a moving component of the input, is a second portion of overall input 1495A.
• while device 600 is configured to capture media with four-by-three aspect ratio 1400, the device detects input portion 1495A1 at location 1410A, corresponding to the upper-right corner of visual boundary 608.
• device 600 has determined that input portion 1495A1 has been maintained at location 1410A for a predetermined period of time (e.g., a non-zero length of time, 0.25 seconds, 0.5 seconds). As illustrated in FIG. 14B, in accordance with this determination, device 600 shrinks the area enclosed by visual boundary 608. In some embodiments, shrinking the area enclosed by visual boundary 608 provides an indication that the visual boundary can now be modified (e.g., using further movement of the input). Reducing the area enclosed by visual boundary 608 reduces the area of visual portion 1404 and increases the area of dimmed portion 1406. In some embodiments, device 600 displays an animation of visual boundary 608 shrinking and dimmed portion 1406 expanding into the area that visual boundary 608 left vacant.
• In addition to shrinking the area enclosed by visual boundary 608, device 600 generates tactile output 1412A and ceases to display zoom affordance 622. After detecting input portion 1495A1, device 600 detects input portion 1495A2 of overall input 1495A moving in a downward direction, away from location 1410A.
• in response to detecting input portion 1495A2, device 600 moves or translates visual boundary 608 from its original position to a new position based on a characteristic (e.g., a magnitude and/or direction) of input portion 1495A2.
• Device 600 displays visual boundary 608 at the new position. While displaying visual boundary 608 at the new position, device 600 detects lift off of overall input 1495A.
• in response to detecting lift off of input 1495A, device 600 expands visual boundary 608, increasing the size of visual boundary 608 to square aspect ratio 1416 (e.g., a square aspect ratio corresponding to visual portion 1404).
  • Square aspect ratio 1416 is a predetermined aspect ratio. Because device 600 determined that input portion 1495A2 resulted in visual boundary 608 having a final position within a predetermined proximity to the predetermined square aspect ratio, device 600 causes the visual boundary to snap to the square aspect ratio 1416. In response to detecting lift off of overall input 1495A, device 600 also generates tactile output 1412B and redisplays zoom affordance 622. In addition, device 600 displays aspect ratio status indicator 1420 to indicate that device 600 is configured to capture media of square aspect ratio 1416.
• in accordance with input portion 1495A2 not having a final position within a predetermined proximity to the predetermined square aspect ratio (or any other predetermined aspect ratio), visual boundary 608 will be displayed based on the magnitude and direction of input portion 1495A2 and not at a predetermined aspect ratio. In this way, users can set a custom aspect ratio or readily select a predetermined aspect ratio.
  • device 600 displays an animation of visual boundary 608 expanding.
  • device 600 displays an animation of visual boundary 608 snapping into the predetermined aspect ratio.
• tactile output 1412B is provided when visual boundary 608 snaps into a predetermined aspect ratio (e.g., aspect ratio 1416).
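The snap-on-lift-off behavior walked through above amounts to a proximity test against a list of predetermined aspect ratios. The following Swift sketch is one possible formulation; the candidate list, the 0.05 tolerance, and the function name are illustrative assumptions:

```swift
import CoreGraphics

// Predetermined aspect ratios: 4:3, square, 16:9.
let predeterminedRatios: [CGFloat] = [4.0 / 3.0, 1.0, 16.0 / 9.0]

// On lift-off, snap the dragged boundary to the nearest predetermined
// ratio if it is within tolerance; otherwise keep the custom ratio.
func snappedAspectRatio(for boundary: CGSize, tolerance: CGFloat = 0.05) -> CGFloat {
    let dragged = boundary.width / boundary.height
    if let nearest = predeterminedRatios.min(by: { abs($0 - dragged) < abs($1 - dragged) }),
       abs(nearest - dragged) <= tolerance {
        return nearest // snap; also a natural point to generate a tactile output
    }
    return dragged     // custom aspect ratio based on the drag's magnitude/direction
}
```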
• device 600 detects input portion 1495B1 of overall input 1495B at predefined input location 1410B, corresponding to the lower-right corner of visual boundary 608.
• Input portion 1495B1 is a contact that is maintained for at least a predetermined time at location 1410B.
• in response to detecting input portion 1495B1, device 600 performs techniques similar to those discussed in FIG. 14B. In particular, device 600 shrinks the area enclosed by visual boundary 608 and generates tactile output 1412C.
• Device 600 also detects input portion 1495B2 of overall input 1495B, which is a drag moving in a downward direction away from location 1410B.
• in response to detecting movement of input portion 1495B2, device 600 moves or translates visual boundary 608 from its original position to a new position based on a characteristic (e.g., magnitude and/or direction) of input portion 1495B2. While moving visual boundary 608 to the new position, device 600 detects that visual boundary 608 is in four-by-three aspect ratio 1418. In response to detecting that visual boundary 608 is in four-by-three aspect ratio 1418, without detecting lift off of input 1495B, device 600 issues tactile output 1412D.
  • device 600 maintains display of aspect ratio status indicator 1420 that indicates that device 600 is configured to capture media of square aspect ratio 1416 and forgoes updating aspect ratio status indicator 1420 to indicate that device 600 is configured to capture media of aspect ratio 1418 (e.g., 4:3), since overall input 1495B is still being maintained without lift off.
  • device 600 continues to detect input portion 1495B2.
• Visual boundary 608 now has aspect ratio 1421 and has moved from its position illustrated in FIG. 14G to a new position. While displaying visual boundary 608 at the new position, device 600 detects lift off of overall input 1495B.
• in response to detecting lift off of input 1495B, device 600 performs techniques similar to those discussed in FIG. 14D in relation to the response to a detection of lift off of input 1495A. In particular, as illustrated in FIG. 14I, device 600 expands visual boundary 608 to predetermined sixteen-by-nine aspect ratio 1422. In addition, device 600 redisplays zoom affordance 622 and updates aspect ratio status indicator 1420 to indicate that device 600 is configured to capture media of sixteen-by-nine aspect ratio 1422 (e.g., 16:9). In some embodiments, device 600 generates a tactile output in response to lift off of input 1495B.
• device 600 detects input 1495C (e.g., a continuous upward swipe gesture) on predefined input location 1410B that corresponds to a corner of visual boundary 608.
• Device 600 determines that input 1495C has not been maintained on predefined input location 1410B for a predetermined period of time (e.g., the same predetermined time discussed with respect to FIG. 14B).
• in response to input 1495C, device 600 displays camera setting affordances 624 in accordance with the techniques described for displaying camera setting affordances 802 in FIGS. 8A-8B above.
• Device 600 does not, however, adjust visual boundary 608 in response to input 1495C because input 1495C did not include a stationary contact at location 1410B, corresponding to a corner of visual boundary 608.
  • camera setting affordances 624 and camera setting affordances 802 are the same.
• While displaying camera setting affordances 624, device 600 detects input 1495D on aspect ratio control 1426.
• Adjustable aspect ratio controls 1470 include aspect ratio options 1470A-1470D. As shown in FIG. 14L, aspect ratio option 1470C is bolded and selected, which matches the status indicated by aspect ratio status indicator 1420. While displaying adjustable aspect ratio controls 1470, device 600 detects input 1495E on aspect ratio option 1470B.
• in response to detecting input 1495E, device 600 updates visual boundary 608 and visual portion 1404 from the sixteen-by-nine aspect ratio to the four-by-three aspect ratio.
  • device 600 detects input 1495F, which is a downward swipe in the live preview 630.
• in response to detecting input 1495F, device 600 ceases to display camera setting affordances 624 in accordance with the techniques described above in FIGS. 8Q-8R.
• device 600 detects input 1495G, which is a tap gesture at predefined input location 1410A corresponding to the upper-right corner of visual boundary 608.
• device 600 determines that input 1495G has not been maintained on predefined input location 1410A for a predetermined period of time. Device 600 does not adjust visual boundary 608 in response to input 1495G because input 1495G did not meet the conditions for adjusting the visual boundary. In response to input 1495G, device 600 updates live preview 630 and adjusts image capture settings by adjusting the focus and exposure settings based on the location of tap input 1495G. As illustrated in FIG. 14O, visual portion 1404 appears blurrier and out of focus due to the updated focus and exposure settings.
  • device 600 detects input portion 1495H1 of overall input 1495H on a location in live preview 630 (e.g., a location that is not one of the corners 1410A-1410D of visual boundary 608).
  • Overall input 1495H includes a first contact, followed by a lift-off, and then a second contact.
  • Input portion 1495H1 is a stationary contact (e.g., the first contact of overall input 1495H) that is maintained for more than a predetermined period of time (e.g., is maintained for at least the same period of time as input portion 1495A1 of FIG. 14B).
• in response to detecting input portion 1495H1, device 600 activates an exposure lock function that updates the live preview and updates the capture settings based on light values at the location of input portion 1495H1. Device 600 also displays exposure setting manipulator 1428.
• device 600 detects input portion 1495H2 of overall input 1495H, which is a dragging movement performed with the second contact of overall input 1495H.
  • device 600 updates the exposure setting manipulator 1428 to a new value based on a characteristic (e.g., magnitude and/or direction) of input portion 1495H2.
  • device 600 maintains display of exposure setting manipulator 1428.
• Device 600 also detects input 1495I, which is a horizontal swipe starting from predefined input location 1410A, which is the upper-right corner of visual boundary 608.
• in response to detecting input 1495I, device 600 changes the camera mode in accordance with techniques similar to those discussed in FIGS. 8D-8H. Device 600 does not, however, adjust visual boundary 608 in response to input 1495I because input 1495I did not include a stationary contact component that was detected for a predetermined period of time at predefined input location 1410A, corresponding to a corner of visual boundary 608.
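Taken together, FIGS. 14A-14U distinguish four outcomes from touch input on the live preview: hold-then-drag on a boundary corner adjusts the visual boundary, a short tap sets focus and exposure at the tap location, a held contact away from the corners locks exposure, and a swipe with no initial hold changes mode or reveals settings. A hedged Swift sketch of that disambiguation, with all names being illustrative assumptions:

```swift
import CoreGraphics

enum CameraGestureAction {
    case adjustVisualBoundary
    case setFocusAndExposure(at: CGPoint)
    case lockExposure(at: CGPoint)
    case switchModeOrShowSettings
}

// Classifies a completed touch based on where it began, whether it was
// held past the threshold, and whether it moved.
func classify(contactPoint: CGPoint,
              heldForThreshold: Bool,
              moved: Bool,
              isOnBoundaryCorner: Bool) -> CameraGestureAction {
    switch (isOnBoundaryCorner, heldForThreshold, moved) {
    case (true, true, true):
        return .adjustVisualBoundary                  // e.g., overall inputs 1495A/1495B
    case (true, false, false):
        return .setFocusAndExposure(at: contactPoint) // e.g., tap input 1495G
    case (false, true, _):
        return .lockExposure(at: contactPoint)        // e.g., input portion 1495H1
    default:
        return .switchModeOrShowSettings              // e.g., inputs 1495C and 1495I
    }
}
```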
  • FIGS. 15A-15C are a flow diagram illustrating a method for modifying media items using an electronic device in accordance with some embodiments.
  • Method 1500 is performed at a device (e.g., 100, 300, 500, 600) with a display device and one or more cameras (e.g., one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera)).
  • Some operations in method 1500 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
• the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 1500 provides an intuitive way for modifying media items.
  • the method reduces the cognitive burden on a user for modifying media items, thereby creating a more efficient human-machine interface.
  • enabling a user to modify media items faster and more efficiently conserves power and increases the time between battery charges.
  • the electronic device displays (1502), via the display device, a camera user interface, the camera user interface including (e.g., displaying concurrently) a camera display region (e.g., 604), the camera display region including a representation (e.g., 630) of a field-of-view of the one or more cameras.
  • the camera user interface further comprises an indication that the electronic device (e.g., 600) is configured to operate in a first media capturing mode.
• the electronic device (e.g., 600) displays a control (e.g., a slider) for adjusting a property (e.g., a setting) associated with a media capturing operation.
• Displaying the control for adjusting a property associated with a media capturing operation in accordance with detecting a fourth input including detecting continuous movement of a fourth contact in a second direction enables a user to quickly and easily access the control.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• while displaying the control for adjusting the property associated with a media capturing operation, the electronic device (e.g., 600) displays a first indication (e.g., a number, a slider knob (e.g., bar) on a slider track) of a first value of the property (e.g., amount of light, a duration, etc.).
• the electronic device replaces display of the first indication of the first value of the property with display of a second indication of a second value of the property.
  • the value of the property is displayed when set. In some embodiments, the value of the property is not displayed.
• While the electronic device (e.g., 600) is configured to capture media with a first aspect ratio (e.g., 1400) in response to receiving a request to capture media (e.g., in response to activation of a physical camera shutter button or activation of a virtual camera shutter button), the electronic device detects (1504) a first input (e.g., a touch and hold) including a first contact at a respective location on the representation of the field-of-view of the one or more cameras (e.g., a location that corresponds to a corner of the camera display region).
• In response to detecting the first input (1506), in accordance with a determination that a set of aspect ratio change criteria is met, the electronic device (e.g., 600) configures (1508) the electronic device to capture media with a second aspect ratio (e.g., 1416) that is different from the first aspect ratio in response to a request to capture media (e.g., in response to activation of a physical camera shutter button or activation of a virtual camera shutter button).
• the set of aspect ratio change criteria includes a criterion that is met when the first input includes maintaining the first contact at a first location corresponding to a predefined portion (e.g., a corner) of the camera display region that indicates at least a portion of a boundary of the media that will be captured in response to a request to capture media (e.g., activation of a physical camera shutter button or activation of a virtual camera shutter button) for at least a threshold amount of time, followed by detecting movement of the first contact to a second location different from the first location (1510).
• By configuring the electronic device to capture media with a second aspect ratio that is different from the first aspect ratio in response to a request to capture media and in accordance with a determination that a set of aspect ratio change criteria is met, the electronic device performs an operation when a set of conditions has been met without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• in response to detecting at least a first portion of the first input, in accordance with a determination that the first portion of the first input includes maintaining the first contact at the first location for at least the threshold amount of time, the electronic device (e.g., 600) provides (1512) a first tactile (e.g., haptic) output.
• Providing the first tactile output in accordance with a determination that the first portion of the first input includes maintaining the first contact at the first location for at least the threshold amount of time provides feedback to a user that the first contact has been maintained at the first location for at least the threshold amount of time.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• in response to detecting at least a second portion of the first input, in accordance with a determination that a second portion of the first input includes maintaining the first contact at the first location for at least the threshold amount of time, the electronic device (e.g., 600) displays (1514) a visual indication of the boundary (e.g., 1410) of the media (e.g., a box) that will be captured in response to a request to capture media.
  • Displaying the visual indication of the boundary of the media that will be captured in accordance with a determination that a second portion of the first input includes maintaining the first contact at the first location for at least the threshold amount of time provides visual feedback to a user of the portion of the media that will be captured.
• Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• in response to detecting movement of the first contact to the second location having a first magnitude and a first direction, the electronic device modifies (1516) the appearance of the visual indication based on the first magnitude and the first direction (e.g., adjusting the visual indication to show changes to the boundary of the media that will be captured).
• in response to detecting at least a first portion of the first input, in accordance with a determination that the first portion of the first input includes maintaining the first contact at the first location for at least the threshold amount of time, the electronic device (e.g., 600) displays (1518) an animation that includes reducing a size of a portion of the representation of the field-of-view of the one or more cameras that is indicated by the visual indication (e.g., an animation of the boundary being pushed back (or shrinking)).
  • Displaying an animation that includes reducing a size of a portion of the representation of the field-of-view of the one or more cameras that is indicated by the visual indication in accordance with a determination that the first portion of the first input includes maintaining the first contact at the first location for at least the threshold amount of time provides visual feedback to a user that the size of the portion of the representation is being reduced while also enabling the user to quickly and easily reduce the size.
  • Providing improved visual feedback and additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device displays (1520) an animation (e.g., expanding) that includes increasing a size of a portion of the representation of the field-of-view of the one or more cameras that is indicated by the visual indication (e.g., expanding the first boundary box at a first rate (e.g., rate of expansion)).
  • a first portion of the representation of the field-of-view of the one or more cameras is indicated as selected by the visual indication (e.g., 1410) of the boundary of the media (e.g., enclosed in a boundary (e.g., box)) and a second portion of the representation of the field-of-view of the one or more cameras is not indicated as selected by the visual indication of the boundary of the media (e.g., outside of the boundary (e.g., box)).
• Indicating the first portion as being selected by the visual indication of the boundary of the media and not indicating the second portion as being selected by the visual indication of the boundary of the media enables a user to quickly and easily visually distinguish the portions of the representation that are and are not selected.
  • the second portion is visually distinguished (e.g., having a dimmed or shaded appearance) (e.g., having a semi-transparent overlay on the second portion of the field-of-view of the one or more cameras) from the first portion.
  • configuring the electronic device (e.g., 600) to capture media with a second aspect ratio includes, in accordance with the movement of the first contact to the second location having a first magnitude and/or direction of movement (e.g., a magnitude and direction) that is within a first range of movement (e.g., a range of vectors that all correspond to a predetermined aspect ratio), configuring the electronic device to capture media with a predetermined aspect ratio (e.g., 4:3, square, 16:9).
  • configuring the electronic device (e.g., 600) to capture media with a second aspect ratio includes, in accordance with the movement of the first contact to the second location having a second magnitude and/or direction of movement (e.g., a magnitude and direction) that is not within the first range of movement (e.g., a range of vectors that all correspond to a predetermined aspect ratio), configuring the electronic device to capture media with an aspect ratio that is not predetermined (e.g., a dynamic aspect ratio) and that is based on the second magnitude and/or direction of movement (e.g., based on a magnitude and/or direction of the movement).
  • configuring the electronic device (e.g., 600) to capture media with the predetermined aspect ratio includes generating, via one or more tactile output devices, a second tactile (e.g., haptic) output.
  • Generating the second tactile output when configuring the electronic device to capture media with the predetermined aspect ratio provides feedback to a user of the aspect ratio setting.
  • Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• each camera mode (e.g., video, photo/still, portrait, slow-motion, and panoramic modes) has a plurality of settings (e.g., for a portrait camera mode: a studio lighting setting, a contour lighting setting, a stage lighting setting) with multiple values (e.g., levels of light for each setting) of the mode (e.g., portrait mode) that a camera (e.g., a camera sensor) is operating in to capture media (including post-processing performed automatically after capture).
  • camera modes are different from modes which do not affect how the camera operates when capturing media or do not include a plurality of settings (e.g., a flash mode having one setting with multiple values (e.g., inactive, active, auto)).
• camera modes allow a user to capture different types of media (e.g., photos or video), and the settings for each mode can be optimized to capture a particular type of media corresponding to a particular mode (e.g., via post-processing) that has specific properties (e.g., shape (e.g., square, rectangle), speed (e.g., slow motion, time elapse), audio, video).
• when the electronic device is configured to operate in a still photo mode, the one or more cameras of the electronic device, when activated, capture media of a first type (e.g., rectangular photos) with particular settings (e.g., flash setting, one or more filter settings); when the electronic device is configured to operate in a square mode, the one or more cameras of the electronic device, when activated, capture media of a second type (e.g., square photos) with particular settings (e.g., flash setting and one or more filters); when the electronic device is configured to operate in a slow motion mode, the one or more cameras of the electronic device, when activated, capture media of a third type (e.g., slow motion videos) with particular settings (e.g., flash setting, frames per second capture speed); when the electronic device is configured to operate in a portrait mode, the one or more cameras of the electronic device capture media of a fifth type (e.g., portrait photos (e.g., photos with blurred backgrounds)).
• the display of the representation of the field-of-view changes to correspond to the type of media that will be captured by the mode (e.g., the representation is rectangular while the electronic device is operating in a still photo mode and the representation is square while the electronic device is operating in a square mode).
• in response to detecting the first input, in accordance with a determination that the first input does not include maintaining the first contact at the first location for the threshold amount of time and a determination that the first input includes movement of the first contact that exceeds a first movement threshold (e.g., the first input is a swipe across a portion of the display device without an initial pause), the electronic device (e.g., 600) configures the electronic device to capture media using a second camera mode different from the first camera mode.
• configuring the electronic device to use the second camera mode includes displaying an indication that the device is configured to use the second camera mode.
• in response to detecting the first input, in accordance with a determination that the first input (e.g., a touch for a short period of time on a corner of the boundary box) includes detecting the first contact at the first location for less than the threshold amount of time (e.g., detecting a request for setting a focus), the electronic device (e.g., 600) adjusts (1522) a focus setting, including configuring the electronic device to capture media with a focus setting based on content at a location in the field-of-view of the one or more cameras that corresponds to the first location.
  • Adjusting a focus setting in accordance with a determination that the first input includes detecting the first contact at the first location for less than the threshold amount of time reduces the number of inputs needed to perform an operation, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• in response to detecting the first input, in accordance with a determination that the first input (e.g., a touch for a long period of time anywhere on the representation that is not a corner of the boundary box) includes maintaining the first contact for a second threshold amount of time at a third location (e.g., a location that is not the first location) that does not correspond to a predefined portion (e.g., a corner) of the camera display region (e.g., 604) that indicates at least the portion of the boundary of the media that will be captured in response to the request to capture media (e.g., activation of a physical camera shutter button or activation of a virtual camera shutter button), the electronic device (e.g., 600) configures (1524) the electronic device to capture media with a first exposure setting (e.g., an automatic exposure setting) based on content at a location in the field-of-view of the one or more cameras that corresponds to the third location.
• Configuring the electronic device to capture media with the first exposure setting in accordance with a determination that the first input includes maintaining the first contact for a second threshold amount of time at a third location that does not correspond to a predefined portion of the camera display region that indicates at least the portion of the boundary of the media that will be captured in response to the request to capture media reduces the number of inputs needed to perform an operation, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• after configuring the electronic device (e.g., 600) to capture media with the first exposure setting (e.g., an automatic exposure setting) based on content at a location in the field-of-view of the one or more cameras that corresponds to the third location, the electronic device detects a change in the representation of the field-of-view of the one or more cameras (e.g., due to movement of the electronic device) that causes the content at a location in the field-of-view of the one or more cameras that corresponds to the third location to no longer be in the field-of-view of the one or more cameras. In some embodiments, in response to detecting the change, the electronic device (e.g., 600) continues to configure the electronic device to capture media with the first exposure setting.
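The persistence of the locked exposure described in the preceding bullet can be modeled with a few lines of state. A minimal Swift sketch, with all names and the bias representation being assumptions:

```swift
// Stores an exposure value locked from content at a held location and
// keeps returning it even after that content leaves the field-of-view.
struct ExposureLock {
    private(set) var lockedBias: Float?

    mutating func lock(bias: Float) {
        lockedBias = bias // set from light values at the held location
    }

    func currentBias(autoBias: Float) -> Float {
        // Prefer the locked value after scene changes; fall back to the
        // automatic value only when no lock has been set.
        lockedBias ?? autoBias
    }
}
```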
  • methods 700, 900, 1100, 1300, 1700, 1900, 2000, 2100, 2300, 2500, 2700, 2800, 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 1500. For brevity, these details are not repeated below.
  • FIGS. 16A-16Q illustrate exemplary user interfaces for varying zoom levels using an electronic device in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 17A-17B.
  • FIG. 16A illustrates device 600 in a portrait orientation 1602 (e.g., vertical), where device 600’s long axis is running vertically. While device 600 is in portrait orientation 1602, the device displays portrait orientation camera interface 1680.
  • Portrait orientation interface 1680 includes portrait orientation live preview 1682, zoom toggle affordance 1616, shutter affordance 1648, and camera switching affordance 1650.
• portrait orientation live preview 1682 is a live preview of a portion of the field-of-view of front-facing camera 1608. Live preview 1682 does not include grayed-out portions 1681 and 1683, which also display previews of content from the field-of-view of front-facing camera 1608.
  • portrait orientation live preview 1682 shows person 1650A preparing to take an image (e.g., a selfie) using front-facing camera 1608 of device 600.
• portrait orientation live preview 1682 is displayed at zoom level 1620A, which uses 80% of front-facing camera 1608's field-of-view (e.g., the live preview is zoomed in) that is available for display in portrait orientation live preview 1682.
  • Portrait orientation live preview 1682 shows person 1650A (e.g., a user of device 600) standing in the center with person 1650B partially visible on the right side of the image and person 1650C partially visible on the left side of the image. While displaying portrait orientation live preview 1682 in the way described above, device 600 detects input 1695A (e.g., a tap) on shutter affordance 1648.
• in response to detecting input 1695A, device 600 captures media representative of portrait orientation live preview 1682 and displays a representation 1630 of the media in portrait orientation camera user interface 1680.
• while displaying portrait orientation live preview 1682, device 600 detects clockwise rotational input 1695B that causes device 600 to be physically rotated into a landscape orientation (e.g., with the device's long axis running horizontally).
  • person 1650A rotates device 600 clockwise in order to capture more of the environment in the horizontal direction (e.g., so as to bring persons 1650B and 1650C into the field-of-view).
• in response to detecting rotational input 1695B, device 600 replaces portrait orientation camera user interface 1680 with landscape orientation camera interface 1690 automatically, without additional intervening user inputs.
  • Landscape orientation camera interface 1690 includes a landscape orientation live preview 1692 that is displayed at zoom level 1620B in landscape orientation 1604.
  • Zoom level 1620B is different from zoom level 1620A in that device 600 is using 100% of front-facing camera 1608’s field-of-view (“FOV”) to display landscape orientation live preview 1692.
• landscape orientation live preview 1692 shows the entire face of person 1650A, as well as persons 1650B and 1650C.
  • landscape orientation live preview 1692 while at zoom level 1620B (100% of FOV), allows the user to frame a photo (e.g., a potential photo) that includes a greater degree of content.
  • Landscape orientation live preview 1692 also shows a new person, person 1650D, who was not shown in portrait orientation live preview 1682.
• device 600 automatically shifts between zoom level 1620A (80% of FOV) and zoom level 1620B (100% of FOV) when the device orientation changes from portrait to landscape because users typically want to use the front cameras of their devices to capture more of their environment when in a landscape orientation than in a portrait orientation.
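That orientation-to-zoom mapping is simple enough to state directly in code. An illustrative Swift sketch of the front-camera behavior in FIGS. 16A-16C (the fraction values mirror the 80%/100% figures above; the names are otherwise assumptions):

```swift
enum DeviceOrientation { case portrait, landscape }

// Fraction of the front camera's field-of-view used by the live preview.
func fieldOfViewFraction(for orientation: DeviceOrientation) -> Double {
    switch orientation {
    case .portrait:  return 0.8 // zoom level 1620A: tighter selfie framing
    case .landscape: return 1.0 // zoom level 1620B: widest framing
    }
}
```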
• in response to detecting input 1695C, device 600 captures media representative of landscape orientation live preview 1692 and displays a representation 1632 of the media in landscape orientation camera user interface 1690.
  • Representation 1632 is different from representation 1630 in that it is in landscape orientation 1604 and matches zoom level 1620B (100% of FOV).
  • Device 600 is also capable of changing zoom levels based on various manual inputs. For instance, while displaying landscape orientation live preview 1692 at zoom level 1620B, device 600 detects de-pinch input 1695D or tap input 1695DD on zoom toggle affordance 1616. As illustrated in FIG. 16E, in response to detecting input 1695D or tap input 1695DD, device 600 changes the zoom level of landscape orientation live preview 1692 from zoom level 1620B (100% of FOV) back to zoom level 1620A (80% of FOV).
  • a de-pinch gesture while at zoom level 1620B (100% of FOV) snaps to zoom level 1620A (80% of FOV; a predetermined zoom level) rather than setting a zoom level entirely based on the magnitude of the de-pinch gesture.
  • live preview 1692 remains in landscape orientation 1604.
  • landscape orientation live preview 1692 currently shows only a portion of person 1650B and ceases to show person 1650D.
  • landscape orientation live preview 1692 shows a different image than portrait orientation live preview 1682 showed because device 600 is now in landscape orientation 1604.
  • device 600 detects de-pinch input 1695E.
• in response to detecting input 1695E, device 600 changes the zoom level of landscape orientation live preview 1692 from zoom level 1620A (80% of FOV) to zoom level 1620C (e.g., 40% of FOV).
  • device 600 detects pinching input 1695F.
• in response to detecting pinching input 1695F, device 600 changes the zoom level of landscape orientation live preview 1692 from zoom level 1620C (40% of FOV) back to zoom level 1620A (80% of FOV), which is described above in relation to FIG. 16E. While displaying landscape orientation live preview 1692 at zoom level 1620A, device 600 detects pinching input 1695G.
• device 600 changes the zoom level of landscape orientation live preview 1692 from zoom level 1620A (80% of FOV) back to zoom level 1620B (100% of FOV), which is described in relation to FIGS. 16C-16D. While displaying landscape orientation live preview 1692, device 600 detects counterclockwise rotational input 1695H that causes device 600 to be rotated back into portrait orientation 1602.
• device 600 automatically displays, without intervening inputs, portrait orientation camera user interface 1680 that includes portrait orientation live preview 1682 in portrait orientation 1602 at zoom level 1620A (80% of FOV).
• device 600 is capable of allowing a user to automatically, without additional inputs, change camera user interface 1690 at zoom level 1620B back into camera user interface 1680 (as illustrated in FIG. 16A) at zoom level 1620A.
• zoom toggle affordance 1616 is used to change a live preview between zoom level 1620A (using 80% of FOV) and zoom level 1620B (using 100% of FOV), which is different from pinching inputs (as described above) that allow a user to change the zoom level of a live preview to other zoom levels (e.g., zoom level 1620C).
• While displaying portrait orientation live preview 1682 at zoom level 1620A, device 600 detects input 1695I (e.g., a tap) on zoom toggle affordance 1616.
• in response to detecting input 1695I, device 600 changes the zoom level of portrait orientation live preview 1682 from zoom level 1620A (80% of FOV) to zoom level 1620B (100% of FOV).
  • portrait orientation live preview 1682 shows the full face of person 1650A, as well as persons 1650B and 1650C.
• FIGS. 16J-16N depict scenarios where device 600 does not automatically change the zoom level of the camera user interface when detecting rotational input. Turning back to FIG. 16J, device 600 detects an input 1695J on camera switching affordance 1650.
• in response to detecting input 1695J, device 600 displays portrait orientation camera interface 1680 that includes portrait orientation live preview 1684 depicting at least a portion of the field-of-view of one or more cameras.
  • Portrait orientation live preview 1684 is displayed at zoom level 1620D.
• device 600 has switched from being configured to capture media using front-facing camera 1608 to being configured to capture media using a rear-facing camera (e.g., a camera on the opposite side of the device with respect to front-facing camera 1608).
• While displaying live preview 1684, device 600 detects clockwise rotational input 1695K of device 600, changing the device from being in a portrait orientation to a landscape orientation.
• As illustrated in FIG. 16L, in response to detecting rotational input 1695K, device 600 displays landscape orientation camera interface 1690.
• Landscape orientation camera interface 1690 includes landscape orientation live preview 1694 that depicts the field-of-view of one or more cameras in landscape orientation 1604.
  • Device 600 does not automatically adjust the zoom level, as was seen in FIGS. 16B-16C, so landscape orientation live preview 1694 remains displayed at zoom level 1620D because automatic zoom criteria are not satisfied when device 600 is configured to capture media using a rear-facing camera (e.g., camera on the opposite side of device with respect to front-facing camera 1608).
• While displaying landscape orientation live preview 1694, device 600 detects input 1695L corresponding to a video capture mode affordance.
• in response to detecting input 1695L, device 600 initiates a video capture mode.
• In the video capture mode, device 600 displays landscape orientation camera interface 1691 at zoom level 1620E.
• Landscape orientation camera interface 1691 includes landscape orientation live preview 1697 that depicts the field-of-view of a rear-facing camera (e.g., a camera on the opposite side of the device with respect to front-facing camera 1608). While displaying landscape orientation camera interface 1691, device 600 detects input 1695M on camera switching affordance 1650.
  • landscape orientation camera interface 1691 includes landscape orientation live preview 1697 that depicts the FOV in landscape orientation 1604.
  • landscape orientation camera interface 1691 and live preview 1697 remain in the landscape orientation 1604 at zoom level 1620E.
• device 600 has switched from being configured to capture media using a rear-facing camera (e.g., a camera on the opposite side of the device with respect to front-facing camera 1608) to being configured to capture media using front-facing camera 1608 and remains in video capture mode. While displaying camera interface 1691, device 600 detects a counterclockwise rotational input that causes device 600 to be rotated back into portrait orientation 1602.
• portrait orientation interface 1681 includes live preview 1687 that depicts at least a portion of the field-of-view of front-facing camera 1608 in portrait orientation 1602 at zoom level 1620E because automatic zoom criteria are not satisfied when device 600 is configured to capture media in video mode.
• device 600 displays a notification 1640 to join a live communication session that includes join affordance 1642. While displaying notification 1640, device 600 detects input (e.g., a tap) 1695O on join affordance 1642.
• in response to detecting input 1695O, device 600 joins the live communication session.
  • device 600 switches from video capture mode to a live communication session mode.
  • device 600 displays portrait orientation camera interface 1688 in portrait orientation 1602 that includes displaying a portrait orientation live preview 1689 at zoom level 1620A (80% of FOV).
  • device 600 detects clockwise rotational input 1695P that causes device 600 to be rotated into landscape orientation 1604.
  • landscape orientation camera interface 1698 includes a landscape orientation live preview 1699 that is displayed at zoom level 1620B (e.g., at 100% of FOV) because a set of automatic zoom criteria are satisfied when device 600 is transmitting live video in a live communication session (e.g., as opposed to being in a video capture mode).
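FIGS. 16A-16Q collectively imply a predicate for when rotation triggers the automatic zoom change: it applies to the front-facing camera for photo capture and live communication, but not to rear-facing capture or to video recording. A hedged Swift sketch of that predicate (the enum and function are assumptions drawn from the figures, not the patent's terms):

```swift
enum CaptureMode { case photo, video, liveCommunication }

// Whether rotating the device should automatically change the zoom level.
func automaticZoomCriteriaSatisfied(usingFrontCamera: Bool,
                                    mode: CaptureMode) -> Bool {
    guard usingFrontCamera else { return false } // FIGS. 16K-16L: rear camera, no auto zoom
    switch mode {
    case .photo, .liveCommunication: return true  // FIGS. 16B-16C and 16P-16Q
    case .video:                     return false // FIGS. 16M-16N
    }
}
```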
  • FIGS. 17A-17B are a flow diagram illustrating a method for varying zoom levels using an electronic device in accordance with some embodiments.
• Method 1700 is performed at a device (e.g., 100, 300, 500, 600) with a display device (e.g., a touch-sensitive display) and one or more cameras (e.g., 1608) (e.g., dual cameras, triple camera, quad cameras, etc., on different sides of the electronic device (e.g., a front camera, a back camera)).
  • Some operations in method 1700 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
• the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 1700 provides an intuitive way for varying zoom levels.
  • the method reduces the cognitive burden on a user for varying zoom levels, thereby creating a more efficient human-machine interface.
• While the electronic device (e.g., 600) is in a first orientation (e.g., 1602) (e.g., the electronic device is oriented in portrait orientation (e.g., the electronic device is vertical)), the electronic device displays (1702), via the display device, a first camera user interface (e.g., 1680) for capturing media (e.g., image, video) in a first camera orientation (e.g., portrait orientation) at a first zoom level (e.g., zoom ratio (e.g., 1X, 5X, 10X)).
  • the electronic device detects (1704) a change (e.g., 1695B) in orientation of the electronic device from the first orientation (e.g., 1602) to a second orientation (e.g., 1604).
• In response to detecting the change in orientation of the electronic device from the first orientation to the second orientation and in accordance with a determination that a set of automatic zoom criteria are met, the electronic device automatically, without intervening user inputs, displays, via the display device, a second camera user interface (e.g., 1690) for capturing media in a second camera orientation (e.g., landscape orientation) at a second zoom level (e.g., 1620B) that is different from the first zoom level.
  • Automatically displaying, without intervening user inputs, a second camera user interface for capturing media in a second camera orientation at a second zoom level that is different from the first zoom level reduces the number of inputs needed to perform an operation, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device displays (1710) (e.g., in the first camera user interface and in the second camera user interface) a media capture affordance (e.g., a selectable user interface object) (e.g., a shutter button).
  • the electronic device detects (1712) a first input that corresponds to the media capture affordance (e.g., 1648) (e.g., a tap on the affordance).
• In response to detecting the first input (1714), in accordance with a determination that the first input was detected while the first camera user interface (e.g., 1680) is displayed, the electronic device (e.g., 600) captures (1716) media at the first zoom level (e.g., 1620A). In some embodiments, in response to detecting the first input (1714), in accordance with a determination that the first input was detected while the second camera user interface (e.g., 1690) is displayed, the electronic device (e.g., 600) captures (1718) media at the second zoom level (e.g., 1620B).
  • Capturing media at different zoom levels based on a determination of whether the first input is detected while the first camera user interface is displayed or while the second camera user interface is displayed enables a user to quickly and easily capture media without the need to manually configure zoom levels.
  • Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • displaying the first camera user interface includes displaying a first representation (e.g., 1682) (e.g., a live preview (e.g., a live feed of the media that can be captured)) of a field-of-view of the camera (e.g., an open observable area that is visible to a camera, the horizontal (or vertical or diagonal) length of an image at a given distance from the camera lens).
• the first representation is displayed in the first camera orientation (e.g., a portrait orientation) at the first zoom level (e.g., 1620A) (e.g., 80% of camera’s field-of-view, zoom ratio (e.g., 1X, 5X, 10X)).
• the first representation (e.g., 1682) is displayed in real time.
  • displaying the second camera user interface includes displaying a second representation (e.g., 1692) (e.g., a live preview (e.g., a live feed of the media that can be captured)) of the field-of-view of the camera (e.g., an open observable area that is visible to a camera, the horizontal (or vertical or diagonal) length of an image at a given distance from the camera lens).
• the second representation (e.g., 1692) is displayed in the second camera orientation (e.g., a landscape orientation) at the second zoom level (e.g., 1620B) (e.g., 100% of camera’s field-of-view, zoom ratio (e.g., 1X, 5X, 10X)).
  • the second representation (e.g., 1692) is displayed in real time.
• the first orientation (e.g., 1602) is a portrait orientation and the first representation (e.g., 1682) displayed in the camera interface is a portion of the field-of-view of the camera.
• the second orientation (e.g., 1604) is a landscape orientation and the second representation (e.g., 1692) displayed in the camera interface is an entire field-of-view of the camera (e.g., the field-of-view of the camera (e.g., 1608) is not cropped).
• While displaying the first representation (e.g., 1682) of the field-of-view of the camera, the electronic device (e.g., 600) receives (1720) a request (e.g., a pinch gesture on the camera user interface) to change the first zoom level (e.g., 1620A) to a third zoom level (e.g., 1620B).
• the request is received when the set of automatic zoom criteria are satisfied (e.g., automatic zoom criteria include a criterion that is satisfied when the electronic device is using a first camera (e.g., a front camera) to capture the field-of-view of the camera and/or when the electronic device is in one or more other modes (e.g., portrait mode, photo mode, mode associated with a live communication session)).
• In response to receiving the request to change the first zoom level (e.g., 1620A) to the third zoom level (e.g., 1620B), the electronic device replaces (1722) display of the first representation (e.g., 1682) with a third representation (e.g., a live preview (e.g., a live feed of the media that can be captured)) of the field-of-view of the camera.
  • the third representation is in the first camera orientation and at the third zoom level.
• the third zoom level (e.g., 1620B) is the same as the second zoom level (e.g., 1620B).
  • a user can use a pinch out (e.g., two contacts moving relative to each other so that a distance between the two contacts increases) gesture to zoom in on the representation from a first zoom level (e.g., 80%) to a third zoom level (e.g., second zoom level (e.g., 100%)) (e.g., capture less of the field-of-view of the camera).
• a user can use a pinch in (e.g., two fingers coming together) gesture to zoom out the representation from a first zoom level (e.g., 100%) to a second zoom level (e.g., 80%) (e.g., capture more of the field-of-view of the camera).
  • the electronic device while displaying the first representation (e.g., 1682) of the field-of-view of the camera, the electronic device (e.g., 600) displays (1724) (e.g., displaying in the first camera user interface and in the second camera user interface) a zoom toggle affordance (e.g., 1616) (e.g., a selectable user interface object). Displaying a zoom toggle affordance while displaying the first representation of the field-of-view of the camera enables a user to quickly and easily adjust the zoom level of the first representation manually, if needed.
• the electronic device detects (1726) a second input (e.g., 1695I) that corresponds to selection of the zoom toggle affordance (e.g., 1616) (e.g., a selectable user interface object) (e.g., a tap on the affordance).
• Selection of the zoom toggle affordance corresponds to a request to change the first zoom level to a fourth zoom level. In response to detecting the second input, the electronic device (e.g., 600) replaces (1728) display of the first representation (e.g., 1682) with a fourth representation (e.g., a live preview (e.g., a live feed of the media that can be captured)) of the field-of-view of the camera.
• the fourth representation is in the first camera orientation and at the fourth zoom level.
  • the fourth zoom level is the same as the second zoom level.
  • a user taps an affordance to zoom in on the representation from a first zoom level (e.g., 80%) to a third zoom level (e.g., the second zoom level (e.g., 100%)) (e.g., capture less of the field-of-view of the camera).
  • a user can tap on an affordance to zoom out the representation from a first zoom level (e.g., 100%) to a third zoom level (e.g., second zoom level (e.g., 80%)) (e.g., capture more of the field-of-view of the camera).
  • the affordance for changing the zoom level can toggle between a zoom in and a zoom out state when selected (e.g., display of the affordance can change to indicate that the next selection will cause the representation to be zoomed out or zoomed in).
  • the zoom toggle affordance (e.g., 1616) is displayed in the first camera user interface (e.g., 1680) and the second camera user interface (e.g., 1690). In some embodiments, the zoom toggle affordance (e.g., 1616) is initially displayed in the first camera user interface with an indication that it will, when selected, configure the electronic device to capture media using the second zoom level, and is initially displayed in the second camera user interface with an indication that it will, when selected, configure the electronic device (e.g., 600) to capture media using the first zoom level.
• While displaying the first representation (e.g., 1682) of the field-of-view of the camera, the electronic device (e.g., 600) receives a request (e.g., a pinch gesture (e.g., 1695D-1695I) on the camera user interface) to change the first zoom level (e.g., 1620A) to a third zoom level (e.g., 1620B).
• the request is received when the electronic device (e.g., 600) is operating in a first mode (e.g., a mode that includes a determination that the electronic device is using a first camera (e.g., a front camera) to capture the field-of-view of the camera and/or a determination that the device is operating in one or more other modes (e.g., portrait mode, photo mode, mode associated with a live communication session)).
• In response to receiving the request to change the first zoom level (e.g., 1620A) to the third zoom level (e.g., 1620C), the electronic device replaces display of the first representation (e.g., 1682) with a fifth representation (e.g., a live preview (e.g., a live feed of the media that can be captured)) of the field-of-view of the camera.
  • the fifth representation is in the first camera orientation and at the third zoom level.
• the third zoom level is different from the second zoom level.
• the user can zoom in and out of the representation to a zoom level at which the device would not automatically display the representation when the orientation of the device is changed.
• the camera includes a first camera (e.g., a front camera (e.g., a camera located on the first side (e.g., front housing) of the electronic device)) and a second camera (e.g., a rear camera (e.g., located on the rear side (e.g., rear housing) of the electronic device)) that is distinct from the first camera.
• the set of automatic zoom criteria include a criterion that is satisfied when the electronic device (e.g., 600) is displaying, in the first camera user interface (e.g., 1680, 1690), a representation of the field-of-view of the first camera (e.g., as set by the user of the device).
• In accordance with a determination that the set of automatic zoom criteria are not met (e.g., the device is displaying a representation of the field-of-view of the second camera and not the first camera), the electronic device forgoes automatically, without intervening user inputs, displaying a second camera user interface (e.g., 1690) for capturing media in a second camera orientation (e.g., landscape orientation) at a second zoom level that is different from the first zoom level.
• Forgoing automatically displaying the second camera user interface for capturing media in the second camera orientation at the second zoom level in accordance with a determination that the set of automatic zoom criteria are not met prevents unintended access to the second camera user interface.
  • Automatically forgoing performing an operation when a set of conditions has not been met enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the set of automatic zoom criteria include a criterion that is satisfied when the electronic device (e.g., 600) is not in a video capture mode of operation (e.g., capturing video that does not include video captured while the electronic device is in a live communication session between multiple participants, streaming video (e.g., FIGS. 16M-16N)).
  • the set of automatic zoom criteria include a criterion that is satisfied when the electronic device (e.g., 600) is configured to capture video for a live communication session (e.g., communicating in live video chat (e.g., live video chat mode) between multiple participants, displaying a user interface for facilitating a live communication session (e.g., first camera user interface is a live communication session interface) (e.g., FIGS. 16P-16Q)).
• the first zoom level is higher than the second zoom level (e.g., the first zoom level is 10X and the second zoom level is 1X; the first zoom level is 100% and the second zoom level is 80%).
  • the electronic device detects a change in orientation of the electronic device from the second orientation (e.g., 1604) to the first orientation (e.g., 1602).
• In response to detecting the change in orientation of the electronic device (e.g., 600) from the second orientation to the first orientation (e.g., switching the device from landscape to portrait mode), the electronic device displays, on the display device, the first camera user interface (e.g., 1680).
• when switching the device from a landscape orientation (e.g., a landscape mode) to a portrait orientation (e.g., a portrait mode), the camera user interface zooms in and, when switching the device from a portrait orientation to a landscape orientation, the camera user interface zooms out.
  • methods 700, 900, 1100, 1300, 1500, 1900, 2000, 2100, 2300, 2500, 2700, 2800, 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 1700. For brevity, these details are not repeated below.
  • FIGS. 18A-18X illustrate exemplary user interfaces for managing media using an electronic device in accordance with some embodiments.
• the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 19A-19B, 20A-20C, and 21A-21C.
  • FIGS. 18A-18X illustrate device 600 operating in several environments with different levels of visible light.
• An environment that has an amount of light below a low-light threshold (e.g., 20 lux) will be referred to as a low-light environment. An environment having an amount of light above the low-light threshold will be referred to as a normal environment.
  • device 600 can detect, via one or more cameras, whether there is a change in the amount of light in an environment (e.g., in the field-of-view of the one or more cameras (FOV)) and determine whether device 600 is operating in a low-light environment or a normal environment.
  • device 600 displays a camera user interface that includes camera display region 604, control region 606, and indicator region 602.
  • Live preview 630 is a representation of the FOV.
  • Live preview 630 shows a person posing for a picture in a well-lit environment.
  • the amount of light in the FOV is above a low-light threshold and device 600 is not operating in the low-light environment. Because device 600 is not operating in a low-light environment, device 600 continuously captures data in the FOV and updates live preview 630 based on a standard frame rate.
• device 600 displays live preview 630 showing a person posing for a picture in a low-light environment, which is evident by live preview 630 displaying a visually darker image. Because device 600 is operating in the low-light environment, device 600 displays low-light mode status indicator 602c and flash status indicator 602a. Low-light mode status indicator 602c indicates that low-light mode is inactive (e.g., device 600 is not configured to operate in low-light mode) and flash status indicator 602a indicates that a flash operation is active (e.g., device 600 is configured to perform a flash operation when capturing an image). In some embodiments, flash status indicator 602a can appear in control region 606, even when device 600 is not operating in a low-light environment. At FIG. 18B, device 600 detects input 1895A on low-light mode status indicator 602c.
• In response to input 1895A, device 600 updates low-light mode status indicator 602c to indicate that low-light mode is active and flash status indicator 602a to indicate that the flash operation is inactive. While low-light mode and the flash operation are both useful when capturing media in a darker environment, in the present embodiment, low-light mode is mutually exclusive with the flash operation.
  • device 600 displays adjustable low-light mode control 1804 for setting a capture duration for capturing media in the low-light mode. Indication 1818 on adjustable low- light mode control 1804 indicates that the low-light mode is set to a particular capture duration, where each tick mark on adjustable low-light mode control 1804 represents a different capture duration.
  • live preview 630 is visually brighter in FIG. 18C than it was in FIG. 18B.
  • device 600 operates one or more of its cameras using a lower frame rate (e.g., corresponding to longer exposure times).
  • device 600 lowers the frame rate from the standard frame rate.
  • device 600 is being held substantially still and the subject in the FOV is likewise substantially still.
• device 600 forgoes lowering the frame rate, or lowers the frame rate to a lesser degree than if movement were not detected, as lower frame rates can result in blurred images when content is moving in the FOV.
  • device 600 can be configured to balance the options between decreasing the frame rate due to low-light in the environment and increasing the frame rate due to detected movement in the environment.
• In response to detecting input 1895B, device 600 has started capturing media using low-light mode.
  • live preview 630 ceases to be displayed.
  • live preview 630 darkens to black.
  • device 600 also replaces display of shutter affordance 610 with stop affordance 1806 and generates tactile response 1820A.
  • Stop affordance 1806 indicates that low-light mode capture can be stopped by an input on stop affordance 1806.
• device 600 also initiates movement of indication 1818 towards a capture duration of zero (e.g., a countdown from 1 sec to zero).
  • adjustable low-light mode control 1804 also changes color (e.g., white to red) in response to detecting input 1895B.
  • device 600 moves indication 1818 on adjustable low-light mode control 1804 to a capture duration that is near zero.
  • live preview 630 is displayed with a representation of media that has been captured between the one second capture duration (e.g., in 18E) and the near zero capture duration.
  • device 600 displays a representation 1812 of the captured media.
  • Device 600 replaces display of stop affordance 1806 with shutter affordance 610 after the media is captured.
• While low-light mode status indicator 602c indicates that low-light mode is active, device 600 detects input 1895C on low-light mode status indicator 602c.
• In response to receiving input 1895C, device 600 updates low-light mode status indicator 602c to indicate that low-light mode is inactive and updates flash status indicator 602a to indicate that the flash operation is active. Further, in response to detecting input 1895C, device 600 ceases to display adjustable low-light mode control 1804. In some embodiments, when device 600 goes from operating in low-light conditions to normal conditions, adjustable low-light mode control 1804 ceases to be displayed automatically without any user input.
• device 600 increases the frame rate of one or more of its cameras and live preview 630 is visually darker, as in FIG. 18B.
• device 600 detects input 1895D on low-light mode control affordance 614b that device 600 has displayed adjacent to additional camera control affordance 614.
• In response to detecting input 1895D, device 600 updates low-light mode status indicator 602c to indicate that low-light mode is active and updates flash status indicator 602a to indicate that the flash operation is inactive.
  • Device 600 redisplays adjustable low-light mode control 1804 with indication 1818 set to the previous one second capture duration.
  • device 600 decreases the frame rate of one or more of its cameras, which makes live preview 630 visually brighter, as in FIG. 18C.
  • device 600 detects input 1895E on indication 1818 to adjust adjustable low-light mode control 1804 to a new capture duration.
• In response to receiving input 1895E, device 600 moves indication 1818 from a one second capture duration to a two second capture duration. While moving indication 1818 from the one second capture duration to the two second capture duration, device 600 brightens live preview 630. In some embodiments, device 600 displays a brighter live preview 630 by decreasing (e.g., further decreasing) the frame rate of one or more of its cameras and/or by applying one or more image-processing techniques. At FIG. 18I, device 600 detects input 1895F on indication 1818 to adjust adjustable low-light mode control 1804 to a new capture duration. In some embodiments, input 1895F is a second portion of input 1895E (e.g., a continuous dragging input that includes 1895E and 1895F).
• In response to detecting input 1895F, device 600 moves indication 1818 from a two second capture duration to a four second capture duration. While moving indication 1818 from the two second capture duration to the four second capture duration, device 600 further brightens live preview 630.
  • device 600 detects input 1895G on shutter affordance 610.
• As illustrated in FIGS. 18K-18M, in response to detecting input 1895G, device 600 initiates capture of media based on the four second capture duration.
  • FIGS. 18K-18M illustrate a winding up animation 1814.
• Winding up animation 1814 includes an animation of adjustable low-light mode control 1804 starting at 0 seconds (FIG. 18K) before progressing rapidly to the 2 second mark (FIG. 18L) before arriving at the 4 second mark (FIG. 18M), which is equal to the capture duration set on adjustable low-light mode control 1804 (e.g., four seconds). The winding up animation generates tactile output at various stages.
  • Winding up animation 1814 corresponds to the start of the low-light mode media capture.
• the winding up animation is a smooth animation that displays FIGS. 18K-18M at evenly spaced intervals.
  • device 600 generates a tactile output in conjunction with winding up animation (e.g., tactile outputs 1820B-1820D).
• the winding up animation occurs in a relatively short amount of time (e.g., 0.25 seconds, 0.5 seconds).
• After displaying winding up animation 1814, device 600 displays winding down animation 1822 as illustrated in FIGS. 18M-18Q. Winding down animation 1822 occurs based on the capture duration and coincides with image capture occurring. The winding down animation generates tactile output at various stages. Turning back to FIG. 18M, device 600 displays indication 1818 at a four second capture duration. As illustrated in FIG. 18N, device 600 has moved indication 1818 from the four second capture duration to three and a half seconds to indicate the remaining capture duration, without updating live preview 630 or generating a tactile output.
  • device 600 has moved indication 1818 from the three and a half second capture duration to a three second capture remaining duration.
  • Device 600 updates live preview 630 to show an image representative of camera data that has been captured up until the three second capture remaining duration (e.g., 1 second of captured camera data).
  • device 600 does not continuously update live preview 630 to show a brighter image. Instead, device 600 only updates live preview 630 at one second intervals of capture duration.
  • device 600 generates tactile output 1820E.
  • live preview 630 is visually brighter here because live preview 630 updates at one second intervals with additional, captured camera data. In some embodiments, the live preview is updated at intervals other than 1 second (e.g., 0.5 seconds, 2 seconds).
  • device 600 moves indication 1818 from a two second capture remaining duration to a zero capture remaining duration.
  • live preview 630 is visually brighter than it was in FIG. 18P.
• device 600 has completed capture over the full 4 second duration and displays a representation 1826 of the media that was captured.
• Representation 1826 is brighter than each of the live previews of FIGS. 18O (e.g., 1 second of data) and 18P (2 seconds of data) and is comparable in brightness to the live preview of FIG. 18Q (4 seconds of data).
• device 600 detects an input on stop affordance 1806 while capturing media and before the completion of the set capture duration. In such embodiments, device 600 uses data captured up to that point to generate and store media.
• FIG. 18S shows the result of an embodiment in which capture is stopped 1 second into a 4 second capture. In FIG. 18S, representation 1824 of the media captured in the 1 second interval prior to being stopped is noticeably darker than representation 1826 of FIG. 18R, which was captured over a 4 second duration.
  • device 600 detects input 1895R on adjustable low-light mode control 1804. As illustrated in FIG. 18T, in response to detecting input 1895R, device 600 moves indication 1818 from the four second capture duration to the zero second capture duration. In response to moving indication 1818 to the zero capture duration, device 600 updates low-light mode status indicator 602c to indicate that low-light mode is inactive. In addition, device 600 updates flash status indicator 602a to indicate that the flash operation is active.
  • setting low-light mode control 1804 to a duration of zero is equivalent to turning off low-light mode.
  • device 600 detects input 1895S on additional control affordance 614.
• In response to detecting input 1895S, device 600 displays low-light mode control affordance 614b in control region 606.
• FIGS. 18V-18X illustrate different sets of user interfaces showing flash status indicators 602a1-602a3 and low-light mode status indicators 602c1-602c3 in three different environments.
  • FIGS. 18V-18X show devices 600A, 600B, and 600C, which each include one or more features of devices 100, 300, 500, or 600.
• Device 600A displays adjustable flash control 662A as set to on, device 600B displays adjustable flash control 662B as set to off, and device 600C displays adjustable flash control 662C as set to auto.
  • adjustable flash control 662 sets a flash setting for device 600.
• FIG. 18V illustrates an environment where the amount 1888 of light in the FOV is between ten lux and zero lux, as shown by indicator graphic 1888. Because the amount of light in the FOV is between ten lux and zero lux (e.g., very low-light mode), device 600 displays the low-light status indicator as active only when flash is set to off. As shown in FIG. 18V, low-light indicator 602c2 is the only low-light indicator displayed as active and flash status indicator 602a2 is the only flash status indicator that is set to inactive because adjustable flash control 662B is set to off.
• FIG. 18W illustrates an environment where the amount 1890 of light in the FOV is between twenty lux and ten lux.
• Because the amount of light in the FOV is between twenty lux and ten lux, device 600 displays the low-light status indicator as inactive only when flash is set to on.
  • low-light indicator 602cl is the only low-light indicator displayed as inactive and flash status indicator 602al is the only flash status indicator that is set to active because adjustable flash control 662A is set to on.
• FIG. 18X illustrates an environment where the amount 1892 of light in the FOV is above twenty lux. Because the amount of light in the FOV is above 20 lux (e.g., normal light), a low-light indicator is not displayed on any of devices 600A-600C. Flash status indicator 602a1 is displayed as active because adjustable flash control 662A is set to on. Flash status indicator 602a2 is displayed as inactive because adjustable flash control 662B is set to off. Device 600C does not display a flash status indicator because adjustable flash control 662C is set to auto and device 600 has determined that flash is not automatically operable above 10 lux.
  • FIGS. 19A-19B are a flow diagram illustrating a method for varying frame rates using an electronic device in accordance with some embodiments.
• Method 1900 is performed at a device (e.g., 100, 300, 500, 600) with a display device (e.g., a touch-sensitive display) and one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera).
  • Some operations in method 1900 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
• the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 1900 provides an intuitive way for varying frame rates.
  • the method reduces the cognitive burden on a user for varying frame rates, thereby creating a more efficient human-machine interface.
  • the electronic device displays (1902), via the display device, a media capture user interface that includes displaying a representation (e.g., 630) (e.g., a representation over-time, a live preview feed of data from the camera) of a field-of-view of the one or more cameras (e.g., an open observable area that is visible to a camera, the horizontal (or vertical or diagonal) length of an image at a given distance from the camera lens).
  • displaying the media capture user interface includes (1904), in accordance with a determination that the variable frame rate criteria are met, displaying (1906) an indication (e.g., 602c) (e.g., a low-light status indicator) that a variable frame rate mode is active. Displaying the indication that a variable frame rate mode is active in accordance with a determination that the variable frame rate criteria are met provides a user with visual feedback of the state of the variable frame rate mode (e.g., 630 in 18B and 18C).
• displaying the media capture user interface includes (1904), in accordance with a determination that the variable frame rate criteria are not satisfied, displaying (1908) the media capture user interface without the indication that the variable frame rate mode is active.
  • the low-light status indicator indicates that the device is operating in a low-light mode (e.g., low-light status indicator includes a status (e.g., active or inactive) of whether the device is operating in a low-light mode).
  • the representation (e.g., 1802) of the field-of-view of the one or more cameras updated based on the detected changes in the field-of-view of the one or more cameras at the first frame rate is displayed, on the display device, at a first brightness (e.g., 630 in 18B and 18C).
  • the representation (e.g., 1802) of the field-of-view of the one or more cameras updated based on the detected changes in the field-of-view of the one or more cameras at the second frame rate that is lower than the first frame rate is displayed (e.g., by the electronic device), on the display device, at a second brightness that is visually brighter than the first brightness (e.g., 630 in 18B and 18C).
  • decreasing the frame rate increases the brightness of the representation that is displayed on the display (e.g., 630 in 18B and 18C).
• While displaying the media capture user interface (e.g., 608), the electronic device (e.g., 600) detects (1910), via the one or more cameras, changes (e.g., changes that are indicative of movement) in the field-of-view of the one or more cameras (e.g., 630 in 18B and 18C).
  • the detected changes include detected movement (e.g., movement of the electronic device; a rate of change of the content in the field-of-view).
  • the second frame rate is based on an amount of the detected movement.
  • the second frame rate increases as the movement increases (e.g., 630 in 18B and 18C).
• In response to detecting the changes in the field-of-view of the one or more cameras, in accordance with a determination that variable frame rate criteria (e.g., a set of criteria that govern whether the representation of the field-of-view is updated with a variable or static frame rate) are satisfied and that the detected changes in the field-of-view of the one or more cameras satisfy movement criteria, the electronic device (e.g., 600) updates the representation of the field-of-view of the one or more cameras based on the detected changes at a first frame rate.
  • the electronic device By updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at a first frame rate in accordance with a determination that the detected changes in the field-of-view of the one or more cameras satisfy movement criteria, the electronic device performs an operation when a set of conditions has been met without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• frame rate criteria include a criterion that is satisfied when the electronic device is determined to be moving (e.g., movement of the device is more than or equal to a predetermined threshold (e.g., the predetermined threshold is based on position displacement, speed, velocity, acceleration, or a combination of any thereof)).
• frame rate criteria include a criterion that is satisfied when the electronic device (e.g., 600) is determined to be not moving (e.g., 630 in 18B and 18C) (e.g., substantially stationary (e.g., movement of the device is less than a predetermined threshold (e.g., the predetermined threshold is based on position displacement, speed, velocity, acceleration, or a combination of any thereof))).
• In response to detecting the changes in the field-of-view of the one or more cameras, in accordance with a determination that the variable frame rate criteria are satisfied and that the detected changes in the field-of-view of the one or more cameras do not satisfy the movement criteria, the electronic device (e.g., 600) updates the representation of the field-of-view of the one or more cameras based on the detected changes at a second frame rate that is lower than the first frame rate.
  • the electronic device By updating the representation of the field-of-view of the one or more cameras based on the detected changes in the field-of-view of the one or more cameras at the second frame rate in accordance with a determination that the detected changes in the field-of-view of the one or more cameras do not satisfy the movement criteria, the electronic device performs an operation when a set of conditions has been met (or, on the other hand, has not been met) without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • variable frame rate criteria include a criterion that is satisfied when ambient light in the field-of-view of the one or more cameras is below a threshold value (e.g., the variable frame rate criteria are not satisfied when ambient light is above the threshold value) and prior to detecting the changes in the field-of-view of the one or more cameras, the representation of the field-of-view of the one or more cameras is updated at a third frame rate (e.g., a frame rate in normal lighting conditions) (e.g., 1888, 1890, and 1892) (1918).
• In response to detecting the changes in the field-of-view of the one or more cameras and in accordance with a determination that the variable frame rate criteria are not met, the electronic device (e.g., 600) maintains (1920) the updating of the representation of the field-of-view of the one or more cameras at the third frame rate (e.g., irrespective of whether the detected changes in the field-of-view of the one or more cameras satisfy the movement criteria (e.g., without determining or without consideration of the determination)) (e.g., 630 in FIG. 8A).
• By maintaining the updating of the representation of the field-of-view of the one or more cameras at the third frame rate in response to detecting the changes in the field-of-view of the one or more cameras and in accordance with a determination that the variable frame rate criteria are not met, the electronic device performs an operation when a set of conditions has been met (or, on the other hand, has not been met) without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • variable frame rate criteria include a criterion that is satisfied when a flash mode is inactive.
  • the low-light status indicator is mutually exclusive with a flash operation (e.g., active when a flash operation is inactive or inactive when a flash operation is active).
  • the status of a flash operation and the status of a low-light capture mode are opposite of each other.
  • the second frame rate is based on an amount of ambient light in the field-of-view of the one or more cameras being below a respective threshold.
• the ambient light can be detected by the one or more cameras or a dedicated ambient light sensor.
• the frame rate decreases as the ambient light decreases.
  • the movement criteria includes a criterion that is satisfied when the detected changes in the field-of-view of the one or more cameras correspond to movement of the electronic device (e.g., 600) (e.g., correspond to a rate of change of the content in the field-of-view due to movement) that is greater than a movement threshold (e.g., a threshold rate of movement).
  • methods 700, 900, 1100, 1300, 1500, 1700, 2000, 2100, 2300, 2500, 2700, 2800, 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 1900.
  • FIGS. 20A-20C are a flow diagram illustrating a method for accommodating lighting conditions using an electronic device in accordance with some embodiments.
• Method 2000 is performed at a device (e.g., 100, 300, 500, 600) with a display device (e.g., a touch-sensitive display) and one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera).
  • Some operations in method 2000 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
• the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 2000 provides an intuitive way for accommodating lighting conditions.
• the method reduces the cognitive burden on a user for accommodating lighting conditions, thereby creating a more efficient human-machine interface.
  • the electronic device receives (2002) a request to display a camera user interface (e.g., a request to display the camera application or a request to switch to a media capture mode within the camera application).
  • the electronic device In response to receiving the request to display the camera user interface, the electronic device (e.g., 600) displays (2004), via the display device, a camera user interface.
• Displaying the camera user interface (2004) includes the electronic device (e.g., 600) displaying (2006), via the display device (e.g., 602), a representation (e.g., 630) (e.g., a representation over-time, a live preview feed of data from the camera) of a field-of-view of the one or more cameras (e.g., an open observable area that is visible to a camera, the horizontal (or vertical or diagonal) length of an image at a given distance from the camera lens).
  • Displaying the camera user interface (2004) includes, in accordance with a determination that low-light conditions have been met, where the low-light conditions include a condition that is met when ambient light in the field-of-view of the one or more cameras is below a respective threshold (e.g., 20 lux) (e.g., or, in the alternative, between a respective range of values), the electronic device (e.g., 600) displaying (2008), concurrently with the representation (e.g., 630) of the field-of-view of the one or more cameras, a control (e.g., 1804) (e.g., a slider) for adjusting a capture duration for capturing media (e.g., image, video) in response to a request to capture media (e.g., a capture duration adjustment control).
  • the adjustable control (e.g., 1804) includes tick marks, where each tick mark is representative of a value on the adjustable control.
• the ambient light is determined by detecting ambient light via the one or more cameras or a dedicated ambient light sensor.
  • Displaying the camera user interface (2004) includes, in accordance with a determination that the low-light conditions have not been met, the electronic device (e.g., 600) forgoes display of (2010) the control (e.g., 1804) for adjusting the capture duration.
• By forgoing display of the control for adjusting the capture duration in accordance with a determination that the low-light conditions have not been met, the electronic device performs an operation when a set of conditions has been met (or, has not been met) without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• While displaying the control (e.g., a slider) for adjusting the capture duration, the electronic device acquires (2012) (e.g., receives, determines, obtains) an indication that low-light conditions (e.g., decrease in ambient light or increase in ambient light) are no longer met (e.g., at another time another determination of whether low-light conditions are met occurs).
• In response to acquiring the indication, the electronic device ceases to display (2014), via the display device, the control for adjusting the capture duration.
• By ceasing to display (e.g., automatically, without user input) the control for adjusting the capture duration in response to acquiring the indication that low-light conditions are no longer met, the electronic device performs an operation when a set of conditions has been met (or, has not been met) without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• While displaying the representation (e.g., 630) of the field-of-view of the one or more cameras without concurrently displaying the control (e.g., 1804) for adjusting the capture duration, the electronic device acquires (2030) (e.g., receives, determines, detects, obtains) an indication that low-light conditions have been met (e.g., at another time another determination of whether low-light conditions are met occurs).
• In response to acquiring the indication, the electronic device displays (2032), concurrently with the representation of the field-of-view of the one or more cameras, the control (e.g., 1804) for adjusting the capture duration.
• Displaying the control for adjusting the capture duration in response to acquiring the indication that low-light conditions have been met provides a user quick and convenient access to the control for adjusting the capture duration when the control is likely to be needed.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• the electronic device (e.g., 600) maintains forgoing display of the control for adjusting the capture duration for capturing media in response to a request to capture media.
  • the low-light conditions include a condition that is met when a flash mode is inactive (e.g., a flash setting is set to off, the status of a flash operation is inactive).
• the control (e.g., 1804) for adjusting the capture duration is a slider.
  • the slider includes tick marks, where each tick mark (e.g., displayed at intervals) is representative of a capture duration.
  • displaying the camera user interface further includes the electronic device (e.g., 600) displaying (2016), concurrently with the representation (e.g., 1802) of the field-of-view of the one or more cameras, a media capturing affordance (e.g., 610) (e.g., a selectable user interface object) that, when selected, initiates the capture of media using the one or more cameras (e.g., a shutter affordance; a shutter button).
• While displaying the control (e.g., 1804) for adjusting the capture duration, the electronic device displays (2018) a first indication (e.g., number, slider knob (e.g., bar) on slider track) of a first capture duration (e.g., measured in time (e.g., total capture time; exposure time), number of pictures/frames). Displaying the first indication of the first capture duration while displaying the control for adjusting the capture duration provides visual feedback to a user of the set capture duration for the displayed representation.
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user- device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
• In response to receiving a request (e.g., dragging a slider control on the adjustable control to an indication (e.g., value) on the adjustable control) to adjust the control (e.g., 1804) for adjusting the capture duration from the first capture duration (e.g., measured in time (e.g., total capture time; exposure time), number of pictures/frames) to a second capture duration (e.g., measured in time (e.g., total capture time; exposure time), number of pictures/frames), the electronic device (e.g., 600) replaces (2020) display of the first indication of the first capture duration with display of a second indication of the second capture duration.
  • the capture duration is displayed when set. In some embodiments, the capture duration is not displayed. In some embodiments, the duration is the same as the value set via the adjustable control. In some embodiments, the duration is different than the value set via the adjustable input control (e.g., the value is 1 second but the duration is 0.9 seconds; the value is 1 second but the duration is 8 pictures). In some of these embodiments, the correspondence (e.g., translation) of the value to the duration is based on the type of the electronic device (e.g., 600) and/or camera, or the type of software that is running on the electronic device or camera.
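A minimal sketch of that value-to-duration translation, assuming hypothetical names and an invented device distinction:

```swift
// Hypothetical sketch of the value-to-duration translation described above:
// the value shown on the control need not equal the effective capture
// duration, and the mapping may depend on the device/camera type.
enum CaptureHardware {
    case standard
    case stabilized   // e.g., hardware capable of longer handheld exposures
}

func effectiveCaptureDuration(displayedValue seconds: Double,
                              hardware: CaptureHardware) -> Double {
    switch hardware {
    case .standard:
        // e.g., a displayed "1s" actually captures for 0.9 s
        return seconds * 0.9
    case .stabilized:
        return seconds
    }
}
```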
  • the representation (e.g., 630) of the field-of-view of the one or more cameras is a first representation of the first field of view of the one or more cameras (2022).
  • a brightness of the fourth representation is different than a brightness of the fifth representation (2028).
  • while displaying the second indication of the second capture duration, the electronic device (e.g., 600) receives a request to capture media.
  • receiving the request to capture the media corresponds to a selection of the media capture affordance (e.g., tap).
  • in response to receiving the request to capture media and in accordance with a determination that the second capture duration corresponds to a predetermined capture duration that deactivates low-light capture mode (e.g., a duration less than or equal to zero (e.g., a duration that corresponds to a duration for operating the device in normal conditions or another condition)), the electronic device (e.g., 600) initiates capture, via the one or more cameras, of media based on a duration (e.g., a normal duration (e.g., equal to a duration for capturing still photos on the electronic device)) that is different than the second capture duration.
  • By initiating capture of media based on the duration (e.g., that is different than the second capture duration) in response to receiving the request to capture media and in accordance with a determination that the second capture duration corresponds to the predetermined capture duration that deactivates low-light capture mode, the electronic device performs an operation when a set of conditions has been met without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • while displaying the second indication of the second capture duration, the electronic device receives a request to capture media.
  • receiving the request to capture the media corresponds to a selection of the media capture affordance (e.g., 610) (e.g., tap).
  • in response to receiving the request to capture media (and, in some embodiments, in accordance with a determination that the second capture duration does not correspond to a predetermined capture duration that deactivates low-light capture mode), the electronic device (e.g., 600) initiates capture, via the one or more cameras, of media based on the second capture duration.
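The two capture branches above (a predetermined duration that deactivates low-light capture mode versus a user-set duration) can be summarized in a hypothetical sketch; the threshold, default duration, and function names are assumptions:

```swift
// Hypothetical sketch of the capture branches described above. A requested
// duration at or below a predetermined value (0 here) deactivates low-light
// capture mode and falls back to a normal still-photo duration.
let normalStillDuration = 1.0 / 60.0  // seconds; illustrative default exposure

func startCapture(duration: Double) {
    // Stand-in for the actual camera pipeline.
    print("Capturing for \(duration) s")
}

func initiateCapture(requestedDuration: Double) {
    if requestedDuration <= 0 {
        // Predetermined duration: low-light capture mode is deactivated,
        // so capture using a duration different from the set one.
        startCapture(duration: normalStillDuration)
    } else {
        // Low-light capture mode active: capture for the set duration.
        startCapture(duration: requestedDuration)
    }
}
```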
  • the media capture user interface (e.g., 608) includes a representation of the media after the media is captured.
  • while capturing the media, the electronic device (e.g., 600) ceases to display the representation (e.g., 630) of the field-of-view of the one or more cameras.
  • the representation is not displayed at all while capturing media when low-light conditions are met.
  • the representation is not displayed for a predetermined period of time while capturing media when low-light conditions are met. Not displaying the representation at all while capturing media when low-light conditions are met, or not displaying the representation for the predetermined period of time while capturing media when low-light conditions are met, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the control (e.g., 1804) for adjusting the capture duration is displayed in a first color (e.g., black).
  • the electronic device (e.g., 600) displays the control (e.g., 1804) for adjusting the capture duration in a second color (e.g., red) that is different than the first color.
  • the electronic device displays a first animation (e.g., winding up and setting an egg timer) that moves a third indication of a third capture value (e.g., a predetermined starting value or wound-down value (e.g., zero)) to the second indication of the second capture duration (e.g., sliding an indication (e.g., a slider bar) across the slider (e.g., winding up from zero to the value)).
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device displays a second animation (e.g., an egg timer counting down) that moves the second indication of the second capture duration to the third indication of the third capture value (e.g., sliding an indication (e.g., a slider bar) across the slider) (e.g., winding down (e.g., counting down from the value to zero)), where a duration of the second animation corresponds to a duration of the second capture duration and is different from a duration of the first animation.
  • Displaying the second animation provides a user with visual feedback of the change(s) in the set capture value. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
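A hedged illustration of the two animations described above, assuming a SwiftUI-style view with invented names: the wind-up uses a short fixed duration, while the count-down runs for the full set capture duration:

```swift
import SwiftUI

// Hypothetical sketch: the wind-up animation has a fixed, short duration,
// while the count-down animation's duration equals the set capture duration.
struct DurationDial: View {
    @State private var progress: Double = 0  // 0 = zero mark, 1 = set duration
    let captureDuration: Double              // seconds, set via the control

    var body: some View {
        ProgressView(value: progress)        // stand-in for the slider indication
            .onAppear {
                // First animation: wind up from zero to the set value.
                withAnimation(.easeOut(duration: 0.3)) { progress = 1 }
            }
    }

    func beginCapture() {
        // Second animation: count down over the full capture duration.
        withAnimation(.linear(duration: captureDuration)) { progress = 0 }
    }
}
```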
  • while displaying the first animation, the electronic device (e.g., 600) provides a first tactile output (e.g., a haptic (e.g., a vibration) output).
  • while displaying the second animation, the electronic device (e.g., 600) provides a second tactile output (e.g., a haptic (e.g., a vibration) output).
  • the first tactile output can be a different type of tactile output than the second tactile output.
  • Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device (e.g., 600) captures the media based on the second capture duration.
  • the media is first media captured based on the second capture duration.
  • the electronic device (e.g., 600) receives a request to capture second media based on the second capture duration (e.g., a second selection (e.g., tap) of the affordance for requesting to capture media while capturing media).
  • in response to receiving the request to capture second media based on the second capture duration, the electronic device (e.g., 600) initiates capture of the second media based on the second capture duration.
  • after initiating capture of the second media based on the second capture duration, the electronic device (e.g., 600) receives a request to terminate capture of the second media before the second capture duration has elapsed. In some embodiments, in response to receiving the request to terminate capture of the second media, the electronic device (e.g., 600) terminates (e.g., stops, ceases) the capturing of the second media based on the second capture duration. In some embodiments, in response to receiving the request to terminate capture of the second media, the electronic device (e.g., 600) displays a representation of the second media that was captured before termination, based on visual information captured by the one or more cameras prior to receiving the request to terminate capture of the second media.
  • the second media is darker or has less contrast than the first media item because less visual information was captured than would have been captured if the capture of the second media item had not been terminated before the second capture duration elapsed, leading to a reduced ability to generate a clear image.
  • the media is first media captured based on the second capture duration.
  • the electronic device (e.g., 600) receives a request to capture third media based on the second capture duration (e.g., a second selection (e.g., tap) of the affordance for requesting to capture media while capturing media).
  • in response to receiving the request to capture third media based on the second capture duration, the electronic device (e.g., 600) initiates capture of the third media based on the second capture duration.
  • after initiating capture of the third media based on the second capture duration, in accordance with a determination that detected changes in the field-of-view of the one or more cameras (e.g., one or more cameras integrated into a housing of the electronic device) exceed movement criteria (in some embodiments, the user is moving the device above a threshold while capturing; in some embodiments, if the movement does not exceed the movement criteria, the electronic device will continue to capture the media without interruption), the electronic device (e.g., 600) terminates (e.g., stops, ceases) the capturing of the third media.
  • the electronic device displays a representation of the third media that was captured before termination, based on visual information captured by the one or more cameras prior to the termination of the capture of the third media.
  • the third media is darker or has less contrast than the first media item because less visual information was captured than would have been captured if the capture of the third media item had not been terminated before the second capture duration elapsed, leading to a reduced ability to generate a clear image.
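The movement criteria above might be modeled, purely as a sketch with an invented threshold and type name, as a gate that terminates an in-progress long capture:

```swift
// Hypothetical sketch of the movement criteria: if detected device movement
// exceeds a threshold during a long low-light capture, terminate early and
// composite whatever visual information was gathered (yielding a darker,
// lower-contrast image).
struct MotionGate {
    let rotationThreshold = 0.5            // rad/s; illustrative only
    private(set) var shouldTerminate = false

    mutating func ingest(rotationRate: Double) {
        if abs(rotationRate) > rotationThreshold {
            shouldTerminate = true         // stop capturing the third media
        }
    }
}

var gate = MotionGate()
gate.ingest(rotationRate: 0.8)             // large movement detected
print(gate.shouldTerminate)                // true: capture ends early
```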
  • in response to receiving the request to capture media, the electronic device (e.g., 600) replaces display of the affordance (e.g., 610) for requesting to capture media with display of a stop affordance (e.g., 1806) for terminating capture of the media.
  • Replacing display of the affordance for requesting to capture media with display of an affordance for terminating capture of media in response to receiving the request to capture media enables a user to quickly and easily access the affordance for terminating capture of media when such an affordance is likely to be needed.
  • Providing additional control options without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the stop affordance is displayed for an amount of time based on the capture duration.
  • after displaying the stop affordance (e.g., 1806) for the amount of time based on the capture duration, when the capture duration expires, the electronic device (e.g., 600) replaces display of the stop affordance with the affordance (e.g., 610) for requesting to capture media.
  • the electronic device displays a first representation of the first media that is captured at a first capture time (e.g., a point in time of the capture (e.g., at 2 seconds after starting the capturing of media)).
  • after displaying the first representation of the first media, the electronic device (e.g., 600) replaces display of the first representation of the first media with display of a second representation of the first media that is captured at a second capture time that is after the first capture time (e.g., a point in time of the capture (e.g., at 3 seconds after starting the capturing of media)), where the second representation is visually distinguished (e.g., brighter) from the first representation of the first media (e.g., displaying an increasingly bright, well-defined composite image as more image data is acquired and used to generate the composite image).
  • the replacing display of the first representation of the first media with display of the second representation of the first media occurs after a predetermined period of time.
  • the replacement (e.g., brightening) occurs at evenly spaced intervals (e.g., not smooth brightening).
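A speculative sketch of this evenly spaced replacement of the displayed representation; the interval, callback shape, and class name are assumptions:

```swift
import Foundation

// Hypothetical sketch: while a low-light capture runs, the displayed
// representation is replaced at evenly spaced intervals, each replacement
// brighter/better defined as more image data contributes to the composite.
final class CompositePreview {
    private var elapsed = 0.0
    private let interval = 1.0            // seconds between replacements; illustrative
    private let total: Double
    var onUpdate: ((Double) -> Void)?     // fraction of data gathered, 0...1

    init(captureDuration: Double) { self.total = captureDuration }

    func start() {
        Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] timer in
            guard let self else { timer.invalidate(); return }
            self.elapsed += self.interval
            self.onUpdate?(min(self.elapsed / self.total, 1))  // brighter preview
            if self.elapsed >= self.total { timer.invalidate() }
        }
    }
}
```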
  • displaying the camera user interface includes, in accordance with a determination that low light conditions have been met, the electronic device (e.g., 600) displaying, concurrently with the control (e.g., 1804) for adjusting capture duration, a low-light capture status indicator (e.g., 602c) that indicates that a status of a low-light capture mode is active.
  • By displaying the low-light capture status indicator concurrently with the control for adjusting capture duration in accordance with a determination that low light conditions have been met, the electronic device performs an operation when a set of conditions has been met without requiring further user input, which in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • while displaying the low-light capture status indicator, the electronic device (e.g., 600) receives a first selection (e.g., tap) of the low-light capture status indicator (e.g., 602c). In some embodiments, in response to receiving the first selection of the low-light capture status indicator (e.g., 602c), the electronic device (e.g., 600) ceases to display the control (e.g., 1804) for adjusting the capture duration while maintaining display of the low-light capture status indicator.
  • in response to receiving the first selection of the low-light capture status indicator (e.g., 602c), the electronic device (e.g., 600) updates an appearance of the low-light capture status indicator to indicate that the status of the low-light capture mode is inactive.
  • the control for adjusting capture duration ceases to be displayed (e.g., while low-light conditions are met).
  • while displaying the low-light capture status indicator that indicates that the low-light capture mode is inactive (e.g., in accordance with a determination that low-light conditions have been met), the electronic device (e.g., 600) receives a second selection (e.g., tap) of the low-light capture status indicator (e.g., 602c).
  • in response to receiving the second selection of the low-light capture status indicator (e.g., 602c), the electronic device (e.g., 600) redisplays the control (e.g., 1804) for adjusting the capture duration.
  • an indication of the capture value that was previously set is displayed on the control (e.g., the control remains set to the last value to which it was previously set).
  • in response to receiving the first selection of the low-light capture status indicator (e.g., 602c), the electronic device (e.g., 600) configures the electronic device to not perform a flash operation.
  • a flash operation does not occur (e.g., flash does not trigger) when capturing the media.
  • the low-light conditions include a condition that is met when the low-light capture status indicator has been selected.
  • the low-light capture status indicator is selected (e.g., the electronic device detects a gesture directed to the low-light status indicator) before the control for adjusting capture duration is displayed.
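The indicator-selection behavior described in the preceding bullets (a first tap dismisses the control and deactivates the mode; a second tap redisplays the control at its last value) reduces, in a hypothetical sketch with invented field names, to a small piece of state:

```swift
// Hypothetical sketch: state driven by taps on the low-light capture status
// indicator (e.g., 602c). Field names and the retained duration are invented.
struct LowLightUIState {
    var modeActive = true        // indicator appearance: active vs. inactive
    var controlVisible = true    // the capture-duration control (e.g., 1804)
    var lastDuration = 3.0       // seconds; the control keeps its last value

    mutating func indicatorTapped() {
        if modeActive {
            // First selection: hide the control, mark the mode inactive,
            // and configure the device not to perform a flash operation.
            modeActive = false
            controlVisible = false
        } else {
            // Second selection: redisplay the control, still set to
            // lastDuration, and reactivate the mode.
            modeActive = true
            controlVisible = true
        }
    }
}
```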
  • methods 700, 900, 1100, 1300, 1500, 1700, 1900, 2100, 2300, 2500, 2700, 2800, 3000, 3200, 3400, 3600, 3800, 4000, and 4200 optionally include one or more of the characteristics of the various methods described above with reference to method 2000. For brevity, these details are not repeated below.
  • FIGS. 21 A-21C are a flow diagram illustrating a method for providing camera indications using an electronic device in accordance with some embodiments.
  • Method 2100 is performed at a device (e.g., 100, 300, 500, 600) with a display device (e.g., a touch-sensitive display) and one or more cameras (e.g., one or more cameras (e.g., dual cameras, triple camera, quad cameras, etc.) on different sides of the electronic device (e.g., a front camera, a back camera)) and, optionally, a dedicated ambient light sensor.
  • Some operations in method 2100 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • the electronic device is a computer system.
  • the computer system is optionally in communication (e.g., wired communication, wireless communication) with a display generation component and with one or more input devices.
  • the display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection.
  • the display generation component is integrated with the computer system.
  • the display generation component is separate from the computer system.
  • the one or more input devices are configured to receive input, such as a touch-sensitive surface receiving user input.
  • the one or more input devices are integrated with the computer system.
  • the one or more input devices are separate from the computer system.
  • the computer system can transmit, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content (e.g., using a display device) and can receive, via a wired or wireless connection, input from the one or more input devices.
  • method 2100 provides an intuitive way for providing camera indications.
  • the method reduces the cognitive burden on a user for viewing camera indications, thereby creating a more efficient human-machine interface.
  • the electronic device (e.g., 600) displays (2102), via the display device, a camera user interface.
  • while displaying the camera user interface, the electronic device (e.g., 600) detects (2104), via one or more sensors of the electronic device (e.g., one or more ambient light sensors, one or more cameras), an amount of light (e.g., an amount of brightness (e.g., 20 lux, 5 lux)) in a field-of-view of the one or more cameras.
  • in accordance with a determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria (e.g., the low-light environment criteria include a criterion that is satisfied when the amount of light in the field-of-view of the one or more cameras is in a predetermined range (e.g., between 20-0 lux)), the electronic device (e.g., 600) concurrently displays (2108), in the camera user interface, a flash status indicator (2110) (e.g., 602a) (e.g., a flash mode affordance (e.g., a selectable user interface object)) that indicates a status of a flash operation, and a low-light capture status indicator (e.g., 602c) that indicates a status of a low-light capture mode.
  • Displaying the flash status indicator in accordance with a determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria provides a user with feedback about the detected amount of light and the resulting flash setting.
  • Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the low-light capture status indicator corresponds to an option to operate the electronic device (e.g., 600) in a mode (e.g., low-light environment mode) or in a way that was not previously selectable (e.g., not readily available (e.g., requiring more than one input to select) or displayed) on the camera user interface (e.g., 608).
  • the electronic device (e.g., 600) maintains display of the low-light capture status indicator (e.g., 602c).
  • the electronic device does not maintain display of the low-light capture status indicator (e.g., 602c), or ceases to display the low-light indicator, even if the light detected in the image is below the predetermined threshold.
  • one or more of the flash status indicator (e.g., 602a) or the low-light capture status indicator (e.g., 602c) will indicate that the status of its respective mode is active (e.g., displayed as a color (e.g., green, yellow, blue)) or inactive (e.g., displayed as a color (e.g., grayed-out, red, transparent)).
  • the flash operation criteria include a criterion that is satisfied when a flash setting is set to automatically determine whether the flash operation is set to active or inactive (e.g., flash setting is set to auto)
  • the flash status indicator (e.g., 602a) indicates that the status of the flash operation (e.g., the device will use additional light from a light source (e.g., a light source included in the device) while capturing media) is active (e.g., active ("on"), inactive ("off")).
  • the flash status indicator indicating that the status of the flash operation is active, in accordance with the determination that the amount of light in the field-of-view of the one or more cameras satisfies low-light environment criteria and flash operation criteria are met, informs a user of the current setting of the flash operation and the amount of light in the environment.
  • Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the flash operation criteria include a criterion that is satisfied when a flash setting is set to automatically determine whether the flash operation is set to active or inactive (e.g., flash setting is set to auto)
  • the low-light capture indicator (e.g., 602c) indicates that the status of the low-light capture mode is inactive (e.g., active ("on"), inactive ("off")).
  • when the flash status indicator indicates that the status of the flash operation (e.g., the operability that a flash will potentially occur when capturing media) is active, the low-light capture indicator indicates that the status of the low-light capture mode is inactive.
  • when the flash status indicator indicates that the status of the flash operation is inactive, the low-light capture indicator indicates that the status of the low-light capture mode is active.
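Taken together, the determinations above might be sketched as a single function from ambient light and flash setting to indicator state; the 20-lux cutoff comes from the example range above, and everything else (names, the mutual-exclusion rule as coded) is an assumption:

```swift
// Hypothetical sketch of the indicator logic described above, assuming the
// illustrative 0-20 lux low-light range and a three-way flash setting.
enum FlashSetting { case on, off, auto }

struct IndicatorState {
    var showFlashIndicator = false
    var flashActive = false
    var showLowLightIndicator = false
    var lowLightActive = false
}

func indicators(lux: Double, flash: FlashSetting) -> IndicatorState {
    var state = IndicatorState()
    guard lux <= 20 else { return state }   // low-light environment criteria
    state.showFlashIndicator = true
    state.showLowLightIndicator = true
    switch flash {
    case .on:
        state.flashActive = true
    case .off:
        state.lowLightActive = true
    case .auto:
        // In this sketch the modes are mutually exclusive: when the flash
        // operation resolves to active, low-light capture mode is inactive.
        state.flashActive = true
        state.lowLightActive = false
    }
    return state
}
```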

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)
  • Camera Bodies And Camera Details Or Accessories (AREA)
  • Stroboscope Apparatuses (AREA)
  • Exposure Control For Cameras (AREA)
  • Details Of Cameras Including Film Mechanisms (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Solid-Sorbent Or Filter-Aiding Compositions (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
EP20728854.9A 2019-05-06 2020-05-06 User interfaces for capturing and managing visual media Pending EP3966676A2 (en)

Applications Claiming Priority (19)

Application Number Priority Date Filing Date Title
US201962844110P 2019-05-06 2019-05-06
US201962856036P 2019-06-01 2019-06-01
US201962897968P 2019-09-09 2019-09-09
US16/582,595 US10674072B1 (en) 2019-05-06 2019-09-25 User interfaces for capturing and managing visual media
US16/583,020 US10645294B1 (en) 2019-05-06 2019-09-25 User interfaces for capturing and managing visual media
DKPA201970592A DK201970592A1 (en) 2019-05-06 2019-09-26 User interfaces for capturing and managing visual media
DKPA201970601A DK180452B1 (en) 2019-05-06 2019-09-26 USER INTERFACES FOR RECEIVING AND HANDLING VISUAL MEDIA
DKPA201970595 2019-09-26
DKPA201970593A DK180685B1 (en) 2019-05-06 2019-09-26 USER INTERFACES FOR RECEIVING AND HANDLING VISUAL MEDIA
DKPA201970600 2019-09-26
DKPA201970603A DK180679B1 (en) 2019-05-06 2019-09-26 USER INTERFACES FOR RECEIVING AND HANDLING VISUAL MEDIA
US16/584,044 US10735642B1 (en) 2019-05-06 2019-09-26 User interfaces for capturing and managing visual media
US16/584,100 US10735643B1 (en) 2019-05-06 2019-09-26 User interfaces for capturing and managing visual media
US16/584,693 US10791273B1 (en) 2019-05-06 2019-09-26 User interfaces for capturing and managing visual media
US16/586,344 US10652470B1 (en) 2019-05-06 2019-09-27 User interfaces for capturing and managing visual media
DKPA201970605 2019-09-27
US16/586,314 US10681282B1 (en) 2019-05-06 2019-09-27 User interfaces for capturing and managing visual media
US202063020462P 2020-05-05 2020-05-05
PCT/US2020/031643 WO2020227386A2 (en) 2019-05-06 2020-05-06 User interfaces for capturing and managing visual media

Publications (1)

Publication Number Publication Date
EP3966676A2 true EP3966676A2 (en) 2022-03-16

Family

ID=74688824

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20728854.9A Pending EP3966676A2 (en) 2019-05-06 2020-05-06 User interfaces for capturing and managing visual media

Country Status (5)

Country Link
EP (1) EP3966676A2 (zh)
JP (7) JP6854049B2 (zh)
KR (4) KR20230015526A (zh)
CN (3) CN112887586B (zh)
AU (4) AU2022200966B2 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117015976A (zh) * 2021-03-26 2023-11-07 索尼集团公司 成像装置、成像装置控制方法和程序
CN115131222A (zh) * 2021-03-29 2022-09-30 华为技术有限公司 一种图像处理方法以及相关设备
US20240314426A1 (en) * 2021-12-10 2024-09-19 Petnow Inc. Electronic apparatus for obtaining biometric information of companion animal, and operation method thereof
KR102623605B1 (ko) * 2021-12-10 2024-01-11 주식회사 펫나우 반려 동물의 생체 정보를 취득하는 전자 장치 및 그 동작 방법
CN116437193A (zh) * 2021-12-31 2023-07-14 荣耀终端有限公司 电子设备的控制方法及电子设备
CN114615480B (zh) * 2022-03-11 2024-07-02 峰米(重庆)创新科技有限公司 投影画面调整方法、装置、设备、存储介质和程序产品
CN116939354A (zh) * 2022-03-30 2023-10-24 北京字跳网络技术有限公司 相机功能页面切换方法、装置、电子设备及存储介质
CN115100839B (zh) * 2022-07-27 2022-11-01 苏州琅日晴传媒科技有限公司 一种监控视频测量数据分析安全预警系统
CN117768772A (zh) * 2022-09-16 2024-03-26 荣耀终端有限公司 相机应用界面的交互方法及装置
CN115470153B (zh) * 2022-11-14 2023-03-24 成都安易迅科技有限公司 智能终端系统ui的稳定流畅度评测方法和系统、设备

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5557358A (en) * 1991-10-11 1996-09-17 Minolta Camera Kabushiki Kaisha Camera having an electronic viewfinder for displaying an object image under different photographic conditions
JP4342926B2 (ja) * 2003-12-24 2009-10-14 三菱電機株式会社 画像入力方法および画像入力装置
JP4446787B2 (ja) 2004-04-21 2010-04-07 富士フイルム株式会社 撮像装置、および表示制御方法
JP2006332809A (ja) 2005-05-23 2006-12-07 Fujifilm Holdings Corp 撮像装置
JP4483841B2 (ja) * 2006-09-06 2010-06-16 カシオ計算機株式会社 撮像装置
JP2008116823A (ja) 2006-11-07 2008-05-22 Nikon Corp カメラ
JP5039786B2 (ja) 2007-07-23 2012-10-03 パナソニック株式会社 撮像装置
JP2009246468A (ja) 2008-03-28 2009-10-22 Fujifilm Corp 撮影装置及び撮影装置の制御方法
JP4980982B2 (ja) 2008-05-09 2012-07-18 富士フイルム株式会社 撮像装置、撮像方法、合焦制御方法及びプログラム
JP5262928B2 (ja) 2009-02-13 2013-08-14 富士通株式会社 撮像装置、携帯端末装置および合焦機構制御方法
JP4870218B2 (ja) * 2010-02-26 2012-02-08 オリンパス株式会社 撮像装置
US8885978B2 (en) * 2010-07-05 2014-11-11 Apple Inc. Operating a device to capture high dynamic range images
KR101700363B1 (ko) * 2010-09-08 2017-01-26 삼성전자주식회사 적절한 밝기를 갖는 입체 영상을 생성하는 디지털 영상 촬영 장치 및 이의 제어 방법
KR101674959B1 (ko) 2010-11-02 2016-11-10 엘지전자 주식회사 이동 단말기 및 이것의 영상 촬영 제어 방법
KR101710631B1 (ko) 2010-12-23 2017-03-08 삼성전자주식회사 손 떨림 보정 모듈을 구비하는 디지털 영상 촬영 장치 및 이의 제어 방법
JP5717453B2 (ja) 2011-01-14 2015-05-13 キヤノン株式会社 撮像装置及び撮像装置の制御方法
KR101984921B1 (ko) * 2012-10-18 2019-05-31 엘지전자 주식회사 휴대 단말기의 동작 방법
US9264630B2 (en) * 2013-01-04 2016-02-16 Nokia Technologies Oy Method and apparatus for creating exposure effects using an optical image stabilizing device
JP6034740B2 (ja) * 2013-04-18 2016-11-30 オリンパス株式会社 撮像装置および撮像方法
KR20150014290A (ko) * 2013-07-29 2015-02-06 엘지전자 주식회사 영상표시장치 및 영상표시장치 동작방법
US9712756B2 (en) 2013-08-21 2017-07-18 Canon Kabushiki Kaisha Image capturing apparatus and control method thereof
JP6234151B2 (ja) * 2013-10-09 2017-11-22 キヤノン株式会社 撮像装置
CN105829945B (zh) * 2013-10-18 2019-04-09 光实验室股份有限公司 用于实施和/或使用摄影机设备的方法和装置
US10074158B2 (en) * 2014-07-08 2018-09-11 Qualcomm Incorporated Systems and methods for stereo depth estimation using global minimization and depth interpolation
KR102145542B1 (ko) * 2014-08-14 2020-08-18 삼성전자주식회사 촬영 장치, 복수의 촬영 장치를 이용하여 촬영하는 촬영 시스템 및 그 촬영 방법
JP3194297U (ja) * 2014-08-15 2014-11-13 リープ モーション, インコーポレーテッドLeap Motion, Inc. 自動車用及び産業用のモーション感知制御装置
JP2016066978A (ja) * 2014-09-26 2016-04-28 キヤノンマーケティングジャパン株式会社 撮像装置、その制御方法とプログラム
US9712751B2 (en) * 2015-01-22 2017-07-18 Apple Inc. Camera field of view effects based on device orientation and scene content
US9979890B2 (en) * 2015-04-23 2018-05-22 Apple Inc. Digital viewfinder user interface for multiple cameras
US9652125B2 (en) * 2015-06-18 2017-05-16 Apple Inc. Device, method, and graphical user interface for navigating media content
JP6546474B2 (ja) * 2015-07-31 2019-07-17 キヤノン株式会社 撮像装置およびその制御方法
US10334154B2 (en) 2015-08-31 2019-06-25 Snap Inc. Automated adjustment of digital image capture parameters
WO2017051605A1 (ja) 2015-09-25 2017-03-30 富士フイルム株式会社 撮像システム及び撮像制御方法
KR20170123125A (ko) * 2016-04-28 2017-11-07 엘지전자 주식회사 이동단말기 및 그 제어방법
US9854156B1 (en) * 2016-06-12 2017-12-26 Apple Inc. User interface for camera effects
KR102257353B1 (ko) * 2016-09-23 2021-06-01 애플 인크. 향상된 사용자 상호작용들에 대한 이미지 데이터
US10432874B2 (en) * 2016-11-01 2019-10-01 Snap Inc. Systems and methods for fast video capture and sensor adjustment
KR20180095331A (ko) * 2017-02-17 2018-08-27 엘지전자 주식회사 이동단말기 및 그 제어 방법
CN108391053A (zh) * 2018-03-16 2018-08-10 维沃移动通信有限公司 一种拍摄控制方法及终端
CN108668083B (zh) * 2018-07-24 2020-09-01 维沃移动通信有限公司 一种拍照方法及终端

Also Published As

Publication number Publication date
JP2022528011A (ja) 2022-06-07
JP2024105236A (ja) 2024-08-06
JP6924319B2 (ja) 2021-08-25
JP6854049B2 (ja) 2021-04-07
JP6924886B2 (ja) 2021-08-25
KR20230015526A (ko) 2023-01-31
KR102368385B1 (ko) 2022-02-25
JP2021108463A (ja) 2021-07-29
JP2022188060A (ja) 2022-12-20
AU2023282230A1 (en) 2024-01-18
CN113811855A (zh) 2021-12-17
JP2021051751A (ja) 2021-04-01
KR20210145278A (ko) 2021-12-01
JP2021040300A (ja) 2021-03-11
KR102492067B1 (ko) 2023-01-26
JP6929478B2 (ja) 2021-09-01
CN112887586A (zh) 2021-06-01
CN112887586B (zh) 2022-05-10
JP7467553B2 (ja) 2024-04-15
AU2022200966A1 (en) 2022-03-03
AU2022200966B2 (en) 2022-03-10
JP7171947B2 (ja) 2022-11-15
AU2022202377A1 (en) 2022-05-05
KR102419105B1 (ko) 2022-07-12
KR20210020987A (ko) 2021-02-24
CN115658198A (zh) 2023-01-31
JP2021051752A (ja) 2021-04-01
AU2022221466B2 (en) 2023-09-14
AU2022202377B2 (en) 2022-05-26
KR20220102664A (ko) 2022-07-20

Similar Documents

Publication Publication Date Title
AU2020267151B8 (en) User interfaces for capturing and managing visual media
DK180679B1 (en) USER INTERFACES FOR RECEIVING AND HANDLING VISUAL MEDIA
US20220053142A1 (en) User interfaces for capturing and managing visual media
US20220294992A1 (en) User interfaces for capturing and managing visual media
AU2022202377B2 (en) User interfaces for capturing and managing visual media

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20211101

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RIN1 Information on inventor provided before grant (corrected)

Inventor name: JENSEN, RASMUS R.

Inventor name: BRASKET, JEFFREY A.

Inventor name: SOUZA DOS SANTOS, ANDRE

Inventor name: SORRENTINO III, WILLIAM A.

Inventor name: PRESTON, DANIEL TRENT

Inventor name: PAUL, GRANT

Inventor name: MCCORMACK, JONATHAN

Inventor name: LUPINETTI, NICHOLAS

Inventor name: HUBEL, PAUL

Inventor name: HANKEY, MARTHA E.

Inventor name: GIRLING, LUKAS ROBERT TOM

Inventor name: FEDERIGHI, CRAIG M.

Inventor name: DYE, ALAN C.

Inventor name: DESHPANDE, ALOK

Inventor name: BROUGHTON, LEE S.

Inventor name: MANZARI, JOHNNIE B.

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Free format text: PREVIOUS MAIN CLASS: G06F0003048000

Ipc: H04N0023630000

PUAG Search results despatched under rule 164(2) epc together with communication from examining division

Free format text: ORIGINAL CODE: 0009017

17Q First examination report despatched

Effective date: 20240408

17Q First examination report despatched

Effective date: 20240507

B565 Issuance of search results under rule 164(2) epc

Effective date: 20240507

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 23/60 20230101ALI20240502BHEP

Ipc: H04N 23/951 20230101ALI20240502BHEP

Ipc: H04N 23/80 20230101ALI20240502BHEP

Ipc: H04N 23/74 20230101ALI20240502BHEP

Ipc: H04N 23/73 20230101ALI20240502BHEP

Ipc: H04N 23/71 20230101ALI20240502BHEP

Ipc: H04N 23/69 20230101ALI20240502BHEP

Ipc: H04N 23/68 20230101ALI20240502BHEP

Ipc: H04N 23/67 20230101ALI20240502BHEP

Ipc: H04N 23/667 20230101ALI20240502BHEP

Ipc: H04N 23/62 20230101ALI20240502BHEP

Ipc: H04N 23/45 20230101ALI20240502BHEP

Ipc: G06F 3/048 20130101ALI20240502BHEP

Ipc: G06F 3/04845 20220101ALI20240502BHEP

Ipc: H04N 23/63 20230101AFI20240502BHEP