WO2020055613A1 - User interfaces for simulated depth effects - Google Patents

User interfaces for simulated depth effects

Info

Publication number
WO2020055613A1
Authority
WO
WIPO (PCT)
Prior art keywords
representation
image data
display
displaying
depth effect
Prior art date
Application number
PCT/US2019/049101
Other languages
French (fr)
Inventor
Behkish J. MANZARI
Original Assignee
Apple Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201870623A (DK201870623A1)
Application filed by Apple Inc.
Priority to KR1020217006145A (KR102534596B1)
Priority to JP2021510849A (JP7090210B2)
Priority to EP19769316.1A (EP3827334A1)
Priority to AU2019338180A (AU2019338180B2)
Priority to CN201980056883.9A (CN112654956A)
Priority to KR1020237016569A (KR20230071201A)
Publication of WO2020055613A1
Priority to JP2022095182A (JP7450664B2)
Priority to AU2022228121A (AU2022228121B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1626Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1637Details related to the display arrangement, including those related to the mounting of the display in the housing
    • G06F1/1643Details related to the display arrangement, including those related to the mounting of the display in the housing the display being associated to a digitizer, e.g. laptops that can be used as penpads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1656Details related to functional adaptations of the enclosure, e.g. to provide protection against EMI, shock, water, or to host detachable peripherals like a mouse or removable expansions units like PCMCIA cards, or to provide access to internal components for maintenance or to removable storage supports like CDs or DVDs, or to mechanically mount accessories
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/634Warning indications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04806Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226Determination of depth image, e.g. for foreground/background separation

Definitions

  • the present disclosure relates generally to computer user interfaces, and more specifically to techniques for managing user interfaces for simulated depth effects.
  • Some techniques for simulating depth effects using electronic devices are generally cumbersome and inefficient.
  • some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes.
  • the present technique provides electronic devices with faster, more efficient methods and interfaces for simulated depth effects.
  • Such methods and interfaces optionally complement or replace other methods for simulated depth effects.
  • Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface.
  • Such methods and interfaces conserve power and increase the time between battery charges.
  • Such methods and interfaces also enable easy application and editing of applied depth effects using only the electronic device without the aid of another device, thereby enhancing user efficiency and convenience.
  • a method performed at an electronic device with a display and one or more input devices comprises: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently-selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input directed to the adjustable slider: moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated
  • a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently-selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider
  • a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently-selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response
  • an electronic device comprises a display, one or more input devices, one or more processors, and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently-selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input
  • an electronic device comprises a display; one or more input devices; means for displaying, on the display, a representation of image data; means, while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, for detecting, via the one or more input devices, a first input; and means, in response to detecting the first input, for displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently-selected simulated depth effect value; means, while displaying the adjustable slider, for detecting, via the one or more input devices, an input directed to the adjustable slider; and means, in response to detecting the input directed to the adjustable slider, for: moving the adjustable slider to indicate that a second value, of the
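The slider behavior recited in the preceding paragraphs can be pictured with a small sketch. The Swift snippet below is illustrative only and not taken from the disclosure: the f-number values, the default selection, and the updateRepresentation function are assumptions standing in for the option indicators, the selection indicator, and the re-rendering of the representation of image data.

```swift
/// Hypothetical model of the adjustable slider: a row of option indicators
/// (selectable simulated-depth-effect values, here f-numbers) and a selection
/// indicator marking the currently selected value.
struct DepthEffectSlider {
    /// Selectable simulated depth-effect values (smaller f-number = stronger blur).
    let optionValues: [Double] = [1.4, 1.8, 2.0, 2.8, 4.0, 5.6, 8.0, 16.0]
    /// Index of the currently selected value (the selection indicator position).
    var selectedIndex = 4   // e.g., f/4.0 selected by default

    var selectedValue: Double { optionValues[selectedIndex] }

    /// Handles an input directed to the slider (e.g., a horizontal drag),
    /// moving the selection indicator and reporting whether the value changed.
    mutating func handleDrag(byOptions delta: Int) -> Bool {
        let newIndex = min(max(selectedIndex + delta, 0), optionValues.count - 1)
        guard newIndex != selectedIndex else { return false }
        selectedIndex = newIndex
        return true
    }
}

/// Hypothetical hook that would re-render the representation of image data
/// with the simulated depth effect as modified by the newly selected value.
func updateRepresentation(for fNumber: Double) {
    print("Re-rendering preview with simulated depth effect at f/\(fNumber)")
}

var slider = DepthEffectSlider()
if slider.handleDrag(byOptions: -2) {                 // user drags toward wider apertures
    updateRepresentation(for: slider.selectedValue)   // prints f/2.0
}
```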
  • a method performed at an electronic device with a display and one or more input devices comprises: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
  • a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the
  • a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined
  • an electronic device comprises a display, one or more input devices, one or more processors, and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance
  • an electronic device comprises a display; one or more input devices; means for receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and means, in response to receiving the request to apply the simulated depth effect to the representation of image data, for displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
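As a rough illustration of the distortion recited above, the sketch below computes a per-pixel blur radius that depends both on depth and on the distance from a predefined portion of the image (assumed here to be the center), so two pixels at the same depth are distorted differently. The factors and normalization are illustrative assumptions, not the disclosed algorithm.

```swift
/// A pixel of the representation of image data, with normalized position and depth.
struct PixelSample {
    let x, y: Double       // position in the representation, normalized 0...1
    let depth: Double      // normalized depth, 0 (far) ... 1 (near the viewpoint)
}

/// Returns a simulated blur radius for one pixel. Two pixels with the same
/// depth can receive different radii because the result also depends on the
/// pixel's distance from the assumed predefined portion (the image center).
func simulatedBlurRadius(for pixel: PixelSample,
                         focalDepth: Double,
                         fNumber: Double) -> Double {
    // Base blur grows with distance from the in-focus depth plane and with
    // a wider simulated aperture (smaller f-number).
    let defocus = abs(pixel.depth - focalDepth)
    let apertureFactor = 4.0 / fNumber
    // Distance from the assumed predefined portion at (0.5, 0.5).
    let dx = pixel.x - 0.5, dy = pixel.y - 0.5
    let distanceFromCenter = (dx * dx + dy * dy).squareRoot()
    // Portions farther from the center are distorted more strongly.
    let positionFactor = 1.0 + distanceFromCenter
    return defocus * apertureFactor * positionFactor
}

let a = PixelSample(x: 0.5, y: 0.5, depth: 0.2)    // same depth, near the center
let b = PixelSample(x: 0.95, y: 0.1, depth: 0.2)   // same depth, near a corner
print(simulatedBlurRadius(for: a, focalDepth: 0.8, fNumber: 2.0))  // smaller radius
print(simulatedBlurRadius(for: b, focalDepth: 0.8, fNumber: 2.0))  // larger radius
```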
  • a method performed at an electronic device with a display and one or more sensors, including one or more cameras comprises: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
  • a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more sensors, including one or more cameras, the one or more programs including instructions for: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
  • a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more sensors, including one or more cameras, the one or more programs including instructions for: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
  • an electronic device comprises a display, one or more sensors, including one or more cameras, one or more processors, and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
  • an electronic device comprises a display; one or more sensors, including one or more cameras; means, while displaying, on the display, a user interface of a camera application, for detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and means, in response to detecting the interference external to the electronic device, for: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
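The conditional notification recited above can be sketched as follows. The first criteria used here (no notification shown within a minimum interval) and the handler type are illustrative assumptions; this passage does not specify what the criteria is.

```swift
import Foundation

/// Hypothetical sketch: when external interference that would impair a camera
/// function is detected, the camera's operation mode is changed, and a
/// notification is displayed only if the assumed first criteria is satisfied.
struct CameraInterferenceHandler {
    var fallbackModeActive = false
    var lastNotificationDate: Date?
    let minimumNotificationInterval: TimeInterval = 60 * 60   // assumed threshold

    mutating func handleExternalInterference(now: Date = Date()) -> String? {
        // Change the operation mode to reduce the impact of the interference.
        fallbackModeActive = true

        // First criteria (assumed): no notification shown within the interval.
        let criteriaSatisfied = lastNotificationDate.map {
            now.timeIntervalSince($0) >= minimumNotificationInterval
        } ?? true

        guard criteriaSatisfied else { return nil }   // forgo displaying the notification
        lastNotificationDate = now
        return "Camera mode changed to reduce interference with depth capture."
    }
}

var handler = CameraInterferenceHandler()
print(handler.handleExternalInterference() ?? "(no notification)")   // shows the notification
print(handler.handleExternalInterference() ?? "(no notification)")   // forgoes it
```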
  • Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
  • devices are provided with faster, more efficient methods and interfaces for adjusting image effects, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices.
  • Such methods and interfaces may complement or replace other methods for adjusting image effects.
  • FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display, in accordance with some embodiments.
  • FIG. 1B is a block diagram illustrating exemplary components for event handling, in accordance with some embodiments.
  • FIG. 2 illustrates a portable multifunction device having a touch screen, in accordance with some embodiments.
  • FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments.
  • FIG. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device, in accordance with some embodiments.
  • FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface that is separate from the display, in accordance with some embodiments.
  • FIG. 5A illustrates a personal electronic device, in accordance with some embodiments.
  • FIG. 5B is a block diagram illustrating a personal electronic device, in accordance with some embodiments.
  • FIGS. 6A-6T illustrate exemplary user interfaces for adjusting a simulated depth effect, in accordance with some embodiments.
  • FIGS. 7A-7B are a flow diagram illustrating a method for managing user interfaces for adjusting a simulated depth effect, in accordance with some embodiments.
  • FIGS. 8A-8R illustrate exemplary user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments.
  • FIGS. 9A-9B are a flow diagram illustrating a method for managing user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments.
  • FIGS. 10A-10F illustrate exemplary user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments.
  • FIG. 11 is a flow diagram illustrating a method for managing user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments.
  • FIGS. 1A-1B, 2, 3, 4A-4B, and 5A-5B provide a description of exemplary devices for performing the techniques for managing event notifications.
  • FIGS. 6A-6T illustrate exemplary user interfaces for adjusting a simulated depth effect, in accordance with some embodiments.
  • FIGS. 7A-7B are a flow diagram illustrating a method for managing user interfaces for adjusting a simulated depth effect, in accordance with some embodiments.
  • the user interfaces in FIGS. 6A-6T are used to illustrate the processes described below, including the processes in FIGS. 7A-7B.
  • FIGS. 8A-8R illustrate exemplary user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments.
  • FIGS. 9A-9B are a flow diagram illustrating a method for managing user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments.
  • the user interfaces in FIGS. 8A-8R are used to illustrate the processes described below, including the processes in FIGS. 9A-9B.
  • FIGS. 10A-10F illustrate exemplary user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments.
  • FIG. 11 is a flow diagram illustrating a method for managing user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments.
  • FIGS. 10A-10F are used to illustrate the processes described below, including the processes in FIG. 11.
  • the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context.
  • the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
  • the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions.
  • portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California.
  • Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
  • an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
  • the device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • the various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface.
  • One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application.
  • a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
  • FIG. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments.
  • Touch-sensitive display 112 is sometimes called a “touch screen” for convenience and is sometimes known as or called a “touch-sensitive display system.”
  • Device 100 includes memory 102 (which optionally includes one or more computer-readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input control devices 116, and external port 124.
  • Device 100 optionally includes one or more optical sensors 164.
  • Device 100 optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100).
  • Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
  • the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface.
  • the intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors.
  • one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface.
  • force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact.
  • a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface.
  • the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface.
  • the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements).
  • the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure).
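A minimal Swift sketch of the weighted-average estimate and threshold comparison described above; the sensor readings, weights, and threshold value are illustrative assumptions rather than values from the disclosure.

```swift
/// Combines per-sensor force readings into an estimated contact force using a
/// weighted average, as one way to merge measurements from multiple sensors.
func estimatedContactForce(readings: [Double], weights: [Double]) -> Double {
    precondition(readings.count == weights.count && !readings.isEmpty)
    let weightedSum = zip(readings, weights).map(*).reduce(0, +)
    return weightedSum / weights.reduce(0, +)
}

let readings = [0.32, 0.41, 0.28]   // per-sensor force, arbitrary units
let weights  = [1.0, 2.0, 1.0]      // e.g., favor the sensor nearest the contact
let intensityThreshold = 0.35       // threshold in the same substitute units

// Compare the estimate against the intensity threshold to classify the contact.
let force = estimatedContactForce(readings: readings, weights: weights)
print(force > intensityThreshold ? "deep press" : "light press")   // "deep press"
```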
  • the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user’s sense of touch.
  • the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device.
  • movement of a touch-sensitive surface is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button.
  • a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user’s movements.
  • movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface.
  • when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
  • device 100 is only one example of a portable computing device.
  • Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices.
  • Memory controller 122 optionally controls access to memory 102 by other components of device 100.
  • Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102.
  • the one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data.
  • peripherals interface 118, CPU 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
  • RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals.
  • RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals.
  • RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth.
  • RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication.
  • the RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio.
  • the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS).
  • Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100.
  • Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111.
  • Speaker 111 converts the electrical signal to human-audible sound waves.
  • Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves.
  • Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118.
  • audio circuitry 110 also includes a headset jack (e.g., 212, FIG. 2).
  • the headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).
  • I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118.
  • I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, depth camera controller 169, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices.
  • the one or more input controllers 160 receive/send electrical signals from/to other input control devices 116.
  • the other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth.
  • input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse.
  • the one or more buttons optionally include an up/down button for volume control of speaker 111 and/or microphone 113.
  • the one or more buttons optionally include a push button (e.g., 206, FIG. 2).
  • a quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. Patent Application 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed December 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety.
  • a longer press of the push button (e.g., 206) optionally turns power to device 100 on or off.
  • the functionality of one or more of the buttons are, optionally, user-customizable.
  • Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
  • Touch-sensitive display 112 provides an input interface and an output interface between the device and a user.
  • Display controller 156 receives and/or sends electrical signals from/to touch screen 112.
  • Touch screen 112 displays visual output to the user.
  • the visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects.
  • Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact.
  • Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112.
  • a point of contact between touch screen 112 and the user corresponds to a finger of the user.
  • Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments.
  • Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112.
  • projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California.
  • a touch-sensitive display in some embodiments of touch screen 112 is, optionally, analogous to the multi-touch sensitive touchpads described in the following U.S. Patents:
  • touch screen 112 displays visual output from device 100, whereas touch-sensitive touchpads do not provide visual output.
  • a touch-sensitive display in some embodiments of touch screen 112 is described in the following applications: (1) U.S. Patent Application No. 11/381,313,“Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. Patent Application No. 10/840,862,“Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. Patent Application No. 10/903,964,“Gestures For Touch Sensitive Input Devices,” filed July 30, 2004; (4) U.S. Patent Application No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed January 31, 2005; (5) U.S. Patent
  • Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi.
  • the user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth.
  • the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen.
  • the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
  • in addition to the touch screen, device 100 optionally includes a touchpad for activating or deactivating particular functions.
  • the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output.
  • the touchpad is, optionally, a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
  • Device 100 also includes power system 162 for powering the various components.
  • Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
  • Device 100 optionally also includes one or more optical sensors 164.
  • FIG. 1A shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106.
  • Optical sensor 164 optionally includes charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) phototransistors.
  • optical sensor 164 optionally captures still images or video.
  • an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device so that the touch screen display is enabled for use as a viewfinder for still and/or video image acquisition.
  • an optical sensor is located on the front of the device so that the user’s image is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display.
  • Device 100 optionally also includes one or more depth camera sensors 175.
  • FIG. 1A shows a depth camera sensor coupled to depth camera controller 169 in I/O subsystem 106.
  • Depth camera sensor 175 receives data from the environment to create a three dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor).
  • in conjunction with imaging module 143 (also called a camera module), depth camera sensor 175 is optionally used to determine a depth map of different portions of an image captured by the imaging module 143.
  • a depth camera sensor is located on the front of device 100 so that the user’s image with depth information is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display and to capture selfies with depth map data.
  • the depth camera sensor 175 is located on the back of device, or on the back and the front of the device 100.
  • the position of depth camera sensor 175 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a depth camera sensor 175 is used along with the touch screen display for both video conferencing and still and/or video image acquisition.
  • a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor).
  • each depth pixel defines the position in the viewpoint’s z-axis where its corresponding two-dimensional pixel is located.
  • a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255).
  • the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor) in the “three dimensional” scene.
  • a depth map represents the distance between an object in a scene and the plane of the viewpoint.
  • the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face).
  • the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
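To make the 0-255 depth convention concrete, the following sketch maps per-pixel depth values to a blur radius for a simulated depth-of-field effect. It is illustrative only: the DepthMap type, the blurRadius function, and the linear depth-to-blur mapping are assumptions made for this example, not the method of this disclosure.

```swift
import Foundation

// Illustrative 8-bit depth map: 255 is nearest to the viewpoint, 0 is farthest.
struct DepthMap {
    let width: Int
    let height: Int
    let values: [UInt8]   // row-major, one depth value per two-dimensional pixel

    func depth(x: Int, y: Int) -> UInt8 { values[y * width + x] }
}

// Toy mapping from per-pixel depth to a blur radius for a simulated
// depth-of-field effect: pixels far from the focal plane receive more blur.
// A real implementation would derive this from a simulated f-number.
func blurRadius(forDepth depth: UInt8,
                focalPlaneDepth: UInt8,
                maximumBlur: Double) -> Double {
    let distanceFromFocalPlane = abs(Int(depth) - Int(focalPlaneDepth))
    return maximumBlur * Double(distanceFromFocalPlane) / 255.0
}

// Example: the subject (255) stays sharp while the background (40) is blurred most.
let map = DepthMap(width: 2, height: 2, values: [255, 200, 90, 40])
for y in 0..<map.height {
    for x in 0..<map.width {
        let radius = blurRadius(forDepth: map.depth(x: x, y: y),
                                focalPlaneDepth: 255,
                                maximumBlur: 12.0)
        print("pixel (\(x),\(y)) depth \(map.depth(x: x, y: y)) -> blur radius \(radius)")
    }
}
```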
  • Device 100 optionally also includes one or more contact intensity sensors 165.
  • FIG. 1A shows a contact intensity sensor coupled to intensity sensor controller 159 in I/O subsystem 106.
  • Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface).
  • Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment.
  • at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112).
  • at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
  • Device 100 optionally also includes one or more proximity sensors 166.
  • FIG. 1 A shows proximity sensor 166 coupled to peripherals interface 118.
  • proximity sensor 166 is, optionally, coupled to input controller 160 in I/O subsystem 106.
  • Proximity sensor 166 optionally performs as described in U.S. Patent Application Nos. 11/241,839, “Proximity Detector In Handheld Device”; 11/240,788, “Proximity Detector In Handheld Device”;
  • the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user’s ear (e.g., when the user is making a phone call).
  • Device 100 optionally also includes one or more tactile output generators 167.
  • FIG. 1 A shows a tactile output generator coupled to haptic feedback controller 161 in I/O subsystem 106.
  • Tactile output generator 167 optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device).
  • Tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that are capable of being sensed by a user of device 100.
  • At least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100).
  • At least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
  • Device 100 optionally also includes one or more accelerometers 168.
  • FIG. 1 A shows accelerometer 168 coupled to peripherals interface 118.
  • accelerometer 168 is, optionally, coupled to an input controller 160 in I/O subsystem 106.
  • Accelerometer 168 optionally performs as described in U.S. Patent Publication No. 20050190059,“Acceleration- based Theft Detection System for Portable Electronic Devices,” and U.S. Patent Publication No. 20060017692,“Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer,” both of which are incorporated by reference herein in their entirety.
  • information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers.
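As an illustration of how portrait or landscape display might be selected from accelerometer data, the sketch below infers an orientation from a gravity vector. The axis convention, the comparison used, and the function name are assumptions for this example, not the device's actual algorithm.

```swift
import Foundation

enum InterfaceOrientation { case portrait, landscape }

// Illustrative only: infer a display orientation from the gravity components
// reported by an accelerometer (x and y lying in the plane of the screen).
func orientation(fromGravityX x: Double, y: Double) -> InterfaceOrientation {
    // When the device is upright, gravity acts mostly along the y axis;
    // when it is turned on its side, gravity acts mostly along the x axis.
    return abs(y) >= abs(x) ? .portrait : .landscape
}

print(orientation(fromGravityX: -0.10, y: -0.98))  // portrait
print(orientation(fromGravityX: -0.95, y: 0.05))   // landscape
```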
  • Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
  • the software components stored in memory 102 (FIG. 1A) or memory 370 (FIG. 3) include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136.
  • Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display 112; sensor state, including information obtained from the device’s various sensors and input control devices 116; and location information concerning the device’s location and/or attitude.
  • Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
  • Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124.
  • External port 124 e.g., Universal Serial Bus (USB), FIREWIRE, etc.
  • the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
  • Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel).
  • Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch- sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact).
  • Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
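A minimal sketch of how speed and velocity might be derived from a series of contact data samples follows. The ContactSample type and the finite-difference estimate are assumptions made for illustration, not the contact/motion module's implementation.

```swift
import Foundation

// Illustrative tracking of a point of contact from a series of samples.
struct ContactSample {
    let x: Double              // position in points
    let y: Double
    let timestamp: TimeInterval
}

// Velocity (magnitude and direction) between two consecutive samples.
func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double) {
    let dt = max(b.timestamp - a.timestamp, 0.000001)   // guard against dt == 0
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
}

// Speed is the magnitude of the velocity vector.
func speed(from a: ContactSample, to b: ContactSample) -> Double {
    let v = velocity(from: a, to: b)
    return (v.dx * v.dx + v.dy * v.dy).squareRoot()
}

let samples = [ContactSample(x: 100, y: 300, timestamp: 0.000),
               ContactSample(x: 112, y: 296, timestamp: 0.016)]
print(speed(from: samples[0], to: samples[1]))   // points per second
```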
  • contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has“clicked” on an icon).
  • at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100).
  • a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware.
  • a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter).
  • Contact/motion module 130 optionally detects a gesture input by a user.
  • Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts).
  • a gesture is, optionally, detected by detecting a particular contact pattern.
  • detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon).
  • detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event.
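The tap and swipe definitions above can be pictured as contact-pattern matching over a sequence of finger events. The sketch below is illustrative only; the FingerEvent and Gesture types, the movement slop value, and the classifier are assumptions rather than the contact/motion module's actual logic.

```swift
import Foundation

// Illustrative contact-pattern matching for the two gestures described above.
enum FingerEvent {
    case down(x: Double, y: Double)
    case drag(x: Double, y: Double)
    case up(x: Double, y: Double)
}

enum Gesture { case tap, swipe, none }

func classify(_ events: [FingerEvent], slop: Double = 10.0) -> Gesture {
    guard case let .down(x: startX, y: startY)? = events.first,
          case let .up(x: endX, y: endY)? = events.last else { return .none }
    let moved = ((endX - startX) * (endX - startX)
               + (endY - startY) * (endY - startY)).squareRoot()
    let hasDrags = events.dropFirst().dropLast().contains {
        if case .drag = $0 { return true } else { return false }
    }
    // Finger-down followed by finger-up at (substantially) the same position: a tap.
    if !hasDrags && moved <= slop { return .tap }
    // Finger-down, one or more finger-dragging events, then finger-up: a swipe.
    if hasDrags { return .swipe }
    return .none
}

print(classify([.down(x: 50, y: 50), .up(x: 52, y: 51)]))                         // tap
print(classify([.down(x: 50, y: 50), .drag(x: 120, y: 52), .up(x: 200, y: 55)]))  // swipe
```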
  • Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed.
  • the term“graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.
  • graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code.
  • Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
  • Haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 167 to produce tactile outputs at one or more locations on device 100 in response to user interactions with device 100.
  • Text input module 134 which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
  • GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing; to camera 143 as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
  • Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
  • Contacts module 137 (sometimes called an address book or contact list);
  • Video conference module 139;
  • Camera module 143 for still and/or video images;
  • Image management module 144;
  • Calendar module 148;
  • Widget modules 149 which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
  • Widget creator module 150 for making user-created widgets 149-6;
  • Video and music player module 152, which merges video player module and music player module;
  • Map module 154;
  • Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
  • contacts module 137 is, optionally, used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, e-mail 140, or IM 141; and so forth.
  • telephone module 138 is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed.
  • the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies.
  • video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
  • e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions.
  • e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
  • the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony -based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages.
  • transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS).
  • instant messaging refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
  • workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
  • camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
  • image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
  • browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
  • calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
  • widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6).
  • a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file.
  • a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
  • the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).
  • search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
  • video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124).
  • device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
  • notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
  • map module 154 is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions.
  • online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264.
  • instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video.
  • Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein).
  • These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments.
  • video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152, FIG. 1A).
  • memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above.
  • device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad.
  • By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
  • the predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces.
  • the touchpad when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100.
  • a “menu button” is implemented using a touchpad.
  • the menu button is a physical push button or other physical input control device instead of a touchpad.
  • FIG. 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments.
  • memory 102 (FIG. 1 A) or 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
  • Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174.
  • application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing.
  • device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
  • application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
  • Event monitor 171 receives event information from peripherals interface 118.
  • Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112, as part of a multi-touch gesture).
  • Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110).
  • Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
  • event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
  • event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
  • Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display. Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application.
  • the lowest level view in which a touch is detected is, optionally, called the hit view.
  • the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
  • Hit view determination module 172 receives information related to sub-events of a touch-based gesture.
  • hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event).
  • the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
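A hit-view search of this kind can be sketched as a recursive walk of a view hierarchy that returns the deepest view containing the touch point. The View type and the search below are assumptions made for illustration; they stand in for the behavior attributed to hit view determination module 172 rather than reproducing it.

```swift
import Foundation

// Illustrative view hierarchy used only for this sketch.
final class View {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double)
    let subviews: [View]

    init(name: String,
         frame: (x: Double, y: Double, width: Double, height: Double),
         subviews: [View] = []) {
        self.name = name
        self.frame = frame
        self.subviews = subviews
    }

    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= frame.x && px < frame.x + frame.width &&
        py >= frame.y && py < frame.y + frame.height
    }
}

// Returns the lowest (deepest) view in the hierarchy that contains the point,
// i.e. the hit view for the initial sub-event of a touch.
func hitView(in root: View, x: Double, y: Double) -> View? {
    guard root.contains(x, y) else { return nil }
    for subview in root.subviews {
        if let deeper = hitView(in: subview, x: x, y: y) { return deeper }
    }
    return root
}

let button = View(name: "button", frame: (x: 20, y: 40, width: 100, height: 44))
let panel = View(name: "panel", frame: (x: 0, y: 0, width: 320, height: 200), subviews: [button])
let window = View(name: "window", frame: (x: 0, y: 0, width: 320, height: 480), subviews: [panel])
print(hitView(in: window, x: 30, y: 60)?.name ?? "none")   // "button"
```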
  • Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
  • Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver 182.
  • operating system 126 includes event sorter 170.
  • application 136-1 includes event sorter 170.
  • event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
  • application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application’s user interface.
  • Each application view 191 of the application 136-1 includes one or more event recognizers 180.
  • a respective application view 191 includes a plurality of event recognizers 180.
  • one or more of event recognizers 180 are part of a separate module, such as a user interface kit or a higher level object from which application 136-1 inherits methods and other properties.
  • a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170.
  • Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192.
  • one or more of the application views 191 include one or more respective event handlers 190.
  • one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
  • a respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information.
  • Event recognizer 180 includes event receiver 182 and event comparator 184.
  • event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
  • Event receiver 182 receives event information from event sorter 170.
  • the event information includes information about a sub-event, for example, a touch or a touch movement.
  • the event information also includes additional information, such as location of the sub-event.
  • the event information optionally also includes speed and direction of the sub-event.
  • events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
  • Event comparator 184 compares the event information to predefined event or sub- event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event.
  • event comparator 184 includes event definitions 186.
  • Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others.
  • sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching.
  • the definition for event 1 (187-1) is a double tap on a displayed object.
  • the double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase.
  • the definition for event 2 (187-2) is a dragging on a displayed object.
  • the dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end).
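The double-tap and drag definitions can be pictured as matching predefined sub-event sequences. The sketch below is an assumption-laden simplification: it ignores the predetermined phases (timing) and uses an assumed SubEvent enumeration rather than the event recognizer's actual representation.

```swift
import Foundation

// Illustrative sub-event names for the event definitions described above.
enum SubEvent: Equatable { case touchBegin, touchEnd, touchMovement, touchCancellation }

// Event 1: double tap — touch begin, touch end, touch begin, touch end.
func matchesDoubleTap(_ subEvents: [SubEvent]) -> Bool {
    subEvents == [.touchBegin, .touchEnd, .touchBegin, .touchEnd]
}

// Event 2: dragging — touch begin, one or more movements, then touch end.
func matchesDrag(_ subEvents: [SubEvent]) -> Bool {
    guard subEvents.first == .touchBegin, subEvents.last == .touchEnd,
          subEvents.count >= 3 else { return false }
    return subEvents.dropFirst().dropLast().allSatisfy { $0 == .touchMovement }
}

print(matchesDoubleTap([.touchBegin, .touchEnd, .touchBegin, .touchEnd]))      // true
print(matchesDrag([.touchBegin, .touchMovement, .touchMovement, .touchEnd]))   // true
```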
  • the event also includes information for one or more associated event handlers 190.
  • event definition 187 includes a definition of an event for a respective user-interface object.
  • event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated.
  • event comparator 184 selects an event handler associated with the sub event and the object triggering the hit test.
  • the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer’s event type.
  • When a respective event recognizer 180 determines that the series of sub-events do not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
  • a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers.
  • metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another.
  • metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub events are delivered to varying levels in the view or programmatic hierarchy.
  • a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized.
  • a respective event recognizer 180 delivers event information associated with the event to event handler 190.
  • Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view.
  • event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
  • event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
  • data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module.
  • object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object.
  • GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
  • event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178.
  • data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
  • event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens.
  • mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
  • FIG. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments.
  • the touch screen optionally displays one or more graphics within user interface (UI) 200.
  • a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure).
  • selection of one or more graphics occurs when the user breaks contact with the one or more graphics.
  • the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100.
  • inadvertent contact with a graphic does not select the graphic.
  • a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.
  • Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204.
  • menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100.
  • the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
  • device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (SIM) card slot 210, headset jack 212, and docking/charging external port 124.
  • Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process.
  • device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113.
  • Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
  • FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
  • Device 300 need not be portable.
  • device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child’s learning toy), a gaming system, or a control device (e.g., a home or industrial controller).
  • Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components.
  • Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
  • Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch screen display.
  • I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 167 described above with reference to FIG. 1 A), sensors 359 (e.g., optical, acceleration, proximity, touch- sensitive, and/or contact intensity sensors similar to contact intensity sensor(s) 165 described above with reference to FIG. 1 A).
  • Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (FIG. 1 A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100.
  • memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1 A) optionally does not store these modules.
  • Each of the above-identified elements in FIG. 3 is, optionally, stored in one or more of the previously mentioned memory devices.
  • Each of the above-identified modules corresponds to a set of instructions for performing a function described above.
  • the above-identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments.
  • memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above. Attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device 100.
  • FIG. 4A illustrates an exemplary user interface for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device 300.
  • user interface 400 includes the following elements, or a subset or superset thereof:
  • Tray 408 with icons for frequently used applications, such as: Icon 416 for telephone module 138, labeled “Phone,” which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
  • Icon 418 for e-mail client module 140, labeled “Mail,” which optionally includes an indicator 410 of the number of unread e-mails;
  • Icon 422 for video and music player module 152, also referred to as iPod (trademark of Apple Inc.) module 152, labeled “iPod;” and
  • Icons for other applications, such as: Icon 424 for IM module 141, labeled “Messages;”
  • Icon 430 for camera module 143, labeled “Camera;”
  • Icon 432 for online video module 155, labeled “Online Video;”
  • Icon 442 for workout support module 142, labeled “Workout Support;”
  • Icon 444 for notes module 153, labeled “Notes;”
  • Icon 446 for a settings application or module, labeled “Settings,” which provides access to settings for device 100 and its various applications 136.
  • icon 422 for video and music player module 152 is labeled “Music” or “Music Player.”
  • Other labels are, optionally, used for various application icons.
  • a label for a respective application icon includes a name of an application corresponding to the respective application icon.
  • a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon.
  • FIG. 4B illustrates an exemplary user interface on a device (e.g., device 300, FIG. 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, FIG. 3) that is separate from the display 450 (e.g., touch screen display 112).
  • Device 300 also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of device 300.
  • one or more contact intensity sensors e.g., one or more of sensors 359
  • tactile output generators 357 for generating tactile outputs for a user of device 300.
  • the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 4B.
  • the touch-sensitive surface has a primary axis (e.g., 452 in FIG. 4B) that corresponds to a primary axis (e.g., 453 in FIG. 4B) on the display (e.g., 450).
  • the device detects contacts (e.g., 460 and 462 in FIG. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display.
  • one or more of the finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures) are replaced with input from another input device (e.g., a mouse-based input or stylus input).
  • a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact).
  • a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact).
  • when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
  • FIG. 5A illustrates exemplary personal electronic device 500.
  • Device 500 includes body 502.
  • device 500 can include some or all of the features described with respect to devices 100 and 300 (e.g., FIGS. 1 A-4B).
  • device 500 has touch-sensitive display screen 504, hereafter touch screen 504.
  • touch screen 504 optionally includes one or more intensity sensors for detecting intensity of contacts (e.g., touches) being applied.
  • the one or more intensity sensors of touch screen 504 (or the touch-sensitive surface) can provide output data that represents the intensity of touches.
  • the user interface of device 500 can respond to touches based on their intensity, meaning that touches of different intensities can invoke different user interface operations on device 500.
  • Exemplary techniques for detecting and processing touch intensity are found, for example, in related international patent applications.
  • device 500 has one or more input mechanisms 506 and 508.
  • Input mechanisms 506 and 508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms.
  • device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, can permit attachment of device 500 with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device 500 to be worn by a user.
  • FIG. 5B depicts exemplary personal electronic device 500.
  • device 500 can include some or all of the components described with respect to FIGS. 1 A, 1B, and 3.
  • Device 500 has bus 512 that operatively couples I/O section 514 with one or more computer processors 516 and memory 518.
  • I/O section 514 can be connected to display 504, which can have touch-sensitive component 522 and, optionally, intensity sensor 524 (e.g., contact intensity sensor).
  • I/O section 514 can be connected with communication unit 530 for receiving application and operating system data, using Wi-Fi, Bluetooth, near field communication (NFC), cellular, and/or other wireless communication techniques.
  • Device 500 can include input mechanisms 506 and/or 508.
  • Input mechanism 506 is, optionally, a rotatable input device or a depressible and rotatable input device, for example.
  • Input mechanism 508 is, optionally, a button, in some examples.
  • Input mechanism 508 is, optionally, a microphone, in some examples.
  • Personal electronic device 500 optionally includes various sensors, such as GPS sensor 532, accelerometer 534, directional sensor 540 (e.g., compass), gyroscope 536, motion sensor 538, and/or a combination thereof, all of which can be operatively connected to I/O section 514.
  • Memory 518 of personal electronic device 500 can include one or more non- transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516, for example, can cause the computer processors to perform the techniques described below, including processes 700, 900, and 1100 (FIGS. 7A-7B, 9A-9B, and 11).
  • a computer-readable storage medium can be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device.
  • the storage medium is a transitory computer-readable storage medium.
  • the storage medium is a non-transitory computer-readable storage medium.
  • the non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
  • Personal electronic device 500 is not limited to the components and configuration of FIG. 5B, but can include other or additional components in multiple configurations.
  • the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500 (FIGS. 1A, 3, and 5A-5B).
  • an image (e.g., an icon), a button, and text (e.g., a hyperlink) each optionally constitute an affordance.
  • the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting.
  • the cursor acts as a“focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in FIG. 3 or touch-sensitive surface 451 in FIG. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input.
  • a detected contact on the touch screen acts as a“focus selector” so that when an input (e.g., a press input by the contact) is detected on the touch screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input.
  • focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface.
  • the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user’s intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact).
  • the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device).
  • the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples.
  • characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, or 2 seconds).
  • a characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like.
  • the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user.
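One way to picture a characteristic intensity computed from samples collected over a time window is the sketch below. The sample type, the window length, and the pluggable statistic (mean or maximum) are assumptions for this example.

```swift
import Foundation

// Illustrative characteristic-intensity calculation over a window of samples.
struct IntensitySample { let intensity: Double; let timestamp: TimeInterval }

func characteristicIntensity(of samples: [IntensitySample],
                             endingAt end: TimeInterval,
                             window: TimeInterval = 0.1,
                             statistic: ([Double]) -> Double) -> Double {
    // Keep only the samples inside the window, then reduce them with the chosen statistic.
    let recent = samples.filter { $0.timestamp >= end - window && $0.timestamp <= end }
                        .map { $0.intensity }
    return recent.isEmpty ? 0 : statistic(recent)
}

let mean: ([Double]) -> Double = { $0.reduce(0, +) / Double($0.count) }
let maximum: ([Double]) -> Double = { $0.max() ?? 0 }

let samples = [IntensitySample(intensity: 0.2, timestamp: 0.00),
               IntensitySample(intensity: 0.6, timestamp: 0.05),
               IntensitySample(intensity: 0.9, timestamp: 0.09)]
print(characteristicIntensity(of: samples, endingAt: 0.1, statistic: mean))     // ~0.567
print(characteristicIntensity(of: samples, endingAt: 0.1, statistic: maximum))  // 0.9
```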
  • the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold.
  • a contact with a characteristic intensity that does not exceed the first threshold results in a first operation
  • a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation
  • a contact with a characteristic intensity that exceeds the second threshold results in a third operation.
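The three-way outcome just described can be sketched as a simple threshold comparison. The threshold values and the Operation names below are assumptions used only for illustration.

```swift
import Foundation

// Illustrative thresholds; real values would be device- and software-dependent.
let firstIntensityThreshold = 0.25
let secondIntensityThreshold = 0.65

enum Operation { case first, second, third }

func operation(forCharacteristicIntensity intensity: Double) -> Operation {
    if intensity <= firstIntensityThreshold { return .first }    // does not exceed the first threshold
    if intensity <= secondIntensityThreshold { return .second }  // exceeds first, not second
    return .third                                                // exceeds the second threshold
}

print(operation(forCharacteristicIntensity: 0.10))  // first
print(operation(forCharacteristicIntensity: 0.40))  // second
print(operation(forCharacteristicIntensity: 0.90))  // third
```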
  • a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
  • a portion of a gesture is identified for purposes of determining a characteristic intensity.
  • a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases.
  • the characteristic intensity of the contact at the end location is, optionally, based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location).
  • a smoothing algorithm is, optionally, applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact.
  • the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm.
  • these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining the characteristic intensity.
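A minimal sketch of one of the smoothing algorithms named above (an unweighted sliding-average filter) applied to intensity samples before the characteristic intensity is determined; the window width and the sample values are illustrative assumptions.

import Foundation

// Unweighted sliding-average smoothing over a window of the given width.
// Narrow spikes or dips shorter than the window are attenuated.
func slidingAverage(_ values: [Double], window: Int) -> [Double] {
    guard window > 1, !values.isEmpty else { return values }
    return values.indices.map { i in
        let lo = max(0, i - window + 1)
        let slice = values[lo...i]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

let raw = [0.10, 0.12, 0.90, 0.14, 0.15, 0.16]   // a narrow spike at index 2
print(slidingAverage(raw, window: 3))            // the spike is flattened before the
                                                 // characteristic intensity is taken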
  • the intensity of a contact on the touch-sensitive surface is, optionally, characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds.
  • the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad.
  • the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad.
  • the device when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold.
  • intensity thresholds are consistent between different sets of user interface figures.
  • An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a “light press” input.
  • An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a “deep press” input.
  • An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface.
  • A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface.
  • the contact-detection intensity threshold is zero. In some embodiments, the contact-detection intensity threshold is greater than zero.
  • one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold.
  • the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a “down stroke” of the respective press input).
  • the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an “up stroke” of the respective press input).
  • the device employs intensity hysteresis to avoid accidental inputs sometimes termed “jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold).
  • the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an “up stroke” of the respective press input).
  • the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity threshold, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
  • the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, and/or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold.
  • the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold.
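The hysteresis behavior described above can be illustrated with a small state machine: a press is recognized on the rise to the press-input threshold and released only on the fall to the lower hysteresis threshold, which suppresses jitter near the boundary. This is a sketch under assumed threshold values, not the claimed implementation.

import Foundation

struct PressDetector {
    let pressThreshold: Double
    let hysteresisThreshold: Double    // e.g., 75%-90% of pressThreshold
    var isPressed = false

    // Returns true exactly when a new press ("down stroke") is recognized.
    mutating func update(intensity: Double) -> Bool {
        if !isPressed && intensity >= pressThreshold {
            isPressed = true
            return true
        }
        if isPressed && intensity <= hysteresisThreshold {
            isPressed = false          // "up stroke"; an operation could also fire here
        }
        return false
    }
}

var detector = PressDetector(pressThreshold: 0.6, hysteresisThreshold: 0.45)
for intensity in [0.2, 0.61, 0.58, 0.5, 0.44, 0.62] {
    if detector.update(intensity: intensity) { print("press at \(intensity)") }
}
// Presses are recognized at 0.61 and 0.62; the dip to 0.58/0.5 does not retrigger.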
  • multifunction device 100, device 300, or device 500.
  • FIGS. 6A-6T illustrate exemplary user interfaces for adjusting a simulated depth effect (e.g., a Bokeh effect), in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 7A- 7B.
  • FIG. 6A illustrates a front-view 600A and a rear-view 600B of an electronic device 600 (e.g., a smartphone).
  • Electronic device 600 includes a display 602 (e.g., integrated with a touch-sensitive surface), an input device 604 (e.g., a mechanical input button, a press-able input button), a front-facing sensor 606 (e.g., including one or more front-facing cameras), and a rear facing sensor 608 (e.g., including one or more rear-facing cameras).
  • electronic device 600 also includes one or more biometric sensors (e.g., a fingerprint sensor, a facial recognition sensor, an iris/retina scanner).
  • Electronic device 600 optionally also includes one or more depth camera sensors
  • the one or more depth camera sensors receive data from the environment to create a three- dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor).
  • the one or more depth camera sensors are optionally used to determine a depth map of different portions of an image captured by the imaging module.
  • one or more depth camera sensors are located on the front of the device so that the user’s image with depth information is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display and to capture selfies with depth map data.
  • the one or more depth camera sensors are located on the back of the device, or on the back and the front of the device. In some embodiments, the position(s) of the one or more depth camera sensors can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a depth camera sensor is used along with the touch screen display for both video conferencing and still and/or video image acquisition. In some embodiments, the one or more depth camera sensors are integrated with front-facing camera 606 and/or rear-facing camera 608.
  • a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor).
  • each depth pixel defines the position in the viewpoint’s z-axis where its corresponding two-dimensional pixel is located.
  • a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255).
  • the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor) in the “three dimensional” scene.
  • a depth map represents the distance between an object in a scene and the plane of the viewpoint.
  • the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face).
  • the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
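To make the 0-255 depth-map convention above concrete, the following sketch converts raw per-pixel distances from the viewpoint into depth values in which 255 is closest to the viewpoint and 0 is the most distant. The grid of distances and the linear normalization are illustrative assumptions.

import Foundation

func depthMap(fromDistances distances: [[Double]]) -> [[UInt8]] {
    let all = distances.flatMap { $0 }
    guard let nearest = all.min(), let farthest = all.max(), farthest > nearest else {
        return distances.map { $0.map { _ in UInt8(255) } }
    }
    return distances.map { row in
        row.map { d in
            let normalized = (farthest - d) / (farthest - nearest)   // 1 = closest
            return UInt8((normalized * 255).rounded())
        }
    }
}

let distancesInMeters: [[Double]] = [[0.8, 0.9, 3.5],
                                     [0.8, 1.0, 4.0]]
print(depthMap(fromDistances: distancesInMeters))
// The subject at roughly 0.8-1.0 m maps near 255; the background at 3.5-4.0 m maps near 0.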
  • electronic device 600 displays, on display 602, a user interface 610 (e.g., a lockscreen user interface) that includes an affordance 612 for launching an image capture application (e.g., a camera application, an image/photo capturing and editing application). While displaying user interface 610, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 601 of affordance 612 (e.g., a tap gesture on affordance 612). In FIG. 6B, in response to detecting activation 601, electronic device 600 displays, on display 602, a user interface 614 of the image capture application. In this example, the image capture application is in photo mode. While displaying user interface 614 of the image capture application, electronic device 600 receives, via rear-facing camera 608, image data corresponding to the environment within the field-of-view of rear-facing camera 608.
  • electronic device 600 receives, via front-facing camera 606, image data corresponding to the environment within the field-of-view of front-facing camera 606.
  • Electronic device 600 displays, in an image display region 616 of user interface 614 of the image capture application, an image representation 618 of the image data received via rear-facing camera 608.
  • image representation 618 includes a subject 620 (e.g., a view of a person that includes the face of the person and at least a portion of the upper body of the person).
  • image representation 618 also includes a light-emitting object 622A (corresponding to a real light-emitting object in the real environment), light-emitting objects 622B (corresponding to real light-emitting objects in the real environment), and light-emitting objects 622C (corresponding to real light-emitting objects in the real environment).
  • image representation 618 also includes a non-light-emitting object 624.
  • User interface 614 of the image capture application also includes a first menu region 628A and a second menu region 628B.
  • First menu region 628A includes a plurality of affordances associated with adjusting image effects and/or properties.
  • Second menu region 628B includes a plurality of image capture mode options (e.g., photo mode, video mode, portrait mode, square mode, slow-motion mode).
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 603 of a portrait mode affordance 626 corresponding to portrait mode.
  • electronic device 600 in response to detecting activation 603 of portrait mode affordance 626, changes the current image capture mode of the image capture application from photo mode to portrait mode.
  • electronic device 600 displays, in first menu region 628A of user interface 614, a depth effect affordance 630 (e.g., for adjusting a depth-of-field of image representation 618 by adjusting a simulated f-number, also known as the f-stop, f-ratio, or focal ratio).
  • electronic device 600 applies a simulated depth effect (e.g., a Bokeh effect, a depth-of-field effect, with a default 4.5 f-number) to image representation 618 displayed in image display region 616.
  • the simulated depth effect is applied to the background of image representation 618, with subject 620 as the focal point.
  • the simulated depth effect is applied throughout image representation 618 based on a focal point within subject 620 (e.g., the center region of the face of subject 620, such as the nose of subject 620).
  • depth-of-field properties of an object within image representation 618 are adjusted based on one or more characteristics of the particular object (e.g., the type of object, such as whether the object corresponds to a light-emitting object or to a non-light-emitting object, the shape of the object, the distance of the object from the focal point).
  • the depth-of-field properties of light-emitting objects 622A, 622B, and 622C in image representation 618 are adjusted more drastically relative to non-light-emitting object 624 in image representation 618 (e.g., such that the light-emitting objects look more blurred, larger, brighter, more saturated, and/or with a more distorted shape than non-light-emitting objects). Adjustments to the depth-of-field properties of an object based on one or more characteristics of the object are described in greater detail below with reference to the user interfaces of FIGS. 8A-8R.
  • while in portrait mode, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 605 of depth effect affordance 630 (e.g., a tap gesture on depth effect affordance 630). In some embodiments, electronic device 600 changes a visual characteristic of depth effect affordance 630 (e.g., changes a color of the affordance) upon detecting activation of the affordance.
  • electronic device 600 while in portrait mode, detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 607 (e.g., a vertical swipe gesture, a swipe-up gesture) within image display region 616.
  • electronic device 600 shifts image display region 616 upwards within user interface 614 (such that first menu region 628A becomes vertically narrower and second menu region 628B becomes vertically wider) to display, in second menu region 628B, a depth adjustment slider 632.
  • Depth adjustment slider 632 includes a plurality of tickmarks 634 corresponding to f-numbers and a needle 636 indicating the currently-selected tickmark (and thus the currently-selected f-number). Depth adjustment slider 632 also includes an f-number indicator 638 (e.g., located over or adjacent to needle 636) indicating the value of the currently-selected f-number.
  • the default f-number is 4.5.
  • In addition to displaying the current f-number in f-number indicator 638, electronic device 600 also displays the current f-number in depth effect affordance 630.
  • while displaying depth adjustment slider 632, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 609 (e.g., a horizontal swipe gesture, a swipe-right gesture) on depth adjustment slider 632 (e.g., over tickmarks 634).
  • tickmarks 634 are (horizontally) shifted in response to swipe gesture 609 and needle 636 remains affixed.
  • needle 636 is shifted over affixed tickmarks 634 in response to a swipe gesture on depth adjustment slider 632.
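One way to model the slider behavior described above (a fixed needle over tick marks that slide under it, with the tick mark nearest the needle becoming the selected f-number) is sketched below. The tick spacing, the set of f-numbers, and the type names are assumptions for illustration only, not values from the disclosure.

import Foundation

struct DepthSlider {
    let fNumbers: [Double] = [1.4, 1.6, 1.8, 2.0, 2.8, 3.9, 4.5, 5.6, 8.0, 11.0, 16.0]
    let tickSpacing: Double = 20.0          // points between adjacent tick marks
    var offset: Double = 0                  // horizontal offset of the tick marks

    var selectedIndex: Int {
        let raw = Int((-offset / tickSpacing).rounded()) + defaultIndex
        return min(max(raw, 0), fNumbers.count - 1)
    }
    var selectedFNumber: Double { fNumbers[selectedIndex] }
    private var defaultIndex: Int { fNumbers.firstIndex(of: 4.5) ?? 0 }

    mutating func applySwipe(translationX: Double) { offset += translationX }
}

var slider = DepthSlider()
print(slider.selectedFNumber)        // 4.5 (the default f-number)
slider.applySwipe(translationX: 22)  // a swipe-right slides the tick marks under the needle
print(slider.selectedFNumber)        // a lower f-number (3.9 here) becomes selected

A swipe in the opposite direction increases the offset in the other direction and selects a higher f-number, mirroring the swipe-left adjustment described later for the stored images application.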
  • electronic device 600 adjusts, based on the focal point of image representation 618 (e.g., the nose of subject 620), the depth-of-field properties of the objects (e.g., light-emitting objects 622A, 622B, and 622C, and non-light-emitting object 624) within image representation 618.
  • the current f-number (3.9) is decreased from the previous (default) f-number (4.5) as a result of swipe gesture 609.
  • Light-emitting objects 622A, 622B, and 622C are more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6H (with a 3.9 f-number) than in FIG. 6G (with a 4.5 f-number) and, likewise, non-light-emitting object 624 is more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6H than in FIG. 6G.
  • the degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion of the objects from the previous f-number (4.5) to the lower f-number (3.9) is more drastic for light-emitting objects as compared to non-light-emitting objects.
  • each object is further distorted based on each object’s distance from the focal point (e.g., the nose of subject 620) of image representation 618 (e.g., if image representation 618 is viewed as an x, y-plane with the focal point being the center of the plane, the distance is measured as the straight line distance from the center of an object to the center of the plane).
  • the degree of shape distortion of object 622B-1 is more drastic (e.g., such that the object is less circular and more oval / stretched) than the degree of shape distortion of object 622B-2.
  • the degree of shape distortion of object 622C-1 is more drastic (e.g., such that the object is less circular and more oval / stretched) than the degree of shape distortion of object 622C-2.
  • the changes in the depth-of-field properties of objects within the image representation are described in greater detail below with reference to FIGS. 8A-8R.
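A minimal sketch of how a per-object adjustment might scale with the selected f-number, with the object’s distance from the focal point, and with whether the object is light-emitting, consistent with the behavior described above; the scaling constants and names are illustrative assumptions rather than the disclosed algorithm.

import Foundation

struct SceneObject {
    let isLightEmitting: Bool
    let distanceFromFocalPoint: Double   // measured in the image plane, focal point at 0
}

func blurAmount(for object: SceneObject, fNumber: Double, maxFNumber: Double = 16.0) -> Double {
    // Lower f-numbers mean a shallower simulated depth of field and more blur.
    let aperture = max(0, 1 - fNumber / maxFNumber)
    let emissionFactor = object.isLightEmitting ? 1.5 : 1.0   // light emitters change more drastically
    let distanceFactor = 1 + 0.5 * object.distanceFromFocalPoint
    return aperture * emissionFactor * distanceFactor
}

let streetLamp = SceneObject(isLightEmitting: true, distanceFromFocalPoint: 0.8)
let mailbox = SceneObject(isLightEmitting: false, distanceFromFocalPoint: 0.8)
print(blurAmount(for: streetLamp, fNumber: 1.6))   // larger adjustment for the light emitter
print(blurAmount(for: mailbox, fNumber: 1.6))      // smaller adjustment at the same distance
print(blurAmount(for: streetLamp, fNumber: 4.5))   // less blur at a higher f-number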
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602), a swipe gesture 611 (e.g., a continuation of swipe gesture 609) on depth adjustment slider 632.
  • In response to detecting swipe gesture 611, electronic device 600 further adjusts, based on the focal point of image representation 618 (e.g., the nose of subject 620), the depth-of-field properties of the objects (e.g., light-emitting objects 622A, 622B, and 622C, and non-light-emitting object 624) within image representation 618.
  • the current f-number (1.6) is further decreased from the previous f-number (3.9) as a result of swipe gesture 611.
  • Light-emitting objects 622A, 622B, and 622C are more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6I (with a 1.6 f-number) than in FIG. 6H (with a 3.9 f-number) and, likewise, non-light-emitting object 624 is more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6I than in FIG. 6H.
  • the degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion of the objects from the previous f-number (3.9) to the lower f-number (1.6) is more drastic for light-emitting objects as compared to non-light-emitting objects.
  • In FIG. 6J, while displaying, in image display region 616, image representation 618 corresponding to image data detected via rear-facing camera 608, and while the simulated depth-of-field is set to a 1.6 f-number (as indicated by f-number indicator 638), as previously set in FIG. 6I, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 613 of image capture affordance 640 (e.g., a tap gesture on image capture affordance 640).
  • electronic device 600 stores (e.g., in a local memory of the device and/or a remote server accessible by the device) image data corresponding to image representation 618 with the simulated depth effect (with a 1.6 f-number) applied.
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 615 of a stored images affordance 642 (e.g., a tap gesture on stored images affordance 642).
  • In response to detecting activation 615 of stored images affordance 642, electronic device 600 displays, on display 602, a user interface 644 of a stored images application.
  • User interface 644 includes an image display region 646 for displaying a stored image.
  • electronic device 600 displays, in image display region 646, a stored image representation 648 corresponding to image representation 618 captured in FIG. 6J.
  • stored image representation 648 includes a subject 650 (corresponding to subject 620), a light-emitting object 652A (corresponding to light-emitting object 622A), light-emitting objects 652B (corresponding to light-emitting objects 622B), light-emitting objects 652C (corresponding to light-emitting objects 622C), and non-light-emitting object 654 (corresponding to non-light-emitting object 624).
  • stored image representation 648 is adjusted with a 1.6 f-number simulated depth-of-field setting.
  • while displaying stored image representation 648, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 617 of an edit affordance 656 of user interface 644 (e.g., a tap gesture on edit affordance 656).
  • electronic device 600 displays (e.g., in a menu region of user interface 644 below image display region 646 showing the stored image representation) depth adjustment slider 632 (set to a 1.6 f-number, as indicated by f-number indicator 638).
  • image display region 646 shifts upwards within user interface 644 to display depth adjustment slider 632 (e.g., similar to image display region 616 shifting upwards, as described with reference to FIG. 6F).
  • Electronic device 600 also displays (e.g., in a region of user interface 644 above image display region 646 showing the stored image representation), a depth effect indicator 658 indicating that the currently-displayed stored image representation (stored image representation 648) is adjusted with a simulated depth effect.
  • while displaying depth adjustment slider 632, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 619 (e.g., a horizontal swipe gesture, a swipe-left gesture) on depth adjustment slider 632 (e.g., over tickmarks 634).
  • tickmarks 634 are (horizontally) shifted in response to swipe gesture 619 and needle 636 remains affixed.
  • needle 636 is shifted over affixed tickmarks 634 in response to a swipe gesture on depth adjustment slider 632.
  • electronic device 600 adjusts, based on the focal point of stored image representation 648 (e.g., the nose of subject 650), the depth-of-field properties of the objects (e.g., light-emitting objects 652A, 652B, and 652C, and non-light-emitting object 654) within stored image representation 648.
  • As shown by f-number indicator 638, the current f-number (4.9) is increased from the previous (stored) f-number (1.6) as a result of swipe gesture 619. As such, light-emitting objects 652A, 652B, and 652C are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and more “sharp”) in FIG. 6O (with a 4.9 f-number) than in FIG. 6N (with a 1.6 f-number) and, likewise, non-light-emitting object 654 is less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and instead sharper) in FIG. 6O than in FIG. 6N.
  • the degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion (and an increase in sharpness) of the objects from the previous f-number (1.6) to the higher f-number (4.9) is more drastic for light-emitting objects as compared to non-light-emitting objects.
  • the changes in the depth-of-field properties of objects within the image representation are described in greater detail below with reference to FIGS. 8A-8R.
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602), a swipe gesture 621 (e.g., a continuation of swipe gesture 619) on depth adjustment slider 632.
  • In response to detecting swipe gesture 621, electronic device 600 further adjusts, based on the focal point of stored image representation 648 (e.g., the nose of subject 650), the depth-of-field properties of the objects (e.g., light-emitting objects 652A, 652B, and 652C, and non-light-emitting object 654) within stored image representation 648.
  • As shown by f-number indicator 638, the current f-number (8.7) is increased from the previous f-number (4.9) as a result of swipe gesture 621. As such, light-emitting objects 652A, 652B, and 652C are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and sharper, and thus closer to their real shapes without any image distortion) in FIG. 6P (with an 8.7 f-number) than in FIG. 6O (with a 4.9 f-number) and, likewise, non-light-emitting object 654 is less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and sharper, and thus closer to its real shape without any image distortion) in FIG. 6P than in FIG. 6O.
  • the degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion (and an increase in sharpness) of the objects from the previous f-number (4.9) to the higher f-number (8.7) is more drastic for light-emitting objects as compared to non-light-emitting objects.
  • the changes in the depth-of- field properties of objects within the image representation are described in greater detail below with reference to FIGS. 8A-8R.
  • FIG. 6Q illustrates electronic device 600 displaying, on display 602, a settings user interface 660 of the image capture application.
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 623 of a preserve settings affordance 662 of settings user interface 660 (e.g., a tap gesture on preserve settings affordance 662).
  • In response to detecting activation 623 of preserve settings affordance 662, electronic device 600 displays, on display 602, a preserve settings user interface 664 associated with the image capture application and the stored images application.
  • Preserve settings user interface 664 includes a creative controls option 666 (e.g., with a corresponding toggle 668) for activating or de-activating creative controls.
  • When creative controls is active, electronic device 600 preserves previously-set image effects settings (e.g., including the simulated depth effect setting) when the image capture application and/or the stored images application are closed and re-launched (such that the previously-set image effects setting, such as the previously-set f-number, is automatically re-loaded and applied to the displayed image representation).
  • When creative controls is inactive, electronic device 600 does not preserve the previously-set image effects settings, and image effects settings (including the depth effect setting) are restored to default values when the image capture application and/or stored images application are re-launched.
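The preserve-settings behavior above amounts to persisting the last-used f-number only while creative controls is active, and falling back to the default value otherwise. A minimal sketch using UserDefaults is shown below; the keys and the struct name are illustrative assumptions.

import Foundation

struct DepthEffectSettings {
    static let defaultFNumber = 4.5
    let defaults = UserDefaults.standard

    var preserveCreativeControls: Bool {
        get { defaults.bool(forKey: "preserveCreativeControls") }
        set { defaults.set(newValue, forKey: "preserveCreativeControls") }
    }

    func save(fNumber: Double) {
        defaults.set(fNumber, forKey: "lastFNumber")
    }

    // Called when the image capture or stored images application is re-launched.
    func fNumberOnLaunch() -> Double {
        guard preserveCreativeControls,
              defaults.object(forKey: "lastFNumber") != nil else {
            return Self.defaultFNumber
        }
        return defaults.double(forKey: "lastFNumber")
    }
}

var settings = DepthEffectSettings()
settings.preserveCreativeControls = true
settings.save(fNumber: 1.6)
print(settings.fNumberOnLaunch())   // 1.6 while creative controls is active
settings.preserveCreativeControls = false
print(settings.fNumberOnLaunch())   // 4.5 (the default) when it is inactive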
  • FIG. 6S illustrates an electronic device 670 (e.g., a laptop computer) with a display 672 and a front-facing camera 674.
  • electronic device 670 also includes a rear-facing camera.
  • electronic device 670 displays, on display 672, a user interface 676 of an image application (e.g., corresponding to the image capture application or the stored images application), where an image representation 678 corresponding to image representation 618 is displayed in user interface 676.
  • Electronic device 670 also displays, within user interface 676 (e.g., below image representation 678), a depth adjustment slider 680 similar to depth adjustment slider 632.
  • Depth adjustment slider 680 includes a plurality of tickmarks 682 corresponding to f-numbers and a needle 684 indicating the currently-selected tickmark (and thus the currently-selected f-number).
  • Depth adjustment slider 680 also includes an f-number indicator 686 (e.g., located adjacent to the slider) indicating the value of the currently-selected f-number.
  • a cursor 688 can be used to navigate needle 684 over tickmarks 682, thereby changing the f-number to adjust the simulated depth effect of image representation 678.
  • FIG. 6T illustrates an electronic device 690 (e.g., a tablet computer, a laptop computer with a touch-sensitive display) with a display 692.
  • electronic device 690 also includes a front-facing camera and/or a rear-facing camera.
  • electronic device 690 displays, on display 692, a user interface 694 of an image application (e.g., corresponding to the image capture application or the stored images application), where an image representation 696 corresponding to image representation 618 is displayed in user interface 694.
  • Electronic device 690 also displays, within user interface 694 (e.g., adjacent to image representation 696), a depth adjustment slider 698 (e.g., in a vertical direction) similar to depth adjustment slider 632.
  • Depth adjustment slider 698 includes a plurality of tickmarks 699 corresponding to f-numbers and a needle 697 indicating the currently-selected tickmark (and thus the currently-selected f-number).
  • Depth adjustment slider 698 also includes an f-number indicator 695 (e.g., located below or adjacent to the slider) indicating the value of the currently-selected f-number.
  • depth adjustment slider 698 can be adjusted via vertical swipe gestures such that tickmarks 699 are moved relative to an affixed needle 697. In some examples, depth adjustment slider 698 can be adjusted via vertical swipe gestures such that needle 697 is moved relative to affixed tickmarks 699.
  • electronic device 690 also displays (e.g., in a region of user interface 694 adjacent to image representation 696, in a region of user interface 694 adjacent to image representation 696 and opposite from depth adjustment slider 698), a plurality of lighting settings 693 corresponding to various lighting / light filtering options that can be applied to image representation 696, and can be changed via vertical swipe gestures.
  • depth adjustment slider 698 and lighting settings 693 can concurrently be adjusted and the concurrent adjustments can simultaneously be reflected in image representation 696.
  • FIGS. 7A-7B are a flow diagram illustrating a method for managing user interfaces for adjusting a simulated depth effect, in accordance with some embodiments.
  • Method 700 is performed at a device (e.g., 100, 300, 500, 600) with a display and one or more input devices (e.g., a touch-sensitive surface of the display, a mechanical input device).
  • method 700 provides an intuitive way for managing user interfaces for simulated depth effects.
  • the method reduces the cognitive burden on a user for managing and navigating user interfaces for simulated depth effects, thereby creating a more efficient human-machine interface.
  • enabling a user to navigate user interfaces faster and more efficiently by providing easy management of user interfaces for simulating depth effects conserves power and increases the time between battery charges.
  • the electronic device displays (702), on the display (e.g., 602), a representation of image data (e.g., 618, a displayed image corresponding to the image data, a portrait image of a person/subject).
  • the representation of image data is a live-feed image currently being captured by one or more cameras of the electronic device (e.g., 600).
  • the representation of image data e.g., 648 is a previously-taken image stored in and retrieved from memory (of the electronic device or an external server).
  • the depth data of the image can be adjusted / manipulated to apply a depth effect to the representation of image data.
  • the image data includes at least two components: an RGB component that encodes the visual characteristics of a captured image, and depth data that encodes information about the relative spacing relationship of elements within the captured image (e.g., the depth data encodes that a user is in the foreground, and background elements, such as a tree positioned behind the user, are in the background).
  • the depth data is a depth map.
  • a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera).
  • each depth pixel defines the position in the viewpoint’s z-axis where its corresponding two-dimensional pixel is located.
  • a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255).
  • the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., camera) in the “three dimensional” scene.
  • a depth map represents the distance between an object in a scene and the plane of the viewpoint.
  • the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face).
  • the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
  • the depth data has a second depth component (e.g., a second portion of depth data that encodes a spatial position of the background in the camera display region; a plurality of depth pixels that form a discrete portion of the depth map, such as a background), separate from the first depth component, the second depth aspect including the representation of the background in the camera display region.
  • the first depth aspect and second depth aspect are used to determine a spatial relationship between the subject in the camera display region and the background in the camera display region. This spatial relationship can be used to distinguish the subject from the background. This distinction can be exploited to, for example, apply different visual effects (e.g., visual effects having a depth component) to the subject and background.
  • all areas of the image data that do not correspond to the first depth component are adjusted based on different degrees of blurriness/sharpness, size, brightness, saturation, and/or shape-distortion in order to simulate a depth effect, such as a Bokeh effect.
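As a rough illustration of the bullet above, the following sketch leaves pixels within the subject’s depth component sharp and assigns a blur radius to the remaining pixels that grows as the selected f-number decreases and as the depth values indicate greater distance. The threshold, scaling constants, and grid are illustrative assumptions, not the disclosed processing.

import Foundation

func blurRadii(depthMap: [[UInt8]],
               subjectDepthThreshold: UInt8,   // values at or above the threshold belong to the subject
               fNumber: Double) -> [[Double]] {
    let strength = max(0, 1.8 - fNumber / 4.5)   // more blur at lower f-numbers
    return depthMap.map { row in
        row.map { depth in
            depth >= subjectDepthThreshold
                ? 0                                             // subject pixels stay sharp
                : strength * Double(255 - depth) / 255 * 10     // farther pixels blur more
        }
    }
}

let depth: [[UInt8]] = [[240, 245, 60],
                        [250, 238, 20]]
print(blurRadii(depthMap: depth, subjectDepthThreshold: 200, fNumber: 1.6))
// Subject pixels (depth >= 200) get radius 0; background pixels get nonzero radii
// that would feed a blur pass to simulate the Bokeh effect.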
  • displaying, on the display, the representation of image data further comprises, in accordance with a determination that the representation of image data corresponds to stored image data (e.g., that of a stored/saved image or a previously-captured image), displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect.
  • the representation of image data (e.g., 648) corresponds to stored image data when a camera/image application for displaying representations of image data is in an edit mode (e.g., a mode for editing existing / previously-captured images or photos).
  • In some embodiments, the electronic device (e.g., 600) displays the adjustable slider (e.g., 632) upon (e.g., concurrently with) displaying the representation of image data (e.g., within a camera/image application).
  • the adjustable slider (e.g., 632) is displayed with the representation of image data without the first input.
  • whether the adjustable slider is automatically displayed upon displaying the representation of image data depends on the type of the electronic device (e.g., whether the electronic device is a smartphone, a smartwatch, a laptop computer, or a desktop computer).
  • the electronic device detects (706), via the one or more input devices, a first input (e.g., 605, 607, an activation of an affordance displayed on the display, a gesture, such as a slide-up gesture on the image, detected via the touch-sensitive surface of the display).
  • while displaying, on the display (e.g., 602), the representation of image data (e.g., 618, 648), the electronic device (e.g., 600) displays (704), on the display (e.g., in an affordances region (e.g., 628A) corresponding to different types of effects that can be applied to the representation of image data), a simulated depth effect adjustment affordance (e.g., 630), wherein the first input is an activation (e.g., 605, a tap gesture) of the simulated depth effect adjustment affordance.
  • the simulated depth effect adjustment affordance includes a symbol indicating that the affordance relates to depth effects, such as an f-number symbol.
  • Displaying the simulated depth effect adjustment affordance while displaying the representation of image data and including a symbol indicating that the affordance relates to depth effects improves visual feedback by enabling a user to quickly and easily recognize that adjustments to depth-of-field properties can be made to the representation of image data.
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the simulated depth effect is “simulated” in that the effect is (artificially) generated based on a manipulation of the underlying image data to create and apply the effect to the corresponding representation of image data (e.g., 618, 648) (e.g., as opposed to being a “natural” effect that is based on underlying data as originally captured via one or more cameras).
  • prior to detecting the first input (e.g., 605, 607), the simulated depth effect adjustment affordance is displayed with a first visual characteristic (e.g., a particular color indicating that the affordance is not currently selected, such as a default color or a white color).
  • the simulated depth effect adjustment affordance is displayed with a second visual characteristic (e.g., a particular color indicating that the affordance is currently selected, such as a highlight color or a yellow color) different from the first visual characteristic.
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • displaying the simulated depth effect adjustment affordance comprises, in accordance with a determination that the currently-selected depth effect value corresponds to a default depth effect value (e.g., a default f-number value determined/set by the electronic device), forgoing displaying, in the simulated depth effect adjustment affordance, the currently-selected depth effect value.
  • the default depth effect value is a 4.5 f-number.
  • displaying the simulated depth effect adjustment affordance comprises, in accordance with a determination that the currently-selected depth effect value corresponds to a non-default depth effect value (e.g., any f-number value within a range of available f-number values that does not correspond to the default f-number value), displaying, in the simulated depth effect adjustment affordance (e.g., adjacent to an f-number symbol), the currently-selected depth effect value.
  • the electronic device prior to detecting the first input (e.g., 605, 607), displays, on the display (e.g., 602), one or more mode selector affordances (e.g., a region with one or more affordances for changing a camera-related operation mode of the electronic device, such as a camera mode selector affordance), wherein displaying the adjustable slider (e.g., 632) comprises replacing display of the one or more mode selector affordances with the adjustable slider.
  • Replacing display of the one or more mode selector affordances with the adjustable slider improves visual feedback by enabling the user to quickly and easily recognize that the device is now in a depth effect adjustment mode.
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the electronic device prior to detecting the first input, displays, on the display (e.g., 602), a zoom control element (e.g., a region with one or more affordances for changing a zoom level of the camera), wherein displaying the adjustable slider (e.g., 632) comprises replacing display of the zoom control element.
  • the first input is a swipe gesture in a first direction in a first portion of the user interface (e.g., 614, a swipe-up gesture on the touch-sensitive surface of the display).
  • the swipe gesture is a swipe-up gesture on a region of the display corresponding to the representation of image data.
  • the swipe gesture is a swipe-up gesture on a region of the display corresponding to a bottom edge of the representation of image data (e.g., 618).
  • the adjustable slider is not displayed and, optionally, a different operation is performed (e.g., switching camera modes or performing a zoom operation).
  • Providing additional control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • In response to detecting the first input (e.g., 605, 607), the electronic device (e.g., 600) displays, on the display, an adjustable slider (e.g., 632) for adjusting the simulated depth effect applied to the representation of image data (e.g., 618, 648).
  • the adjustable slider includes (710) a plurality of option indicators (e.g., 634, represented as tick marks, gauge marks) corresponding to a plurality of the selectable values for the simulated depth effect.
  • the plurality of option indicators are slidable (e.g., horizontally or vertically) within the adjustable slider.
  • the adjustable slider also includes (712) a selection indicator (e.g., 636, represented as a needle) indicating that the first value is a currently-selected simulated depth effect value.
  • the position of the selection indicator (e.g., 636, needle) is fixed and the plurality of option indicators (e.g., 634, tickmarks) are adjustable within the slider (e.g., 632) such that the plurality of option indicators are moved relative to the selection indicator to adjust the currently-selected depth-of-field value.
  • only a subset of all of the available option indicators are concurrently displayed within the slider; option indicators that are not displayed are displayed within the slider in response to an adjustment of the slider (e.g., a user input moving the option indicators in a horizontal or vertical direction).
  • the plurality of option indicators (e.g., 634) are fixed and the position of the selection indicator (e.g., 636) is adjustable within the slider such that the selection indicator is moved relative to the plurality of option indicators to adjust the currently-selected depth-of-field value.
  • the electronic device in response to detecting the first input (e.g., 605, 607), slides (714) (e.g., vertically, sliding up by a predetermined amount) the representation of image data (e.g., 618) on the display (e.g., 602) to display (e.g., reveal) the adjustable slider (e.g., 632) (e.g., sliding the representation of the image data in a direction corresponding to a direction of a swipe input).
  • the electronic device While displaying the adjustable slider (e.g., 632), the electronic device (e.g., 600) detects (716) via the one or more input devices, an input directed to the adjustable slider.
  • the input (e.g., 609, 611, 619, 621) directed to the adjustable slider (e.g., 632) is a (horizontal) swipe gesture (e.g., a swipe-left gesture or a swipe-right gesture) on the adjustable slider, wherein the swipe gesture includes a user movement (e.g., using a finger) in a first direction having at least a first velocity (greater than a threshold velocity) at an end of the swipe gesture (e.g., a velocity of movement of a contact performing the swipe gesture at or near when the contact is lifted-off from the touch-sensitive surface).
  • In response to detecting (718) the input (e.g., 609, 611, 619, 621) directed to the adjustable slider (e.g., 632) (e.g., a tap or swipe at a location corresponding to the adjustable slider), the electronic device (e.g., 600) moves (720) the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value.
  • In response to detecting (718) the input directed to the adjustable slider (e.g., a tap or swipe at a location corresponding to the adjustable slider), the electronic device (e.g., 600) changes (722) an appearance of the representation of image data (e.g., 618, 648) in accordance with the simulated depth effect as modified by the second value.
  • Changing an appearance of the representation of image data in response to detecting the input directed to the adjustable slider improves visual feedback by enabling the user to quickly and easily view changes to the representation of image data that is caused by the user’s input.
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • moving the adjustable slider comprises moving the plurality of option indicators (e.g., 634, represented as tick marks) while the selection indicator (e.g., 636, represented as a needle) remains fixed.
  • moving the adjustable slider comprises sliding the plurality of tick marks corresponding to f-values while the needle stays fixed in the same location within the slider.
  • moving the adjustable slider comprises moving the selection indicator (e.g., represented as a needle) while the plurality of option indicators remain fixed (e.g., represented as tick marks).
  • moving the adjustable slider comprises sliding the needle back and forth over the plurality of tick marks corresponding to f-values while the tick marks stay fixed in the same location within the slider.
  • In some embodiments, while moving the adjustable slider (e.g., 632) in response to the input directed to the adjustable slider, the electronic device (e.g., 600) generates a first type of output (e.g., tactile output, audio output).
  • the electronic device generates a discrete output (e.g., a discrete tactile output, a discrete audio output) each time the selection indicator aligns with or passes an option indicator of the plurality of option indicators.
  • Generating a first type of output (e.g., tactile output, audio output) while moving the adjustable slider provides improved feedback to the user about the adjustment being made. Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the first type of output includes (726) audio output (e.g., generated via one or more speakers of the electronic device and/or generated via one or more tactile output generators of the electronic device).
  • the first type of output does not include (728) audio output (e.g., generated via one or more speakers of the electronic device and/or generated via one or more tactile output generators of the electronic device).
  • the representation of image data corresponds to stored image data when a camera/image application for displaying representations of image data is in an edit mode (e.g., a mode for editing existing / previously- captured images or photos).
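A sketch of the discrete feedback described above: an event fires each time the needle aligns with or passes a tick mark, and audio is included in addition to the tactile tap only under an assumed condition (here, when a stored image is being edited). The disclosure does not specify which mode includes audio, so that choice, the type names, and the stand-in print statements used in place of haptic and audio calls are all assumptions for illustration.

import Foundation

struct SliderFeedback {
    var isEditingStoredImage: Bool
    var lastTickIndex: Int? = nil

    mutating func needleMoved(toTickIndex index: Int) {
        guard index != lastTickIndex else { return }   // fire once per tick crossing
        lastTickIndex = index
        playTactileTap()
        if isEditingStoredImage {                      // assumed condition for including audio
            playTickSound()
        }
    }

    private func playTactileTap() { print("tactile tap") }   // stand-in for a haptic engine call
    private func playTickSound() { print("tick sound") }     // stand-in for an audio call
}

var feedback = SliderFeedback(isEditingStoredImage: true)
for tick in [6, 6, 5, 4, 4, 3] { feedback.needleMoved(toTickIndex: tick) }
// Fires feedback at ticks 6, 5, 4, and 3; repeated indices do not retrigger.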
  • method 900 optionally includes one or more of the characteristics of the various methods described above with reference to method 700.
  • the simulated depth effect applied to an image representation, as described in method 900 can be adjusted using the depth adjustment slider described in method 700.
  • method 1100 optionally includes one or more of the characteristics of the various methods described above with reference to method 700.
  • the notification concerning detected interference, as described in method 1100, can be associated with detected magnetic interference that can impede one or more depth sensors used for simulating depth effects. For brevity, these details are not repeated below.
  • FIGS. 8A-8R illustrate exemplary user interfaces for displaying adjustments to a simulated depth effect (e.g., a Bokeh effect), in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 9A-9B.
  • FIG. 8A illustrates electronic device 600 as described above with reference to FIGS. 6A-6T.
  • electronic device 600 displays, on display 602, a user interface 804 of the image capture application, where the image capture application is in portrait mode. While in portrait mode, user interface 804 includes (e.g., above or adjacent to an image display region 806) a depth effect affordance 810 (e.g., corresponding to depth effect affordance 630).
  • Electronic device 600 also displays, in image display region 806, an image representation 808 of image data captured via rear-facing camera 608.
  • image representation 808 does not include a subject (e.g., a person), as a subject is not within the field-of-view of rear-facing camera 608.
  • electronic device 600 displays, in image representation 808, subject markers 812 indicating that a subject needs to be placed within the general region of image representation 808 occupied by the markers to properly enable portrait mode. Because a subject is not currently detected, electronic device 600 displays (e.g., in a top portion of image display region 806) a message 814 requesting that a subject be placed in the environment corresponding to the region of image representation 808 occupied by subject markers 812.
  • a real subject in the real environment is detected within the field-of-view of rear-facing camera 608.
  • electronic device 600 displays, in image representation 808, a subject 816 corresponding to the real subject detected within the field-of-view of rear-facing camera 608.
  • electronic device 600 provides, via subject markers 812 (e.g., by the markers “locking on” to the subject, by the markers changing a visual characteristic, such as changing to a different color), an indication that the subject is within the general region of image representation 808 occupied by subject markers 812 to properly enable portrait mode.
  • In some embodiments, if a subject is detected but is too far from electronic device 600 (e.g., more than a predefined distance away from the device) to fully enable portrait mode, electronic device 600 displays a notification indicating that the subject be placed closer to the device. In some embodiments, if a subject is detected but is too close to electronic device 600 (e.g., less than a predefined distance away from the device, such as closer than 1 foot from the device) to fully enable portrait mode, electronic device 600 displays a notification indicating that the subject be placed farther away from the device.
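  • As a rough illustration of the subject-placement guidance described above, the following Swift sketch maps an estimated subject distance to a guidance state. The SubjectGuidance names, the 0.3 m near limit (roughly the "closer than 1 foot" example above), and the 2.5 m far limit are illustrative assumptions, not values or APIs from this disclosure.

```swift
import Foundation

// Hypothetical guidance states for positioning a subject before portrait mode is enabled.
enum SubjectGuidance {
    case noSubject   // prompt the user to place a subject within the marked region
    case tooClose    // subject is nearer than the minimum supported distance
    case tooFar      // subject is farther than the maximum supported distance
    case ready       // subject is in range; portrait mode can be enabled
}

// Maps an estimated subject distance (in meters) to a guidance state.
// 0.3 m roughly matches the "closer than 1 foot" example; 2.5 m is an assumed far limit.
func guidance(forSubjectDistance distance: Double?) -> SubjectGuidance {
    guard let distance = distance else { return .noSubject }
    if distance < 0.3 { return .tooClose }
    if distance > 2.5 { return .tooFar }
    return .ready
}

print(guidance(forSubjectDistance: nil))    // noSubject
print(guidance(forSubjectDistance: 1.2))    // ready
```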
  • Upon detecting subject 816 within the general region of image representation 808 indicated by subject markers 812, electronic device 600 activates portrait mode. Upon activation of portrait mode, electronic device 600 adjusts image representation 808 by applying, based on a focal point within image representation 808 (e.g., the nose of subject 816), a simulated depth effect (e.g., a Bokeh effect, the simulated depth effect described above with respect to image representation 618) to objects within image representation 808 with the default f-number (e.g., 4.5).
  • image representation 808 includes light-emitting objects 818A, 818B, 818C, and 818D and non-light-emitting objects 820A and 820B.
  • the simulated depth effect is also applied to portions of subject 816 that do not correspond to the focal point (e.g., portions of subject 816 other than the nose of the subject).
  • While displaying image representation 808 with subject 816 detected, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 801 of depth effect affordance 810.
  • electronic device 600 displays (e.g., within a menu region of user interface 804 below image display region 806) a depth adjustment slider 822 (corresponding to depth adjustment slider 632 described above with reference to FIGS. 6A-6R).
  • depth adjustment slider 822 includes a plurality of tickmarks 824 corresponding to f-numbers, a needle 826 indicating the currently-selected tickmark (and thus the currently-selected f-number), and an f-number indicator 828 (e.g., located below or adjacent to the slider) indicating the value of the currently-selected f-number.
  • f-number indicator 828 indicates the default f-number value (e.g., of 4.5).
  • In some embodiments, when depth adjustment slider 822 is activated, in addition to f-number indicator 828, depth effect affordance 810 also displays the current f-number.
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 803 (e.g., a horizontal swipe gesture, a swipe-right gesture) on depth adjustment slider 822, thereby causing tickmarks 824 to horizontally slide relative to the affixed needle 826.
  • swipe gesture 803 causes depth adjustment slider 822 to slide such that a lower f-number (e.g., of 1.6) is set as the current f-number, as indicated by f-number indicator 828 (and, in some embodiments, also by depth effect affordance 810).
  • electronic device 600 adjusts image representation 808 to reflect the new depth-of-field value (e.g., of 1.6).
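  • A minimal sketch, assuming a simple tick-mark model, of how a horizontal swipe on a depth adjustment slider could translate into selecting a new f-number (e.g., moving from the default 4.5 toward 1.6 on a swipe right). The DepthAdjustmentSlider type, the tick values other than those named above, and the points-per-stop spacing are hypothetical.

```swift
import Foundation

// A toy model of the depth adjustment slider: tick marks slide past a fixed needle,
// and a horizontal swipe selects a new f-number. Tick values and spacing are illustrative.
struct DepthAdjustmentSlider {
    let fNumbers: [Double] = [1.4, 1.6, 2.0, 2.8, 4.5, 5.6, 8.7, 11.0, 14.0]
    let pointsPerStop: Double = 24.0   // assumed horizontal spacing between adjacent tick marks
    var selectedIndex: Int = 4         // start at the default f-number (4.5)

    var selectedFNumber: Double { fNumbers[selectedIndex] }

    // A swipe right (positive translation) moves toward lower f-numbers (stronger effect);
    // a swipe left moves toward higher f-numbers (weaker effect).
    mutating func applySwipe(translationX: Double) {
        let stops = Int((translationX / pointsPerStop).rounded())
        selectedIndex = min(max(selectedIndex - stops, 0), fNumbers.count - 1)
    }
}

var slider = DepthAdjustmentSlider()
slider.applySwipe(translationX: 72)    // swipe right by three stops
print(slider.selectedFNumber)          // 1.6 with these illustrative tick values
```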
  • light-emitting object 818A is more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5).
  • light-emitting objects 818B are more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5).
  • light-emitting objects 818C are more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5).
  • non-light-emitting object 820A is more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5).
  • non-light-emitting object 820B is more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5).
  • the degree of distortion (e.g., the degree of blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of distortion in the shape of the object relative to the focal point) of the objects differs based on the distance of each object to the focal point of image representation 808 (e.g., the nose of subject 816).
  • In some embodiments, each depth pixel (e.g., comprising a particular object) in image representation 808 defines the position in the viewpoint’s z-axis where its corresponding two-dimensional pixel is located, and each pixel is defined by a value (e.g., 0 - 255, where the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., camera) in the “three dimensional” scene); the degree of blurriness/sharpness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion is then dependent upon the distance in the z-axis direction (the value between 0 - 255).
  • In some embodiments, if image representation 808 is viewed as a two-dimensional x, y-plane with the focal point (e.g., the nose of subject 816) as the center (e.g., the origin) of the plane, the straight-line distance from the (x, y) point of the pixels constituting an object in image representation 808 to the center of the plane affects the degree of shape distortion of the object: the greater the distance of the pixels from the center (the focal point), the greater the degree of shape distortion, and the closer the distance of the pixels to the center, the more minimal the shape distortion.
  • the degree of distortion of object 818B-1 is greater than the degree of distortion of object 818B-2 (e.g., object 818B-1 is relatively blurrier, larger, brighter, more saturated, and/or more shape-distorted relative to the focal point than object 818B-2) because object 818B-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818B-2.
  • the degree of distortion of object 818C-1 is greater than the degree of distortion of object 818C-2 (e.g., object 818C-1 becomes relatively “blurrier” and more shape-distorted relative to the focal point than object 818C-2) because object 818C-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818C-2.
  • Differences in the degree of distortion based on the distance of an object to the focal point also apply to non-light-emitting objects (e.g., objects 820A and 820B) and, in some embodiments, to portions of subject 816 not corresponding to the focal point (e.g., the upper body of the subject, portions of the face and head of the subject surrounding the focal point).
  • the degree of distortion (e.g., the degree of blurriness, difference in size, the degree of brightness, the degree of saturation, and/or the degree of distortion in the shape of the object relative to the focal point) of the objects differs based on the type of the object— whether the object corresponds to a light-emitting object or a non-light-emitting object.
  • the resulting change in distortion is generally greater for light-emitting objects than for non-light-emitting objects for the same adjustment in depth-of-field.
  • the depth-of-field characteristics of the objects are adjusted continuously as depth adjustment slider 822 is navigated (e.g., from 4.5 in FIG. 8E to 1.6 in FIG. 8F).
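  • The bullets above describe distortion growing with depth separation from the focal plane, with planar distance from the focal point, with a smaller f-number, and with whether a region is light-emitting. The following Swift sketch combines those factors into a single per-pixel magnitude; the weights, the normalization constant, and the light-emitter boost factor are assumptions for illustration only.

```swift
import Foundation

// Depth value convention from the text: 0 = farthest from the viewpoint, 255 = nearest.
struct FocalPoint {
    let x: Double
    let y: Double
    let depth: Int
}

// Combines depth separation, planar distance from the focal point, the selected f-number,
// and a light-emitter boost into one distortion magnitude. Weights are assumptions.
func distortionMagnitude(pixelX: Double,
                         pixelY: Double,
                         pixelDepth: Int,
                         isLightEmitting: Bool,
                         focalPoint: FocalPoint,
                         fNumber: Double) -> Double {
    // Depth separation from the focal plane, normalized to 0...1.
    let depthSeparation = Double(abs(pixelDepth - focalPoint.depth)) / 255.0
    // Planar (x, y) distance from the focal point, normalized by an assumed frame diagonal.
    let planarDistance = hypot(pixelX - focalPoint.x, pixelY - focalPoint.y) / 1_000.0
    // Smaller f-numbers simulate a wider aperture, so the effect scales with 1 / f-number.
    let apertureFactor = 1.0 / fNumber
    // Light-emitting regions are distorted more strongly than non-light-emitting ones.
    let emitterBoost = isLightEmitting ? 1.5 : 1.0
    return (0.7 * depthSeparation + 0.3 * planarDistance) * apertureFactor * emitterBoost
}

// The same background highlight is distorted more at f/1.6 than at f/4.5.
let focal = FocalPoint(x: 500, y: 400, depth: 200)
print(distortionMagnitude(pixelX: 100, pixelY: 80, pixelDepth: 40,
                          isLightEmitting: true, focalPoint: focal, fNumber: 1.6))
print(distortionMagnitude(pixelX: 100, pixelY: 80, pixelDepth: 40,
                          isLightEmitting: true, focalPoint: focal, fNumber: 4.5))
```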
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 805 (e.g., a horizontal swipe gesture, a swipe-left gesture) on depth adjustment slider 822, thereby causing tickmarks 824 to horizontally slide in the opposite direction relative to the affixed needle 826.
  • swipe gesture 805 causes depth adjustment slider 822 to slide such that a higher f-number (e.g., of 8.7) is set as the current f-number, as indicated by f-number indicator 828 (and, in some embodiments, also by depth effect affordance 810).
  • electronic device 600 adjusts image representation 808 to reflect the new depth-of-field value (e.g., of 8.7). Specifically, because of the larger simulated depth-of-field value, light-emitting object 818A is less distorted (e.g., sharper, closer to an accurate representation of its real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number 1.6).
  • non-light-emitting object 820A is less distorted (e.g., sharper, closer to an accurate representation of its real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number 1.6).
  • non-light-emitting object 820B is less distorted (e.g., sharper, closer to an accurate representation of its real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number 1.6).
  • the degree of distortion (e.g., the degree of blurriness, the difference in size, the degree of brightness, the degree of saturation, and/or the degree of distortion in the shape of the object relative to the focal point) of the objects still differs based on the distance of each object to the focal point of image representation 808 (e.g., the nose of subject 816).
  • the degree of distortion of object 818B-1 is still greater than the degree of distortion of object 818B-2 (e.g., object 818B-1 is still relatively blurrier, larger, brighter, more saturated, and/or more shape-distorted relative to the focal point than object 818B-2) because object 818B-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818B-2.
  • the degree of distortion of object 818C-1 is still greater than the degree of distortion of object 818C-2 (e.g., object 818C-1 becomes relatively blurrier, larger, brighter, more saturated, and/or more shape-distorted relative to the focal point than object 818C-2) because object 818C-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818C-2.
  • FIGS. 8I-8M illustrate a plurality of circular objects 830 (which can be light-emitting objects or non-light-emitting objects) arranged in a five-by-five grid-like pattern with the focal point at center object 832.
  • FIGS. 8I-8M also illustrate a depth adjustment slider 834 corresponding to depth adjustment slider 822 described above with reference to FIGS. 8A-8H.
  • FIGS. 8I-8M are provided to further illustrate, in one embodiment, the distortion of objects under different f-number settings, where the degree of distortion differs based on a distance of an object from the focal point.
  • FIG. 8I illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 4.5 f-number. As shown in FIG. 8I, objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8J illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 2.8 f-number.
  • Objects 830 in FIG. 8J appear “larger” because, under a smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 830 in FIG. 8I.
  • objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8K illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 1.0 f-number.
  • Objects 830 in FIG. 8K appear even “larger” because, under an even smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 830 in FIG. 8J.
  • objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8L illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 7.6 f-number.
  • Objects 830 in FIG. 8L appear “smaller” than corresponding objects 830 in FIG. 8I because, under a larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 830 in FIG. 8I.
  • objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8M illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 14 f-number.
  • Objects 830 in FIG. 8M appear even “smaller” than corresponding objects 830 in FIG. 8L because, under an even larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 830 in FIG. 8L.
  • objects 830 in FIG. 8M are more of “true” circles than objects 830 in FIGS. 8I-8L. Still, as in FIGS. 8I-8L, in FIG. 8M, objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIGS. 8N-8R illustrate a plurality of circular objects 838 (which can be light-emitting objects or non-light-emitting objects) arranged in a five-by-five grid-like pattern with the focal point at center object 840 (similar to FIGS. 8I-8M).
  • FIGS. 8N-8R also illustrate depth adjustment slider 834 corresponding to depth adjustment slider 822 described above with reference to FIGS. 8A-8H.
  • FIGS. 8N-8R are provided to further illustrate, in another embodiment, the distortion of objects under different f-number settings, where the degree of distortion differs based on a distance of an object from the focal point.
  • FIG. 8N illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 4.5 f-number. As shown in FIG. 8N, objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8O illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 2.8 f-number.
  • Objects 838 in FIG. 8O appear “larger” because, under a smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 838 in FIG. 8N.
  • objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8P illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 1.0 f-number.
  • Objects 838 in FIG. 8P appear even “larger” because, under an even smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 838 in FIG. 8O.
  • objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8Q illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 7.6 f-number.
  • Objects 838 in FIG. 8Q appear “smaller” than corresponding objects 838 in FIG. 8N because, under a larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 838 in FIG. 8N.
  • objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIG. 8R illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 14 f-number.
  • Objects 838 in FIG. 8R appear even “smaller” than corresponding objects 838 in FIG. 8Q because, under an even larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 838 in FIG. 8Q.
  • objects 838 in FIG. 8R are more of “true” circles than objects 838 in FIGS. 8N-8Q.
  • objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
  • FIGS. 9A-9B are a flow diagram illustrating a method for managing user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments.
  • Method 900 is performed at a device (e.g., 100, 300, 500, 600) with a display and one or more input devices (e.g., a touch-sensitive surface of the display, a mechanical input device).
  • method 900 provides an intuitive way for managing user interfaces for simulated depth effects.
  • the method reduces the cognitive burden on a user for managing and navigating user interfaces for simulated depth effects, thereby creating a more efficient human-machine interface.
  • enabling a user to navigate user interfaces faster and more efficiently by providing easy management of user interfaces for simulating depth effects conserves power and increases the time between battery charges.
  • the electronic device receives (902), via the one or more input devices, a request to apply a simulated depth effect to a representation of image data (e.g., 808, a displayed image corresponding to the image data, a portrait image of a person/subject), wherein depth data for a subject within the representation of image data is available.
  • the representation of image data (e.g., 808) is a live-feed image currently being captured by one or more cameras of the electronic device.
  • the representation of image data is a previously-taken image stored in and retrieved from memory (of the electronic device or an external server).
  • the depth data of the image can be adjusted / manipulated to apply a depth effect to the representation of image data.
  • the image data includes at least two components: an RGB component that encodes the visual characteristics of a captured image, and depth data that encodes information about the relative spacing relationship of elements within the captured image (e.g., the depth data encodes that a user is in the foreground, and background elements, such as a tree positioned behind the user, are in the background).
  • the depth data is a depth map.
  • a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera).
  • each depth pixel defines the position in the viewpoint’s z-axis where its corresponding two-dimensional pixel is located.
  • a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255).
  • the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., camera) in the “three dimensional” scene.
  • a depth map represents the distance between an object in a scene and the plane of the viewpoint.
  • the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face).
  • the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
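  • A minimal Swift sketch of the depth-map convention described above (one value per pixel in 0-255, with 0 farthest from and 255 closest to the viewpoint). The DepthMap type and its method names are illustrative, not part of this disclosure.

```swift
import Foundation

// One value per pixel, row-major; 0 is the most distant point, 255 is nearest the viewpoint.
struct DepthMap {
    let width: Int
    let height: Int
    let values: [UInt8]

    // Depth value at (x, y), or nil if the coordinate is out of bounds.
    func depth(x: Int, y: Int) -> UInt8? {
        guard x >= 0, x < width, y >= 0, y < height else { return nil }
        return values[y * width + x]
    }

    // Normalized distance from the viewpoint: 0.0 = nearest, 1.0 = farthest.
    func normalizedDistance(x: Int, y: Int) -> Double? {
        guard let d = depth(x: x, y: y) else { return nil }
        return 1.0 - Double(d) / 255.0
    }
}

let map = DepthMap(width: 2, height: 1, values: [255, 0])
print(map.normalizedDistance(x: 0, y: 0) ?? -1)   // 0.0 (nearest)
print(map.normalizedDistance(x: 1, y: 0) ?? -1)   // 1.0 (farthest)
```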
  • the depth data has a second depth component (e.g., a second portion of depth data that encodes a spatial position of the background in the camera display region; a plurality of depth pixels that form a discrete portion of the depth map, such as a background), separate from the first depth component, the second depth aspect including the representation of the background in the camera display region.
  • the first depth aspect and second depth aspect are used to determine a spatial relationship between the subject in the camera display region and the background in the camera display region. This spatial relationship can be used to distinguish the subject from the background. This distinction can be exploited to, for example, apply different visual effects (e.g., visual effects having a depth component) to the subject and background.
  • all areas of the image data that do not correspond to the first depth component are adjusted based on different degrees of blurriness/sharpness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion in order to simulate a depth effect, such as a Bokeh effect.
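  • A sketch, under assumed values, of how a subject (first depth component) might be separated from the background (second depth component) so that only non-subject areas receive the simulated depth effect: pixels at or nearer than a depth threshold get zero blur weight, and background pixels get a weight that grows with their distance behind the subject. The threshold and weighting are illustrative assumptions.

```swift
import Foundation

// Pixels at or above the threshold are treated as the subject (no blur);
// background pixels are weighted by how far behind the subject they lie.
func backgroundBlurWeights(depthMap: [UInt8],
                           subjectDepthThreshold: UInt8 = 180) -> [Double] {
    depthMap.map { d in
        d >= subjectDepthThreshold
            ? 0.0
            : Double(subjectDepthThreshold - d) / Double(subjectDepthThreshold)
    }
}

// Nearer values (the subject) get weight 0; the farthest background gets weight 1.
print(backgroundBlurWeights(depthMap: [255, 200, 180, 120, 40, 0]))
```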
  • the request corresponds to an adjustment (e.g., a sliding gesture in a horizontal or vertical direction) of an adjustable slider (e.g., 822) associated with modifying/adjusting the simulated depth effect applied to / being applied to the representation of image data (e.g., 808).
  • Applying a simulated depth effect to a representation of image data using an adjustable slider enhances visual feedback by enabling the user to quickly and easily view adjustments being made by the user.
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the simulated depth effect is “simulated” in that the effect is (artificially) generated based on a manipulation of the underlying image data to create and apply the effect to the corresponding representation of image data (e.g., 808) (e.g., as opposed to being a “natural” effect that is based on underlying data as originally captured via one or more cameras).
  • receiving, via the one or more input devices, the request to apply the simulated depth effect to the representation of image data comprises detecting, via the one or more input devices, one or more inputs selecting a value of an image distortion parameter, wherein distorting (a portion of) the representation of image data is based on (and is responsive to) one or more user inputs selecting a value of an image distortion parameter (e.g., via a movement of the adjustable slider for controlling the parameter).
  • the adjustable slider is adjusted to distort (e.g., apply a simulated depth effect to) the representation of image data, as described above with reference to FIGS. 6A-6T.
  • Providing an adjustable slider to be used to distort the representation of image data enhances user convenience by enabling the user to easily and efficiently make adjustments to the displayed representation of image data.
  • Providing additional control options and reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • selecting a different value for the image distortion parameter causes a first change to the first portion of the representation of the image data and causes a second change to the second portion of the representation of the image data, wherein the first change is different from the second change and the first change and the second change both include the same type of change (e.g., an increase or decrease in blurriness, size, brightness, saturation, and/or shape-distortion).
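  • Illustrative only: the sketch below shows how a single change to the image distortion parameter (the f-number) can produce the same type of change (a blur-radius increase) for two portions while the magnitudes differ, because each portion responds according to its own depth/position-derived weight. The function and constants are assumptions, not the disclosed implementation.

```swift
import Foundation

// portionWeight encodes how strongly a given portion responds (e.g., derived from its
// depth and distance from the focal point, in 0...1); a smaller f-number increases blur.
func blurRadius(portionWeight: Double, fNumber: Double, maxRadius: Double = 20.0) -> Double {
    maxRadius * portionWeight / fNumber
}

let firstPortionWeight = 0.4
let secondPortionWeight = 0.9

// Moving the slider from f/4.5 to f/1.6 increases blur for both portions (same type of
// change), but the second portion changes by a larger amount than the first.
print(blurRadius(portionWeight: firstPortionWeight, fNumber: 4.5),
      blurRadius(portionWeight: firstPortionWeight, fNumber: 1.6))
print(blurRadius(portionWeight: secondPortionWeight, fNumber: 4.5),
      blurRadius(portionWeight: secondPortionWeight, fNumber: 1.6))
```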
  • In response to receiving (904) the request to apply the simulated depth effect to the representation of image data (e.g., 808), the electronic device (e.g., 600) displays, on the display (e.g., 602), the representation of image data with the simulated depth effect. Displaying the representation of image data with the simulated depth effect in response to receiving the request to apply the simulated depth effect to the representation of image data enables a user to quickly and easily view and respond to the adjustments being made to the representation of image data.
  • Providing convenient control options and reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • Displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect includes distorting (906) a first portion of the representation of image data that has a first depth in a first manner (e.g., a first particular blurriness/sharpness, a first particular size, a first particular brightness, a first particular saturation, and/or a first particular shape), wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data (e.g., a center of a field of view of a camera or a point of focus of the camera).
  • Allowing a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and the operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion).
  • Displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect also includes distorting (908) a second portion of the representation of image data that has the first depth in a second manner (e.g., a second particular blurriness/sharpness, a second particular size, a second particular brightness, a second particular saturation, and/or a second particular shape) that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
  • the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion).
  • This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further includes distorting (910) a third portion of the representation of image data that is a same distance from the predefined portion as the first portion and has a second depth that is different from the first depth in the first manner with a magnitude (e.g., of blurriness/sharpness) determined based on the second depth (e.g., the depth of the third portion).
  • Allowing a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and the operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion). This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further includes distorting (912) a fourth portion of the representation of image data that is a same distance from the predefined portion as the second portion and has the second depth in the second manner with a magnitude (e.g., of blurriness/sharpness) determined based on the second depth (e.g., the depth of the fourth portion).
  • Allowing a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and the operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion).
  • displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further includes distorting (914) one or more portions of the representation of image data that are a same distance from the predefined portion (e.g., a reference point or focus point within the representation of image data) as the first portion and have the first depth, in the first manner.
  • portions of the representation of image data that have the same depth and are the same distance away from the predefined portion of the representation of image data are distorted in the same way.
  • Allowing a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and the operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion).
  • distorting the first portion of the representation of image data (e.g., 808) in the first manner comprises distorting the first portion based on (e.g., by applying) a first distortion shape (e.g., a circular shape or a lemon/oval-type shape).
  • distorting the second portion of the representation of image data in the second manner comprises distorting the second portion based on (e.g., by applying) a second distortion shape (e.g., a more circular shape or a more lemon/oval-type shape) different from the first distortion shape.
  • one or more objects (e.g., light-emitting objects) within the second portion are shape-distorted to a more lemon/oval shape than one or more objects (e.g., light-emitting objects) within the first portion.
  • distorting the first portion of the representation of image data (e.g., 808) in the first manner comprises distorting the first portion by a first degree of distortion (e.g., a degree of distortion of a shape of one or more objects within the first portion).
  • distorting the second portion of the representation of image data in the second manner comprises distorting the second portion by a second degree of distortion (e.g., a degree of distortion of a shape of one or more objects within the second portion) that is greater than the first degree of distortion, wherein the second portion is at a greater distance (farther) from the predefined portion (e.g., a reference point or focus point within the representation of image data) than the first portion.
  • objects in the periphery of the representation of image data are distorted to be more lemon/oval in shape, whereas objects closer to the predefined portion (e.g., a center portion, a focus portion) are less distorted.
  • the degree of distortion changes (e.g., increases or decreases) gradually as the distance from the predefined portion changes.
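  • A sketch of the shape-distortion behavior described above, assuming the distortion kernel is modeled as an ellipse whose minor axis shrinks with radial distance from the predefined portion, so that peripheral objects take on the more lemon/oval appearance while central objects stay nearly circular. The DistortionShape type and the 50% compression limit are illustrative assumptions.

```swift
import Foundation

// The bokeh kernel is modeled as an ellipse; toward the periphery the minor axis shrinks,
// producing the oval / "lemon" appearance, while it stays circular near the focal point.
struct DistortionShape {
    let majorAxis: Double
    let minorAxis: Double
    var isNearlyCircular: Bool { minorAxis / majorAxis > 0.9 }
}

func distortionShape(radialDistance: Double,   // 0.0 at the focal point, 1.0 at the far corner
                     baseRadius: Double) -> DistortionShape {
    let clamped = min(max(radialDistance, 0.0), 1.0)
    let minor = baseRadius * (1.0 - 0.5 * clamped)   // up to 50% compression at the edges (assumed)
    return DistortionShape(majorAxis: baseRadius, minorAxis: minor)
}

print(distortionShape(radialDistance: 0.0, baseRadius: 10).isNearlyCircular)   // true
print(distortionShape(radialDistance: 1.0, baseRadius: 10).isNearlyCircular)   // false
```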
  • distorting the first portion in the first manner comprises blurring (e.g., asymmetrically blurring / changing the sharpness of) the first portion by a first magnitude.
  • distorting the second portion in the second manner comprises blurring (e.g., asymmetrically blurring / changing the sharpness of) the second portion by a second magnitude.
  • the first magnitude is greater than the second magnitude.
  • the second magnitude is greater than the first magnitude.
  • In some embodiments, prior to receiving the request to apply the simulated depth effect to the representation of image data (e.g., 808), the electronic device (e.g., 600) displays, on the display (e.g., 602), the representation of image data.
  • While displaying the representation of image data, the electronic device (e.g., 600) detects, using the image data (e.g., via an analysis of the image data and/or based on a user input identifying that the region of the representation of image data includes a subject, such as a tap input in a live preview of camera data), a presence of the subject (e.g., a person, at least a portion of the person, such as the face of a person or a face and upper body of a person) within the representation of image data.
  • displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further comprises distorting the first portion of the image and the second portion of the image without distorting (916) a portion of the representation of image data corresponding to (a center portion/region of) the subject.
  • the portion of the representation of image data corresponding to the subject is distorted less than the first portion of the image and the second portion of the image.
  • distorting the first portion of the representation of image data includes distorting the first portion in accordance with a determination that the first portion does not correspond to (a center portion/region of) the subject.
  • distorting the second portion of the representation of image data includes distorting the second portion in accordance with a determination that the second portion does not correspond to (a center portion/region of) the subject.
  • In some embodiments, in response to receiving the request to apply the simulated depth effect to the representation of image data (e.g., 808), the electronic device (e.g., 600) identifies (918), based on the image data (e.g., via an analysis of the image data), one or more objects within the representation of image data that are associated with light-emitting objects (e.g., 818A, 818B, 818C, 818D) (e.g., as opposed to those that are not associated with light-emitting objects).
  • displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further comprises changing (920) an appearance of the one or more portions of the representation of image data that are associated with (e.g., are identified as) light-emitting objects (e.g., 818A, 818B, 818C, 818D) in a third manner relative to one or more portions of the representation of image data that are not associated with (e.g., are not identified as) light-emitting objects (e.g., 820A, 820B).
  • the third manner involves blurring/sharpening the objects by a greater magnitude compared to the fourth manner.
  • the third manner involves distorting the shape of the objects by a greater degree compared to the fourth manner.
  • changing the appearance of objects in the representation of image data (e.g., 808) that are associated with light-emitting objects (e.g., 818A, 818B, 818C, 818D) in the third manner includes one or more of: increasing (922) a brightness of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects, increasing (924) a saturation of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects, and increasing (926) a size of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects (e.g., 820A, 820B).
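  • A rough sketch of one way portions associated with light-emitting objects could be identified and emphasized: regions above a luminance threshold are treated as light-emitting and their brightness and saturation are boosted relative to other regions (the size increase would be handled by the blur kernel and is omitted here). The HSBPixel type, threshold, and boost factors are assumptions, not values from this disclosure.

```swift
import Foundation

struct HSBPixel {
    var hue: Double
    var saturation: Double
    var brightness: Double
}

// Regions whose brightness exceeds the threshold are treated as light-emitting and get
// brightness/saturation boosts relative to the rest of the image.
func applyLightEmitterBoost(to pixels: [HSBPixel],
                            luminanceThreshold: Double = 0.85,
                            brightnessBoost: Double = 1.2,
                            saturationBoost: Double = 1.3) -> [HSBPixel] {
    pixels.map { (p: HSBPixel) -> HSBPixel in
        guard p.brightness >= luminanceThreshold else { return p }   // not light-emitting
        var boosted = p
        boosted.brightness = min(1.0, p.brightness * brightnessBoost)
        boosted.saturation = min(1.0, p.saturation * saturationBoost)
        return boosted
    }
}

let result = applyLightEmitterBoost(to: [
    HSBPixel(hue: 0.1, saturation: 0.5, brightness: 0.95),   // boosted
    HSBPixel(hue: 0.6, saturation: 0.5, brightness: 0.40)    // unchanged
])
print(result)
```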
  • the electronic device detects (928), via the one or more input devices, one or more inputs changing a value of an image distortion parameter, wherein distorting (a portion of) the representation of image data (e.g., 808) is based on (and is responsive to) one or more user inputs selecting a value of an image distortion parameter (e.g., via a movement of the adjustable slider for controlling the parameter).
  • the adjustable slider (e.g., 822) is adjusted to distort (e.g., apply a simulated depth effect to) the representation of image data.
  • providing an adjustable slider to distort the representation of image data enables a user to quickly and easily provide one or more inputs to change a value of an image distortion parameter to distort the representation of image data.
  • Providing additional control options and reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • changing the value of the image distortion parameter changes (930) the magnitude of change of the appearance of one or more portions of the representation of image data that are associated with light-emitting objects (e.g., 818A, 818B, 818C, 818D) relative to other portions of the representation of image data that are not associated with light-emitting objects (e.g., 820A, 820B).
  • method 700 optionally includes one or more of the characteristics of the various methods described above with reference to method 900.
  • the depth adjustment slider described in method 700 can be used to apply the simulated depth effect to objects within an image representation.
  • method 1100 optionally includes one or more of the characteristics of the various methods described above with reference to method 900.
  • the notification concerning detected interference, as described in method 1100, can be associated with detected magnetic interference that can interfere with one or more depth sensors used for simulating depth effects. For brevity, these details are not repeated below.
  • FIGS. 10A-10F illustrate exemplary user interfaces for indicating an interference to adjusting simulated image effects (e.g., simulated depth effects, such as a Bokeh effect), in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below, including the processes in FIG. 11.
  • FIG. 10A illustrates a rear view of electronic device 600.
  • electronic device 600 includes one or more rear-facing cameras 608 and one or more rear depth camera sensors 1002 (e.g., similar to depth camera sensors 175).
  • one or more rear-facing cameras 608 are integrated with one or more rear depth camera sensors 1002.
  • FIG. 10B illustrates a front view of electronic device 600 with display 602.
  • electronic device 600 includes one or more front-facing cameras 606 and one or more front depth camera sensors 1004.
  • one or more front-facing cameras 606 are integrated with one or more front depth camera sensors 1004.
  • electronic device 600 displays, on display 602, an affordance 1006 for launching the image capture application. Further in FIG. 10B, while displaying affordance 1006, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 1001 of affordance 1006.
  • In response to detecting activation 1001 of affordance 1006 for launching the image capture application, electronic device 600 displays, on display 602, a user interface 1008 of the image capture application (e.g., corresponding to user interface 614 and user interface 804).
  • Upon (or prior to / in response to) launching the image capture application, electronic device 600 does not detect an interference (e.g., a magnetic interference or other external interference, such as from an accessory of the device) that may impede or hinder the operation of one or more sensors (e.g., one or more depth sensors 1002 and 1004 of the device) that are used to perform a simulated image effect function of the image capture application (e.g., the simulated depth effect described above with reference to FIGS. 6A-6T and 8A-8M). As such, electronic device 600 does not display a notification indicative of the presence of an interference.
  • FIG. 10D illustrates a rear view of electronic device 600, where the device is at least partially covered by a protective case 1010 (e.g., a smartphone case).
  • Protective case 1010 includes a magnetic component 1012 (e.g., for securing the case and device to a holder, such as a car mount; a magnetic component that is part of an external battery case) detectable by one or more sensors of electronic device 600.
  • FIG. 10E illustrates a front view of electronic device 600 at least partially covered by protective case 1010.
  • electronic device 600 displays, on display 602, affordance 1006 for launching the image capture application.
  • electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 1003 of affordance 1006.
  • In response to detecting activation 1003 of affordance 1006 for launching the image capture application, electronic device 600 displays, on display 602, user interface 1008 of the image capture application (e.g., corresponding to user interface 614 and user interface 804).
  • electronic device 600 detects an interference (e.g., a magnetic interference) from magnetic component 1012 of protective case 1010.
  • In response to detecting the interference, electronic device 600 displays (e.g., over user interface 1008 of the image capture application) a notification 1014 indicating that an interference has been detected and that, because of the interference, one or more simulated image effects features (e.g., including the simulated depth effect feature described above with reference to FIGS. 6A-6T and 8A-8M) may be affected by the detected interference.
  • notification 1014 also includes an affordance 1016 for closing the notification and continuing with the use of the simulated image effects features despite the presence of the interference.
  • electronic device 600 displays notification 1014 after having previously detected the presence of the interference (e.g., from magnetic component 1012 of protective case 1010) in a predetermined number of instances (e.g., after having launched the image capture application and detected the interference 3, 5, or 7 times). Thus, in some embodiments, if there were no previous instances of detection of the interference, electronic device 600 forgoes displaying notification 1014 upon launching the image capture application despite having detected the interference from magnetic component 1012 of protective case 1010.
  • electronic device 600 displays a new notification 1014 after detecting the presence of the interference (e.g., from magnetic component 1012 of protective case 1010) in a greater number of instances than when notification 1014 was previously displayed. For example, if previous notification 1014 was displayed after having detected the interference upon 3 previous launches of the image capture application, electronic device 600 forgoes displaying new notification 1014 until having detected the interference in 5 previous launches of the image capture application.
  • In some embodiments, if notification 1014 has already been presented on the device a predetermined number of times, electronic device 600 forgoes presenting the notification despite subsequent instances of detection of the interference.
  • In some embodiments, in response to detecting an activation of affordance 1016, electronic device 600 changes a mode of one or more simulated image effects (e.g., including the simulated depth effect) such that one or more features of an image effect becomes unavailable or stripped down for use.
  • FIG. 11 is a flow diagram illustrating a method for managing user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments.
  • Method 1100 is performed at a device (e.g., 100, 300, 500, 600) with a display and one or more sensors (e.g., one or more cameras, an interference detector capable of detecting an interference, such as magnetic interference, originating from a source that is external to the electronic device), including one or more cameras.
  • Some operations in method 1100 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • method 1100 provides an intuitive way for managing user interfaces for simulated depth effects.
  • the method reduces the cognitive burden on a user for managing and navigating user interfaces for simulated depth effects, thereby creating a more efficient human-machine interface.
  • enabling a user to navigate user interfaces faster and more efficiently by providing easy management of user interfaces for simulating depth effects conserves power and increases the time between battery charges.
  • While displaying, on the display (e.g., 602), a user interface of a camera application (e.g., 1008), the electronic device (e.g., 600) detects (1102), via the one or more sensors, external interference (e.g., from 1012) that will impair operation of a respective function of the one or more cameras (e.g., 606, 608) (e.g., magnetic interference; an interference that affects one or more camera-related functions of the electronic device (e.g., one or more depth effect-related functions)) (e.g., from an accessory attached to, affixed to, covering, or placed near the electronic device, such as a protective case of the device or an external attachment on the device).
  • Automatically detecting the external interference that will impair operation of a respective function of the one or more cameras reduces the number of inputs required from the user to control the device by enabling the user to bypass having to manually check whether there are external interferences affecting one or more functions of the device. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • automatically detecting the external interference that will impair operation of a respective function of the one or more cameras and notifying the user of the detection provides the user with the option to correct the issue while still allowing the device to continue to operate at a reduced level of operation.
  • This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the respective function is (1104) a focus function of the one or more cameras (e.g., 606, 608) of the electronic device (e.g., 600).
  • the interference is (1106) magnetic interference (e.g., from 1012).
  • the interference is (1108) from (e.g., is caused by or is detected because of) an accessory (e.g., 1010) of the electronic device (e.g., 600) (e.g., a protective outer case or cover (e.g., a case or cover that incorporates a battery) for the electronic device, a magnetic sticker or attachment piece affixed to / attached to the electronic device).
  • detecting the external interference (e.g., from 1012) that will impair the operation of the respective function of the one or more cameras (e.g., 606, 608) includes detecting the external interference upon displaying a user interface (e.g., 1008) for the camera application (e.g., in response to a user request to display a user interface for the camera application) on the electronic device.
  • the electronic device (e.g., 600) detects for the external interference that will impair the operation of the respective function of the one or more cameras only when the user interface for the camera application is displayed, and does not detect for the external interference after the user interface for the camera application has been displayed or when the user interface for the camera application is not displayed on the electronic device.
  • Detecting for the external interference only when the user interface for the camera application is displayed, and not detecting for the external interference after the user interface for the camera application has been displayed or when the user interface for the camera application is not displayed reduces power consumption by detecting for the external interference when the functionality that may be affected by the external interference may be used on the device. Reducing power consumption enhances the operability of the device by improving the battery life of the device.
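  • A sketch of gating the interference check on camera-UI visibility, as described above, so the check runs only when the camera user interface is shown. The InterferenceMonitor type and the injected sensor-read closure are hypothetical; no real sensor API is implied.

```swift
import Foundation

// The injected closure stands in for whatever sensor read reports magnetic interference;
// no real sensor API is assumed here.
final class InterferenceMonitor {
    private let readInterference: () -> Bool
    private(set) var isCameraUIVisible = false

    init(readInterference: @escaping () -> Bool) {
        self.readInterference = readInterference
    }

    // The external-interference check runs only at the moment the camera UI appears.
    func cameraUIDidAppear() -> Bool {
        isCameraUIVisible = true
        return readInterference()
    }

    func cameraUIDidDisappear() {
        isCameraUIVisible = false   // no further checks until the UI is shown again
    }
}

let monitor = InterferenceMonitor(readInterference: { true })
print(monitor.cameraUIDidAppear())   // true: interference detected on this launch
```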
  • In response to detecting (1110) the interference (e.g., from 1012) external to the electronic device (e.g., 600), in accordance with a determination that a first criteria has been satisfied (e.g., including the current occurrence, at least a predetermined number of previous occurrences of the interference has been detected, such as occurrences detected when the camera application was previously launched on the electronic device), the electronic device displays (1112), on the display (e.g., 602), a notification (e.g., 1014) indicating that an operation mode (e.g., a depth effect mode) of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras (e.g., 606, 608).
  • Displaying a notification indicating that an operation mode (e.g., a depth effect mode) of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras improves visual feedback by enabling the user to quickly and easily recognize that the device has changed an operation mode (e.g., a depth effect mode) of the one or more cameras to reduce an impact of the external interference.
  • Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • In response to detecting (1110) the interference external to the electronic device (e.g., 600), in accordance with a determination that the first criteria has not been satisfied (e.g., including the current occurrence, fewer than the predetermined number of previous occurrences of the interference has been detected), the electronic device (e.g., 600) forgoes displaying (1120), on the display (e.g., 602), the notification (e.g., 1014) indicating that the operation mode (e.g., a depth effect mode) of the one or more cameras (e.g., 606, 608) has been changed.
  • Forgoing displaying the notification if fewer than the predetermined number of previous occurrences of the interference has been detected improves device functionality by forgoing providing notifications for one-off events of interference detection (as opposed to persistent interference detection from, for example, an accessory of the device). Forgoing providing unnecessary notifications enhances user convenience and the operability of the device and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • the first criteria includes (1114) a requirement that is met when a first predetermined amount (e.g., 5, 7, 11) of (discrete instances of) occurrences of detecting the external interference (e.g., from 1012) by the electronic device (e.g., 600) has been reached (a sketch of this notification-gating logic follows this list).
  • the predetermined number of discrete detections of the external interference is required to trigger display of the notification.
  • a discrete occurrence of detection of the external interference occurs when the user attempts to use the camera application in a manner that would make use of the respective function of the one or more cameras and the device checks for external interference to determine whether the device is able to use the respective function of the one or more cameras and determines that the external interference is present.
  • the device checks for the external interference at predetermined intervals (e.g., once per hour, once per day, the first time each day that the camera application is used).
  • the first predetermined number is (1116) dependent on (e.g., changes based on) the number of times the notification (e.g., 1014) has previously been displayed on the electronic device (e.g., 600).
  • the first predetermined number of detections of the external interference required to trigger the notification progressively increases based on the number of notifications that have already been displayed by the electronic device.
  • a particular number (e.g., 3) of discrete detections of the external interference is required to trigger display of the first notification, a larger number (e.g., 5) is required to trigger display of the second notification, and a yet greater number (e.g., 7) of discrete detections of the external interference is required to trigger display of the third notification.
  • Enhancing user convenience enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
  • displaying, on the display (e.g., 602), the notification (e.g., 1014) includes displaying the notification in accordance with a determination that less than a second predetermined number of the notifications has previously been displayed on the electronic device (e.g., 600). In some embodiments, if at least the second predetermined number of notifications has previously been displayed on the electronic device, the electronic device forgoes displaying the notification (regardless of whether the first criteria has been satisfied).
  • the change (1118) to the operation mode of the one or more cameras to reduce the impact of the external interference (e.g., from 1012) on the respective function of the one or more cameras (e.g., 606, 608) includes reducing (or lowering, diminishing) the responsiveness of one or more functions (e.g., simulated depth effect-related functions, optical image stabilization, autofocus, and/or operations that require precise movements of mechanical components that can be adversely affected by the presence of strong magnetic fields in the proximity of the mechanical components) of the one or more cameras (or disabling one or more of the functions altogether), wherein the one or more functions correspond to functions that cannot be reliably executed by the one or more cameras while the external interference is being detected by the electronic device.
  • method 700 optionally includes one or more of the characteristics of the various methods described above with reference to method 1100.
  • adjusting a simulated depth effect using a depth adjustment slider, as described in method 700, can be affected by magnetic interference, which can impede the operation of one or more depth sensors used for simulating depth effects.
  • method 900 optionally includes one or more of the characteristics of the various methods described above with reference to method 1100.
  • applying a simulated depth effect to objects within an image representation, as described in method 900, can be affected by magnetic interference, which can impede the operation of one or more depth sensors used for simulating depth effects.
  • magnetic interference can impede the operation of one or more depth sensors used for simulating depth effects.
  • this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person.
  • personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user’s health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
  • the present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users.
  • the personal information data can be used to recognize a person or subject within a captured image or photo. Accordingly, use of such personal information data enables users to more easily recognize the content of a captured image or photo and to organize such captured images or photos.
  • other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user’s general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
  • the present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices.
  • such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure.
  • Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes.
  • Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users.
  • policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA).
  • the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data.
  • the present technology can be configured to allow users to select to“opt in” or“opt out” of participation in the collection of personal information data during registration for services or anytime thereafter.
  • the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
  • personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed.
  • data de-identification can be used to protect a user’s privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
  • the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, images or photos can be organized based on non-personal information data or a bare minimum amount of personal information or publicly available information, such as the date and time associated with the image or photo.
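The notification-gating behavior described in the items above (a count of discrete interference detections that must be reached, a requirement that grows as notifications accumulate, and a cap on how many notifications are ever shown) can be summarized in a short sketch. The Swift code below is illustrative only: the type name, the specific thresholds (3, 5, 7), the cap of three notifications, and the resetting of the count after each notification are assumptions rather than details taken from the disclosure.

```swift
/// Minimal sketch of the notification-gating rules, with hypothetical names
/// and illustrative numbers; not the disclosed implementation.
struct InterferenceNotificationPolicy {
    private(set) var occurrenceCount = 0     // discrete detections of external interference
    private(set) var notificationsShown = 0  // notifications previously displayed on the device
    let maxNotifications = 3                 // assumed "second predetermined number"

    // Detections required before the next notification; the requirement grows
    // with each notification already shown (e.g., 3, then 5, then 7).
    private var requiredOccurrences: Int {
        [3, 5, 7][min(notificationsShown, 2)]
    }

    /// Records one discrete detection (made only while the camera application's
    /// user interface is displayed) and reports whether the notification that an
    /// operation mode has been changed should be displayed for this occurrence.
    mutating func recordDetection() -> Bool {
        occurrenceCount += 1
        guard notificationsShown < maxNotifications,        // cap reached: always forgo
              occurrenceCount >= requiredOccurrences else {  // first criteria not satisfied
            return false
        }
        notificationsShown += 1
        occurrenceCount = 0  // assumption: counting restarts after each notification
        return true
    }
}
```

A caller would invoke recordDetection() each time the device checks for, and finds, external interference while the camera user interface is displayed, and would show the notification only when the call returns true.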

Abstract

The present disclosure generally relates to user interfaces for adjusting simulated image effects. In some embodiments, user interfaces for adjusting a simulated depth effect are described. In some embodiments, user interfaces for displaying adjustments to a simulated depth effect are described. In some embodiments, user interfaces for indicating an interference to adjusting simulated image effects are described.

Description

USER INTERFACES FOR SIMULATED DEPTH EFFECTS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application Serial
No. 62/729,926, entitled“USER INTERFACES FOR SIMULATED DEPTH EFFECTS,” filed September 11, 2018, to U.S. Patent Application Serial No. 16/144,629, entitled“USER
INTERFACES FOR SIMULATED DEPTH EFFECTS,” filed September 27, 2018, and to Danish Application Serial No. PA201870623, entitled“USER INTERFACES FOR
SIMULATED DEPTH EFFECTS,” filed September 24, 2018. The contents of each of these applications are hereby incorporated by reference in their entireties.
FIELD
[0002] The present disclosure relates generally to computer user interfaces, and more specifically to techniques for managing user interfaces for simulated depth effects.
BACKGROUND
[0003] At present, a user cannot capture an image or photo with precise depth-of-field properties without the aid of a bulky camera. Furthermore, a user cannot quickly and easily make precise adjustments to depth-of-field properties of a stored image or photo.
BRIEF SUMMARY
[0004] Some techniques for simulating depth effects using electronic devices, however, are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes.
Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
[0005] Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for simulated depth effects. Such methods and interfaces optionally complement or replace other methods for simulated depth effects. Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges. Such methods and interfaces also enable easy application and editing of applied depth effects using only the electronic device without the aid of another device, thereby enhancing user efficiency and convenience.
[0006] In accordance with some embodiments, a method performed at an electronic device with a display and one or more input devices is described. The method comprises: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently-selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input directed to the adjustable slider: moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
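As a concrete illustration of the flow in the preceding paragraph, the following SwiftUI-style sketch shows a preview whose simulated depth effect is driven by a slider of selectable values. All identifiers, the f-number values, and the blur-based stand-in for rendering the depth effect are assumptions made for illustration; none of them are taken from the disclosure.

```swift
import SwiftUI

/// Illustrative sketch of slider-driven adjustment of a simulated depth effect.
/// Names and values are hypothetical, not from the disclosure.
struct DepthAdjustmentView: View {
    // Selectable simulated depth (f-number) values; these act as the option indicators.
    let selectableValues: [Double] = [1.4, 1.8, 2.8, 4.5, 7.1, 16.0]
    @State private var selectedIndex = 2       // currently-selected simulated depth effect value
    @State private var sliderVisible = false   // slider is revealed by a first input

    var body: some View {
        VStack {
            ImagePreview(simulatedAperture: selectableValues[selectedIndex])
                .onTapGesture { sliderVisible = true }  // first input reveals the adjustable slider
            if sliderVisible {
                // Moving the slider selects a second value, and the preview's
                // appearance changes in accordance with that value.
                Slider(
                    value: Binding(
                        get: { Double(selectedIndex) },
                        set: { selectedIndex = Int($0.rounded()) }
                    ),
                    in: 0...Double(selectableValues.count - 1),
                    step: 1
                )
            }
        }
    }
}

/// Placeholder preview that blurs more strongly for smaller f-numbers,
/// standing in for a true simulated depth-of-field rendering.
struct ImagePreview: View {
    let simulatedAperture: Double
    var body: some View {
        Image(systemName: "person.crop.square")
            .resizable()
            .scaledToFit()
            .blur(radius: CGFloat(16.0 / simulatedAperture))
    }
}
```

In this sketch the slider's discrete steps stand in for the plurality of option indicators and its thumb stands in for the selection indicator.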
[0007] In accordance with some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently- selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input directed to the adjustable slider: moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
[0008] In accordance with some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently- selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input directed to the adjustable slider: moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
[0009] In accordance with some embodiments, an electronic device is described. The electronic device comprises a display, one or more input devices, one or more processors, and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, on the display, a representation of image data; while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently-selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and in response to detecting the input directed to the adjustable slider: moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
[0010] In accordance with some embodiments, an electronic device is described. The electronic device comprises a display; one or more input devices; means for displaying, on the display, a representation of image data; means, while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, for detecting, via the one or more input devices, a first input; and means, in response to detecting the first input, for displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes: a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and a selection indicator indicating that the first value is a currently- selected simulated depth effect value; means, while displaying the adjustable slider, for detecting, via the one or more input devices, an input directed to the adjustable slider; and means, in response to detecting the input directed to the adjustable slider, for: moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
[0011] In accordance with some embodiments, a method performed at an electronic device with a display and one or more input devices is described. The method comprises: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
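The distinction drawn in this paragraph, in which two portions at the same depth can be distorted in different manners because the distortion also depends on a portion's distance from a predefined portion of the representation, can be illustrated with a simple weighting function. The formula below is a hypothetical stand-in rather than the disclosed algorithm; treating the predefined portion as the focal center and the particular scaling constants are assumptions.

```swift
import CoreGraphics

/// Hypothetical per-portion blur weighting: distortion grows both with distance
/// from the plane of focus (depth) and with distance from a predefined portion
/// of the representation, so equal depths need not produce equal distortion.
struct ImagePortion {
    let position: CGPoint  // location of the portion within the representation
    let depth: Double      // depth value for the portion from the depth data
}

func blurRadius(for portion: ImagePortion,
                focusDepth: Double,
                predefinedPortion: CGPoint,
                simulatedAperture: Double) -> Double {
    // Depth term: farther from the plane of focus means more blur.
    let depthTerm = abs(portion.depth - focusDepth)
    // Distance term: farther from the predefined portion means more blur, so two
    // portions with the same depth can still be distorted in different manners.
    let dx = Double(portion.position.x - predefinedPortion.x)
    let dy = Double(portion.position.y - predefinedPortion.y)
    let distanceTerm = (dx * dx + dy * dy).squareRoot() / 1_000.0  // illustrative scaling
    // Smaller f-numbers (wider simulated apertures) exaggerate both terms.
    return (depthTerm + distanceTerm) * (16.0 / simulatedAperture)
}
```

Under this weighting, a background object near the center of the frame would receive less blur than an equally deep object near the edge of the frame.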
[0012] In accordance with some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
[0013] In accordance with some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
[0014] In accordance with some embodiments, an electronic device is described. The electronic device comprises a display, one or more input devices, one or more processors, and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and in response to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
[0015] In accordance with some embodiments, an electronic device is described. The electronic device comprises a display; one or more input devices; means for receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and means, in response to receiving the request to apply the simulated depth effect to the representation of image data, for displaying, on the display, the representation of image data with the simulated depth effect, including: distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
[0016] In accordance with some embodiments, a method performed at an electronic device with a display and one or more sensors, including one or more cameras, is described. The method comprises: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
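One way to picture the changed operation mode referenced above is as a set of camera functions that are selectively reduced or disabled while the external interference persists. The type and property names below are hypothetical, and the choice to disable the listed functions outright (rather than merely reduce their responsiveness) is an assumption of this sketch.

```swift
/// Hypothetical sketch of an operation-mode change that reduces the impact of
/// external (e.g., magnetic) interference on camera functions.
struct CameraOperationMode {
    var opticalImageStabilization = true
    var continuousAutofocus = true
    var simulatedDepthEffect = true

    /// Returns a mode in which functions that cannot be executed reliably
    /// while the interference is detected are turned off.
    func reducedForExternalInterference() -> CameraOperationMode {
        var reduced = self
        reduced.opticalImageStabilization = false
        reduced.continuousAutofocus = false
        reduced.simulatedDepthEffect = false
        return reduced
    }
}
```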
[0017] In accordance with some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more sensors, including one or more cameras, the one or more programs including instructions for: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
[0018] In accordance with some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more sensors, including one or more cameras, the one or more programs including instructions for: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
[0019] In accordance with some embodiments, an electronic device is described. The electronic device comprises a display, one or more sensors, including one or more cameras, one or more processors, and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and in response to detecting the interference external to the electronic device: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
[0020] In accordance with some embodiments, an electronic device is described. The electronic device comprises a display; one or more sensors, including one or more cameras; means, while displaying, on the display, a user interface of a camera application, for detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and means, in response to detecting the interference external to the electronic device, for: in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
[0021] Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
[0022] Thus, devices are provided with faster, more efficient methods and interfaces for adjusting image effects, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace other methods for adjusting image effects.
DESCRIPTION OF THE FIGURES
[0023] For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0024] FIG. 1 A is a block diagram illustrating a portable multifunction device with a touch- sensitive display, in accordance with some embodiments.
[0025] FIG. 1B is a block diagram illustrating exemplary components for event handling, in accordance with some embodiments.
[0026] FIG. 2 illustrates a portable multifunction device having a touch screen, in accordance with some embodiments.
[0027] FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments.
[0028] FIG. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device, in accordance with some embodiments.
[0029] FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface that is separate from the display, in accordance with some embodiments.
[0030] FIG. 5 A illustrates a personal electronic device, in accordance with some
embodiments.
[0031] FIG. 5B is a block diagram illustrating a personal electronic device, in accordance with some embodiments.
[0032] FIGS. 6A-6T illustrate exemplary user interfaces for adjusting a simulated depth effect, in accordance with some embodiments.
[0033] FIGS. 7A-7B are a flow diagram illustrating a method for managing user interfaces for adjusting a simulated depth effect, in accordance with some embodiments.
[0034] FIGS. 8A-8R illustrate exemplary user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments.
[0035] FIGS. 9A-9B are a flow diagram illustrating a method for managing user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments.
[0036] FIGS. 10A-10F illustrate exemplary user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments.
[0037] FIG. 11 is a flow diagram illustrating a method for managing user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments.
DESCRIPTION OF EMBODIMENTS
[0038] The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. [0039] There is a need for electronic devices that provide efficient methods and interfaces for simulating depth effects. For example, there is a need for a device that can capture a live feed image/photo or display a stored image/photo and enable a user to quickly and easily make precise adjustments to depth-of-field properties of the image/photo. Such techniques can reduce the cognitive burden on a user who accesses displayed content associated with adjusting image effects, thereby enhancing productivity. Further, such techniques can reduce processor and battery power otherwise wasted on redundant user inputs.
[0040] Below, FIGS. 1A-1B, 2, 3, 4A-4B, and 5A-5B provide a description of exemplary devices for performing the techniques for managing event notifications. FIGS. 6A-6T illustrate exemplary user interfaces for adjusting a simulated depth effect, in accordance with some embodiments. FIGS. 7A-7B are a flow diagram illustrating a method for managing user interfaces for adjusting a simulated depth effect, in accordance with some embodiments. The user interfaces in FIGS. 6A-6T are used to illustrate the processes described below, including the processes in FIGS. 7A-7B. FIGS. 8A-8R illustrate exemplary user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments. FIG. 9A-9B are a flow diagram illustrating a method for managing user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments. The user interfaces in FIGS. 8A- 8R are used to illustrate the processes described below, including the processes in FIGS. 9A-9B. FIGS. 10A-10F illustrate exemplary user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments. FIG. 11 is a flow diagram illustrating a method for managing user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments. The user interfaces in
FIGS. 10A-10F are used to illustrate the processes described below, including the processes in FIG. 11.
[0041] Although the following description uses terms“first,”“second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch could be termed a second touch, and, similarly, a second touch could be termed a first touch, without departing from the scope of the various described embodiments. The first touch and the second touch are both touches, but they are not the same touch. [0042] The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms“a,”“an,” and“the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term“and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms“includes,”“including,” “comprises,” and/or“comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0043] The term“if’ is, optionally, construed to mean“when” or“upon” or“in response to determining” or“in response to detecting,” depending on the context. Similarly, the phrase“if it is determined” or“if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or“in response to determining” or“upon detecting [the stated condition or event]” or“in response to detecting [the stated condition or event],” depending on the context.
[0044] Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable
communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
[0045] In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
[0046] The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
[0047] The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
[0048] Attention is now directed toward embodiments of portable devices with touch- sensitive displays. FIG. 1 A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes called a“touch screen” for convenience and is sometimes known as or called a“touch-sensitive display system.” Device 100 includes memory 102 (which optionally includes one or more computer-readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
[0049] As used in the specification and claims, the term“intensity” of a contact on a touch- sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch- sensitive surface, or a physical/mechanical control such as a knob or a button). [0050] As used in the specification and claims, the term“tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user’s sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user’s hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device.
For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a“down click” or“up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as an“down click” or“up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user’s movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as“roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an“up click,” a“down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
[0051] It should be appreciated that device 100 is only one example of a portable
multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in FIG. 1 A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits. [0052] Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
[0053] Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data. In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
[0054] RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile
Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV- DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.1 la, IEEE 802.1 lb, IEEE 802. l lg, IEEE 802.1 ln, and/or IEEE 802.1 lac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant
Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
[0055] Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212, FIG. 2). The headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).
[0056] I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, depth camera controller 169, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g., 208, FIG. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206, FIG. 2).
[0057] A quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. Patent Application 11/322,549,“Unlocking a Device by Performing Gestures on an Unlock Image,” filed December 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons are, optionally, user-customizable. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
[0058] Touch-sensitive display 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch screen 112. Touch screen 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed“graphics”). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects.
[0059] Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112. In an exemplary
embodiment, a point of contact between touch screen 112 and the user corresponds to a finger of the user.
[0060] Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller
156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California.
[0061] A touch-sensitive display in some embodiments of touch screen 112 is, optionally, analogous to the multi-touch sensitive touchpads described in the following U.S. Patents:
6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, whereas touch-sensitive touchpads do not provide visual output.
[0062] A touch-sensitive display in some embodiments of touch screen 112 is described in the following applications: (1) U.S. Patent Application No. 11/381,313,“Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. Patent Application No. 10/840,862,“Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. Patent Application No. 10/903,964,“Gestures For Touch Sensitive Input Devices,” filed July 30, 2004; (4) U.S. Patent Application No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed January 31, 2005; (5) U.S. Patent
Application No. 11/038,590,“Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed January 18, 2005; (6) U.S. Patent Application No. 11/228,758,“Virtual Input Device Placement On A Touch Screen User Interface,” filed September 16, 2005; (7) U.S. Patent Application No. 11/228,700,“Operation Of A Computer With A Touch Screen Interface,” filed September 16, 2005; (8) U.S. Patent Application No. 11/228,737,“Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed September 16, 2005; and (9) U.S. Patent Application No. 11/367,749,“Multi-Functional Hand-Held Device,” filed March 3, 2006. All of these applications are incorporated by reference herein in their entirety.
[0063] Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
[0064] In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
[0065] Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
[0066] Device 100 optionally also includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106. Optical sensor 164 optionally includes charge-coupled device (CCD) or complementary metal-oxide
semiconductor (CMOS) phototransistors. Optical sensor 164 receives light from the
environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device so that the touch screen display is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user’s image is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display. In some embodiments, the position of optical sensor 164 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor 164 is used along with the touch screen display for both video conferencing and still and/or video image acquisition.

[0067] Device 100 optionally also includes one or more depth camera sensors 175. FIG. 1A shows a depth camera sensor coupled to depth camera controller 169 in I/O subsystem 106. Depth camera sensor 175 receives data from the environment to create a three dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor).
In some embodiments, in conjunction with imaging module 143 (also called a camera module), depth camera sensor 175 is optionally used to determine a depth map of different portions of an image captured by the imaging module 143. In some embodiments, a depth camera sensor is located on the front of device 100 so that the user’s image with depth information is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display and to capture selfies with depth map data. In some embodiments, the depth camera sensor 175 is located on the back of the device, or on both the back and the front of device 100. In some embodiments, the position of depth camera sensor 175 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a depth camera sensor 175 is used along with the touch screen display for both video conferencing and still and/or video image acquisition.
[0068] In some embodiments, a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor). In one embodiment of a depth map, each depth pixel defines the position in the viewpoint’s z-axis where its corresponding two-dimensional pixel is located. In some embodiments, a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255). For example, the “0” value represents pixels that are located at the most distant place in a “three dimensional” scene and the “255” value represents pixels that are located closest to a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor) in the “three dimensional” scene. In other embodiments, a depth map represents the distance between an object in a scene and the plane of the viewpoint. In some embodiments, the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face). In some embodiments, the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
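For illustration only, a minimal Swift sketch of building a depth map under the 0-255 convention described above (255 nearest to the viewpoint, 0 most distant); the DepthMap type, the makeDepthMap function, and the linear scaling are assumptions introduced for this example rather than part of the disclosure.

```swift
/// Hypothetical container for a depth map in which 255 represents the pixel
/// closest to the viewpoint and 0 the most distant, per the convention above.
struct DepthMap {
    let width: Int
    let height: Int
    var values: [UInt8]   // row-major, one value per two-dimensional pixel
}

/// Builds a depth map from raw per-pixel distances (e.g., in meters) between
/// the imaged subject and the viewpoint. Nearer pixels receive larger values.
func makeDepthMap(distances: [Double], width: Int, height: Int) -> DepthMap {
    precondition(distances.count == width * height, "one distance per pixel")
    let nearest = distances.min() ?? 0
    let farthest = distances.max() ?? 1
    let span = max(farthest - nearest, .ulpOfOne)
    let values = distances.map { distance -> UInt8 in
        // Map [nearest, farthest] onto [255, 0].
        let normalized = (distance - nearest) / span
        return UInt8((1.0 - normalized) * 255.0)
    }
    return DepthMap(width: width, height: height, values: values)
}
```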
[0069] Device 100 optionally also includes one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
[0070] Device 100 optionally also includes one or more proximity sensors 166. FIG. 1 A shows proximity sensor 166 coupled to peripherals interface 118. Alternately, proximity sensor 166 is, optionally, coupled to input controller 160 in I/O subsystem 106. Proximity sensor 166 optionally performs as described in U.S. Patent Application Nos. 11/241,839,“Proximity Detector In Handheld Device”; 11/240,788,“Proximity Detector In Handheld Device”;
11/620,702,“Using Ambient Light Sensor To Augment Proximity Sensor Output”; 11/586,862, “Automated Response To And Sensing Of User Activity In Portable Devices”; and 11/638,251, “Methods And Systems For Automatic Configuration Of Peripherals,” which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user’s ear (e.g., when the user is making a phone call).
[0071] Device 100 optionally also includes one or more tactile output generators 167.
FIG. 1A shows a tactile output generator coupled to haptic feedback controller 161 in I/O subsystem 106. Tactile output generator 167 optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some
embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
[0072] Device 100 optionally also includes one or more accelerometers 168. FIG. 1 A shows accelerometer 168 coupled to peripherals interface 118. Alternately, accelerometer 168 is, optionally, coupled to an input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in U.S. Patent Publication No. 20050190059,“Acceleration- based Theft Detection System for Portable Electronic Devices,” and U.S. Patent Publication No. 20060017692,“Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer,” both of which are incorporated by reference herein in their entirety. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
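For illustration only, a minimal Swift sketch of choosing a portrait or landscape presentation from accelerometer data, as described above; the axis sign conventions and the single-sample approach are assumptions, and a practical implementation would filter several samples and add hysteresis before rotating the interface.

```swift
/// Illustrative classification of device orientation from one accelerometer
/// reading; x and y are gravity components in the plane of the display.
/// Axis signs vary by sensor convention; this is not the disclosed analysis.
enum InterfaceOrientation { case portrait, portraitUpsideDown, landscapeLeft, landscapeRight }

func orientation(gravityX x: Double, gravityY y: Double) -> InterfaceOrientation {
    if abs(y) >= abs(x) {
        // Gravity mostly along the device's long axis: portrait family.
        return y <= 0 ? .portrait : .portraitUpsideDown
    } else {
        return x <= 0 ? .landscapeRight : .landscapeLeft
    }
}
```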
[0073] In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 (FIG. 1A) or 370 (FIG. 3) stores device/global internal state 157, as shown in FIGS. 1A and 3. Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display 112; sensor state, including information obtained from the device’s various sensors and input control devices 116; and location information concerning the device’s location and/or attitude.
[0074] Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS,
WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
[0075] Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
[0076] Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch- sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
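For illustration only, a minimal Swift sketch of how the speed and velocity of a point of contact could be derived from a series of contact data samples, as described above; the ContactSample type and the two-sample approach are assumptions introduced for this example.

```swift
import CoreGraphics
import Foundation

/// One sample of a tracked contact: where it was and when it was observed.
struct ContactSample {
    let position: CGPoint
    let timestamp: TimeInterval
}

/// Velocity (magnitude and direction) of the point of contact between its two
/// most recent samples; speed is the magnitude of that vector.
func velocity(from previous: ContactSample, to current: ContactSample) -> CGVector {
    let dt = CGFloat(max(current.timestamp - previous.timestamp, .ulpOfOne))
    return CGVector(dx: (current.position.x - previous.position.x) / dt,
                    dy: (current.position.y - previous.position.y) / dt)
}

func speed(of v: CGVector) -> CGFloat {
    (v.dx * v.dx + v.dy * v.dy).squareRoot()
}
```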
[0077] In some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has“clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, a mouse“click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter).
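For illustration only, a minimal Swift sketch of software-adjustable intensity thresholds of the kind described above; the names and default values are assumptions, not values used by any actual device.

```swift
/// Illustrative, software-defined intensity thresholds; adjusting them does
/// not require any change to the physical hardware.
struct IntensityThresholds {
    var click: Double = 0.3      // normalized contact intensity for a "click"
    var deepPress: Double = 0.7
}

/// True when a contact's reported intensity should be treated as a click.
func isClick(intensity: Double, thresholds: IntensityThresholds) -> Bool {
    intensity >= thresholds.click && intensity < thresholds.deepPress
}

/// A system-level "click intensity" setting can adjust every threshold at once.
func apply(clickIntensityScale scale: Double, to thresholds: inout IntensityThresholds) {
    thresholds.click *= scale
    thresholds.deepPress *= scale
}
```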
[0078] Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event.
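For illustration only, a minimal Swift sketch of detecting a tap or swipe from a contact pattern of finger-down, finger-dragging, and finger-up sub-events, as described above; the enum cases and the tapSlop tolerance are assumptions introduced for this example, and timing checks are omitted.

```swift
import CoreGraphics

/// Simplified sub-events mirroring the finger-down, finger-dragging, and
/// finger-up (liftoff) events discussed above.
enum TouchSubEvent {
    case fingerDown(CGPoint)
    case fingerDrag(CGPoint)
    case fingerUp(CGPoint)
}

enum RecognizedGesture { case tap, swipe, unrecognized }

/// A tap is finger-down followed by finger-up at substantially the same
/// position; a swipe is finger-down, one or more drags, then finger-up.
func classify(_ events: [TouchSubEvent], tapSlop: CGFloat = 10) -> RecognizedGesture {
    guard case let .fingerDown(start)? = events.first,
          case let .fingerUp(end)? = events.last else { return .unrecognized }
    let dragCount = events.dropFirst().dropLast().reduce(0) { count, event in
        if case .fingerDrag = event { return count + 1 } else { return count }
    }
    let dx = end.x - start.x, dy = end.y - start.y
    let distance = (dx * dx + dy * dy).squareRoot()
    if dragCount == 0 && distance <= tapSlop { return .tap }
    if dragCount > 0 { return .swipe }
    return .unrecognized
}
```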
[0079] Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.

[0080] In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
[0081] Haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 167 to produce tactile outputs at one or more locations on device 100 in response to user interactions with device 100.
[0082] Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
[0083] GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing; to camera 143 as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
[0084] Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
• Contacts module 137 (sometimes called an address book or contact list);
• Telephone module 138;
• Video conference module 139;
• E-mail client module 140;
• Instant messaging (IM) module 141;
• Workout support module 142;
• Camera module 143 for still and/or video images;
• Image management module 144;
• Video player module;
• Music player module;
• Browser module 147;
• Calendar module 148;
• Widget modules 149, which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
• Widget creator module 150 for making user-created widgets 149-6;
• Search module 151;
• Video and music player module 152, which merges video player module and music
player module;
• Notes module 153;
• Map module 154; and/or
• Online video module 155.
[0085] Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
[0086] In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is, optionally, used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name;
categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, e-mail 140, or IM 141; and so forth.
[0087] In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies.
[0088] In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephone module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
[0089] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
[0090] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
[0091] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals);
communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
[0092] In conjunction with touch screen 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify
characteristics of a still image or video, or delete a still image or video from memory 102.
[0093] In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
[0094] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module
147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
[0095] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
[0096] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149- 6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
[0097] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).
[0098] In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
[0099] In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
[0100] In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
[0101] In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions.
[0102] In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video.
Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562,“Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed June 20, 2007, and U.S. Patent Application No.
11/968,067, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed December 31, 2007, the contents of which are hereby incorporated by reference in their entirety.

[0103] Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. For example, video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152, FIG. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above.
[0104] In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
[0105] The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a“menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
[0106] FIG. 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory 102 (FIG. 1 A) or 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
[0107] Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. Event sorter
170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
[0108] In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
[0109] Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
[0110] In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
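For illustration only, a minimal Swift sketch of the significance test described above, under the assumption that intensity is normalized and that the threshold values shown are placeholders.

```swift
import Foundation

/// Illustrative test: an input is treated as a significant event only if it
/// exceeds a noise threshold or lasts longer than a predetermined duration.
struct SignificanceFilter {
    var noiseThreshold: Double = 0.05        // assumed normalized intensity
    var minimumDuration: TimeInterval = 0.03 // seconds

    func isSignificant(intensity: Double, duration: TimeInterval) -> Bool {
        intensity > noiseThreshold || duration > minimumDuration
    }
}
```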
[0111] In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
[0112] Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.

[0113] Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
[0114] Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
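For illustration only, a minimal Swift sketch of hit view determination as described above, assuming a simplified view hierarchy whose frames share one coordinate space; SketchView and hitView are names introduced for this example, not part of the disclosure.

```swift
import CoreGraphics

/// Minimal stand-in for a view hierarchy; frames are expressed in one shared
/// (window) coordinate space to keep the sketch short.
final class SketchView {
    let frame: CGRect
    let subviews: [SketchView]
    init(frame: CGRect, subviews: [SketchView] = []) {
        self.frame = frame
        self.subviews = subviews
    }
}

/// Returns the lowest view in the hierarchy that contains the initial touch
/// location, i.e., the hit view described above.
func hitView(in root: SketchView, at point: CGPoint) -> SketchView? {
    guard root.frame.contains(point) else { return nil }
    for subview in root.subviews.reversed() {   // front-most subviews first
        if let hit = hitView(in: subview, at: point) {
            return hit
        }
    }
    return root
}
```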
[0115] Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some
embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
[0116] Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver 182.
[0117] In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
[0118] In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application’s user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
[0119] A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
[0120] Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement.
Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
[0121] Event comparator 184 compares the event information to predefined event or sub- event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others.
In some embodiments, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end).
In some embodiments, the event also includes information for one or more associated event handlers 190.
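For illustration only, a minimal Swift sketch of event definitions expressed as predefined sub-event sequences and a comparator that matches observed sub-events against them, in the spirit of event comparator 184 and event definitions 186 above; the types and names shown are assumptions, and the phase and timing checks described above are omitted.

```swift
/// Simplified sub-events of the kind listed above.
enum SubEvent: Equatable { case touchBegin, touchEnd, touchMove, touchCancel }

/// An event definition is a named, predefined sequence of sub-events,
/// analogous to event 1 (double tap) and event 2 (drag) above.
struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

let doubleTap = EventDefinition(name: "double tap",
                                sequence: [.touchBegin, .touchEnd, .touchBegin, .touchEnd])
let drag = EventDefinition(name: "drag",
                           sequence: [.touchBegin, .touchMove, .touchEnd])

/// Reports which definitions the observed sub-events fully match and which
/// remain possible as further sub-events arrive.
func compare(observed: [SubEvent],
             against definitions: [EventDefinition]) -> (matched: [EventDefinition],
                                                          pending: [EventDefinition]) {
    let matched = definitions.filter { $0.sequence == observed }
    let pending = definitions.filter {
        $0.sequence.count > observed.count && Array($0.sequence.prefix(observed.count)) == observed
    }
    return (matched, pending)
}
```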
[0122] In some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.

[0123] In some embodiments, the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer’s event type.
[0124] When a respective event recognizer 180 determines that the series of sub-events do not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub- events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
[0125] In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
[0126] In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
[0127] In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
[0128] In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
[0129] In some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
[0130] It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye
movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
[0131] FIG. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI) 200. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.
[0132] Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100.
Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
[0133] In some embodiments, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
[0134] FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable. In some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child’s learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 167 described above with reference to FIG. 1 A), sensors 359 (e.g., optical, acceleration, proximity, touch- sensitive, and/or contact intensity sensors similar to contact intensity sensor(s) 165 described above with reference to FIG. 1 A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (FIG. 1 A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1 A) optionally does not store these modules.
[0135] Each of the above-identified elements in FIG. 3 is, optionally, stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above-identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.

[0136] Attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device 100.
[0137] FIG. 4A illustrates an exemplary user interface for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device 300. In some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:
• Signal strength indicator(s) 402 for wireless communication(s), such as cellular and Wi- Fi signals;
• Time 404;
• Bluetooth indicator 405;
• Battery status indicator 406;
• Tray 408 with icons for frequently used applications, such as:
o Icon 416 for telephone module 138, labeled “Phone,” which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
o Icon 418 for e-mail client module 140, labeled“Mail,” which optionally includes an indicator 410 of the number of unread e-mails;
o Icon 420 for browser module 147, labeled“Browser;” and
o Icon 422 for video and music player module 152, also referred to as iPod (trademark of Apple Inc.) module 152, labeled“iPod;” and
• Icons for other applications, such as:
o Icon 424 for IM module 141, labeled “Messages;”
o Icon 426 for calendar module 148, labeled“Calendar;”
o Icon 428 for image management module 144, labeled“Photos;”
o Icon 430 for camera module 143, labeled “Camera;”
o Icon 432 for online video module 155, labeled “Online Video;”
o Icon 434 for stocks widget 149-2, labeled“Stocks;”
o Icon 436 for map module 154, labeled“Maps;”
o Icon 438 for weather widget 149-1, labeled“Weather;”
o Icon 440 for alarm clock widget 149-4, labeled“Clock;”
o Icon 442 for workout support module 142, labeled “Workout Support;”
o Icon 444 for notes module 153, labeled “Notes;” and
o Icon 446 for a settings application or module, labeled“Settings,” which provides access to settings for device 100 and its various applications 136.
[0138] It should be noted that the icon labels illustrated in FIG. 4A are merely exemplary.
For example, icon 422 for video and music player module 152 is labeled“Music” or“Music Player.” Other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon.
[0139] FIG. 4B illustrates an exemplary user interface on a device (e.g., device 300, FIG. 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, FIG. 3) that is separate from the display 450 (e.g., touch screen display 112). Device 300 also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of device 300.
[0140] Although some of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in FIG. 4B) has a primary axis (e.g., 452 in FIG. 4B) that corresponds to a primary axis (e.g., 453 in FIG. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in FIG. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in FIG. 4B, 460 corresponds to 468 and 462 corresponds to 470). In this way, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in FIG. 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein.
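For illustration only, a minimal Swift sketch of mapping a contact on a separate touch-sensitive surface to the corresponding display location, assuming the primary axes of the two are aligned as described above; the function name and parameters are assumptions introduced for this example.

```swift
import CoreGraphics

/// Maps a contact detected on a separate touch-sensitive surface (e.g., 451)
/// to the corresponding location on the display (e.g., 450) by normalizing the
/// contact within the surface bounds and rescaling into the display bounds.
func displayLocation(forContactAt contact: CGPoint,
                     surfaceBounds: CGRect,
                     displayBounds: CGRect) -> CGPoint {
    let normalizedX = (contact.x - surfaceBounds.minX) / surfaceBounds.width
    let normalizedY = (contact.y - surfaceBounds.minY) / surfaceBounds.height
    return CGPoint(x: displayBounds.minX + normalizedX * displayBounds.width,
                   y: displayBounds.minY + normalizedY * displayBounds.height)
}
```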
[0141] Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
[0142] FIG. 5A illustrates exemplary personal electronic device 500. Device 500 includes body 502. In some embodiments, device 500 can include some or all of the features described with respect to devices 100 and 300 (e.g., FIGS. 1 A-4B). In some embodiments, device 500 has touch-sensitive display screen 504, hereafter touch screen 504. Alternatively, or in addition to touch screen 504, device 500 has a display and a touch-sensitive surface. As with devices 100 and 300, in some embodiments, touch screen 504 (or the touch-sensitive surface) optionally includes one or more intensity sensors for detecting intensity of contacts (e.g., touches) being applied. The one or more intensity sensors of touch screen 504 (or the touch-sensitive surface) can provide output data that represents the intensity of touches. The user interface of device 500 can respond to touches based on their intensity, meaning that touches of different intensities can invoke different user interface operations on device 500. [0143] Exemplary techniques for detecting and processing touch intensity are found, for example, in related applications: International Patent Application Serial No.
PCT/US2013/040061, titled“Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application,” filed May 8, 2013, published as WIPO Publication No. WO/2013/169849, and International Patent Application Serial No.
PCT/US2013/069483, titled“Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships,” filed November 11, 2013, published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in their entirety.
[0144] In some embodiments, device 500 has one or more input mechanisms 506 and 508. Input mechanisms 506 and 508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, can permit attachment of device 500 with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device 500 to be worn by a user.
[0145] FIG. 5B depicts exemplary personal electronic device 500. In some embodiments, device 500 can include some or all of the components described with respect to FIGS. 1A, 1B, and 3. Device 500 has bus 512 that operatively couples I/O section 514 with one or more computer processors 516 and memory 518. I/O section 514 can be connected to display 504, which can have touch-sensitive component 522 and, optionally, intensity sensor 524 (e.g., contact intensity sensor). In addition, I/O section 514 can be connected with communication unit 530 for receiving application and operating system data, using Wi-Fi, Bluetooth, near field communication (NFC), cellular, and/or other wireless communication techniques. Device 500 can include input mechanisms 506 and/or 508. Input mechanism 506 is, optionally, a rotatable input device or a depressible and rotatable input device, for example. Input mechanism 508 is, optionally, a button, in some examples.
[0146] Input mechanism 508 is, optionally, a microphone, in some examples. Personal electronic device 500 optionally includes various sensors, such as GPS sensor 532, accelerometer 534, directional sensor 540 (e.g., compass), gyroscope 536, motion sensor 538, and/or a combination thereof, all of which can be operatively connected to I/O section 514.
[0147] Memory 518 of personal electronic device 500 can include one or more non- transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516, for example, can cause the computer processors to perform the techniques described below, including processes 700, 900, and 1100 (FIGS. 7A-7B, 9A-9B, and 11). A computer-readable storage medium can be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like. Personal electronic device 500 is not limited to the components and configuration of FIG. 5B, but can include other or additional components in multiple configurations.
[0148] As used here, the term“affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500 (FIGS. 1 A, 3, and 5A-5B). For example, an image (e.g., icon), a button, and text (e.g., hyperlink) each optionally constitute an affordance.
[0149] As used herein, the term“focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a“focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in FIG. 3 or touch-sensitive surface 451 in FIG. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in FIG. 1 A or touch screen 112 in FIG. 4 A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a“focus selector” so that when an input (e.g., a press input by the contact) is detected on the touch screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user’s intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device).
[0150] As used in the specification and claims, the term“characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The
characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2,
5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user.
For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
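For illustration only, the following Swift sketch shows one way the reduction of intensity samples to a characteristic intensity, and the three-way comparison against a first and a second threshold, could be implemented. The type names, the choice of reductions, and the threshold handling are assumptions for this example and are not part of the disclosure.

```swift
import Foundation

// Possible reductions of a set of intensity samples to a characteristic intensity.
enum IntensityReduction {
    case maximum, mean, top10Percentile
}

func characteristicIntensity(of samples: [Double],
                             using reduction: IntensityReduction) -> Double {
    guard !samples.isEmpty else { return 0 }
    switch reduction {
    case .maximum:
        return samples.max() ?? 0
    case .mean:
        return samples.reduce(0, +) / Double(samples.count)
    case .top10Percentile:
        // Average of the top 10 percent of samples (at least one sample).
        let sorted = samples.sorted(by: >)
        let count = max(1, samples.count / 10)
        return sorted.prefix(count).reduce(0, +) / Double(count)
    }
}

enum PressOperation { case first, second, third }

// Three-way decision against a first and a second intensity threshold,
// mirroring the example above.
func operation(for intensity: Double,
               firstThreshold: Double,
               secondThreshold: Double) -> PressOperation {
    if intensity <= firstThreshold { return .first }
    if intensity <= secondThreshold { return .second }
    return .third
}
```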
[0151] In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location is, optionally, based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is, optionally, applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of
determining a characteristic intensity.
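As a hedged illustration of one of the smoothing options named above, the following Swift sketch applies an unweighted sliding-average filter to a series of swipe-contact intensities before a characteristic intensity is computed; the window size is an arbitrary choice for the example.

```swift
// Unweighted sliding-average smoothing of intensity samples. Narrow spikes or
// dips are attenuated because each output value averages its neighborhood.
func slidingAverageSmoothed(_ samples: [Double], window: Int = 5) -> [Double] {
    guard window > 1, samples.count > 1 else { return samples }
    return samples.indices.map { i in
        let lower = max(0, i - window / 2)
        let upper = min(samples.count - 1, i + window / 2)
        let neighborhood = samples[lower...upper]
        return neighborhood.reduce(0, +) / Double(neighborhood.count)
    }
}
```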
[0152] The intensity of a contact on the touch-sensitive surface is, optionally, characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold.
Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
[0153] An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a “light press” input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a “deep press” input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, the contact-detection intensity threshold is zero. In some embodiments, the contact-detection intensity threshold is greater than zero.
[0154] In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a“down stroke” of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an“up stroke” of the respective press input).
[0155] In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed“jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an“up stroke” of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
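The following Swift sketch is one possible reading of the hysteresis behavior described above: the down stroke is recognized when intensity rises to the press-input threshold, and the up stroke is recognized only after intensity falls to a lower hysteresis threshold, suppressing jitter. The 75% ratio is one of the proportions mentioned above; the type and method names are illustrative.

```swift
// Press detection with a hysteresis threshold derived from the press-input threshold.
struct PressInputDetector {
    let pressInputThreshold: Double
    let hysteresisRatio: Double = 0.75   // e.g., 75% of the press-input threshold
    private(set) var isPressed = false

    var hysteresisThreshold: Double { pressInputThreshold * hysteresisRatio }

    // Returns true when a complete press (down stroke followed by up stroke) is recognized.
    mutating func update(with intensity: Double) -> Bool {
        if !isPressed, intensity >= pressInputThreshold {
            isPressed = true              // down stroke
        } else if isPressed, intensity <= hysteresisThreshold {
            isPressed = false             // up stroke below the hysteresis threshold
            return true
        }
        return false
    }
}
```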
[0156] For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, and/or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold.
[0157] Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that are implemented on an electronic device, such as portable
multifunction device 100, device 300, or device 500.
[0158] FIGS. 6A-6T illustrate exemplary user interfaces for adjusting a simulated depth effect (e.g., a Bokeh effect), in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 7A- 7B.
[0159] FIG. 6A illustrates a front-view 600A and a rear-view 600B of an electronic device 600 (e.g., a smartphone). Electronic device 600 includes a display 602 (e.g., integrated with a touch-sensitive surface), an input device 604 (e.g., a mechanical input button, a pressable input button), a front-facing sensor 606 (e.g., including one or more front-facing cameras), and a rear-facing sensor 608 (e.g., including one or more rear-facing cameras). In some embodiments, electronic device 600 also includes one or more biometric sensors (e.g., a fingerprint sensor, a facial recognition sensor, an iris/retina scanner).
[0160] Electronic device 600 optionally also includes one or more depth camera sensors
(e.g., similar to one or more depth camera sensors 175 described with reference to FIG. 1 A).
The one or more depth camera sensors receive data from the environment to create a three- dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor). In some embodiments, in conjunction with an imaging module (e.g., similar to imaging module 143 described with reference to FIG. 1 A, and also called a camera module), the one or more depth camera sensors are optionally used to determine a depth map of different portions of an image captured by the imaging module. In some embodiments, one or more depth camera sensors are located on the front of device so that the user’s image with depth information is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display and to capture selfies with depth map data. In some embodiments, the one or more depth camera sensors are located on the back of device, or on the back and the front of the device. In some embodiments, the position(s) of the one or more depth camera sensors can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a depth camera sensor is used along with the touch screen display for both video conferencing and still and/or video image acquisition. In some embodiments, the one or more depth camera sensors are integrated with front-facing camera 606 and/or rear-facing camera 608.
[0161] In some embodiments, a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor). In one embodiment of a depth map, each depth pixel defines the position in the viewpoint’s z-axis where its corresponding two-dimensional pixel is located. In some embodiments, a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255). For example, the“0” value represents pixels that are located at the most distant place in a“three dimensional” scene and the“255” value represents pixels that are located closest to a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor) in the “three dimensional” scene. In other embodiments, a depth map represents the distance between an object in a scene and the plane of the viewpoint. In some embodiments, the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face). In some embodiments, the depth map includes information that enables the device to determine contours of the object of interest in a z direction.
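The following Swift sketch models a depth map as described above, with one value per pixel in the range 0–255 where 0 is farthest from the viewpoint and 255 is closest; the struct and helper names are illustrative only.

```swift
// Minimal row-major depth-map model: one 0...255 value per two-dimensional pixel.
struct DepthMap {
    let width: Int
    let height: Int
    var values: [UInt8]   // width * height entries

    // Raw depth value of the pixel at (x, y); 0 = farthest, 255 = closest.
    func depth(x: Int, y: Int) -> UInt8 {
        values[y * width + x]
    }

    // Depth normalized to 0.0 (farthest) ... 1.0 (closest to the viewpoint).
    func normalizedDepth(x: Int, y: Int) -> Double {
        Double(depth(x: x, y: y)) / 255.0
    }
}
```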
[0162] In FIG. 6A, electronic device 600 displays, on display 602, a user interface 610 (e.g., a lockscreen user interface) that includes an affordance 612 for launching an image capture application (e.g., a camera application, an image/photo capturing and editing application). While displaying user interface 610, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 601 of affordance 612 (e.g., a tap gesture on affordance 612). [0163] In FIG. 6B, in response to detecting activation 601, electronic device 600 displays, on display 602, a user interface 614 of the image capture application. In this example, the image capture application is in photo mode. While displaying user interface 614 of the image capture application, electronic device 600 receives, via rear-facing camera 608, image data
corresponding to the environment within the field-of-view of rear-facing camera 608. In some examples, if the image capture application is in front-facing mode as opposed to rear-facing mode, electronic device 600 receives, via front-facing camera 606, image data corresponding to the environment within the field-of-view of front-facing camera 606.
[0164] Electronic device 600 displays, in an image display region 616 of user interface 614 of the image capture application, an image representation 618 of the image data received via rear-facing camera 608. In this example, image representation 618 includes a subject 620 (e.g., a view of a person that includes the face of the person and at least a portion of the upper body of the person). In this example, image representation 618 also includes a light-emitting object 622A (corresponding to a real light-emitting object in the real environment), light-emitting objects 622B (corresponding to real light-emitting objects in the real environment), and light-emitting objects 622C (corresponding to real light-emitting objects in the real environment). In this example, image representation 618 also includes a non-light-emitting object 624
(corresponding to a real non-light-emitting object in the real environment).
[0165] User interface 614 of the image capture application also includes a first menu region 628 A and a second menu region 628B. First menu region 628 A includes a plurality of affordances associated with adjusting image effects and/or properties. Second menu region 628B includes a plurality of image capture mode options (e.g., photo mode, video mode, portrait mode, square mode, slow-motion mode). In FIG. 6B, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 603 of a portrait mode affordance 626 corresponding to portrait mode.
[0166] In FIG. 6C, in response to detecting activation 603 of portrait mode affordance 626, electronic device 600 changes the current image capture mode of the image capture application from photo mode to portrait mode. In portrait mode, electronic device 600 displays, in first menu region 628A of user interface 614, a depth effect affordance 630 (e.g., for adjusting a depth-of-field of image representation 618 by adjusting a simulated f-number, also known as the f-stop, f-ratio, or focal ratio).
[0167] Further, in portrait mode, electronic device 600 applies a simulated depth effect (e.g., a Bokeh effect, a depth-of-field effect, with a default 4.5 f-number) to image representation 618 displayed in image display region 616. In some embodiments, the simulated depth effect is applied to the background of image representation 618, with subject 620 as the focal point. In some embodiments, the simulated depth effect is applied throughout image representation 618 based on a focal point within subject 620 (e.g., the center region of the face of subject 620, such as the nose of subject 620).
[0168] As shown in FIG. 6C, with the simulated depth effect applied, depth-of-field properties of an object within image representation 618 are adjusted based on one or more characteristics of the particular object (e.g., the type of object, such as whether the object corresponds to a light-emitting object or to a non-light-emitting object, the shape of the object, the distance of the object from the focal point). For example, the depth-of-field properties of light-emitting objects 622A, 622B, and 622C in image representation 618 are adjusted more drastically relative to non-light-emitting object 624 in image representation 618 (e.g., such that the light-emitting objects look more blurred, larger, brighter, more saturated, and/or with a more distorted shape than non-light-emitting objects). Adjustments to the depth-of-field properties of an object based on one or more characteristics of the object are described in greater detail below with reference to the user interfaces of FIGS. 8A-8R.
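Purely as a sketch of the kind of per-object scaling just described, the Swift snippet below makes the adjustment stronger for lower simulated f-numbers, more drastic for light-emitting objects than for non-light-emitting ones, and larger with distance from the focal point. The specific formula and constants are assumptions, not values taken from the disclosure.

```swift
// An object in the scene, characterized by the properties the adjustment depends on.
struct SceneObject {
    let isLightEmitting: Bool
    let distanceFromFocalPoint: Double   // straight-line distance in the image plane
}

// Illustrative blur amount for one object at a given simulated f-number.
func simulatedBlurAmount(for object: SceneObject, fNumber: Double) -> Double {
    let apertureFactor = 1.0 / max(fNumber, 0.1)              // lower f-number -> more blur
    let typeFactor = object.isLightEmitting ? 2.0 : 1.0        // light emitters change more drastically
    let distanceFactor = 1.0 + object.distanceFromFocalPoint   // farther from focal point -> more distortion
    return apertureFactor * typeFactor * distanceFactor
}
```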
[0169] In FIG. 6D, while in portrait mode, electronic device 600 detects (e.g., via a touch- sensitive surface of display 602) an activation 605 of depth effect affordance 630 (e.g., a tap gesture on depth effect affordance 630). In some embodiments, electronic device 600 changes a visual characteristic of depth effect affordance (e.g., changes a color of the affordance) upon detecting activation of the affordance. Alternatively, in FIG. 6E, while in portrait mode, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 607 (e.g., a vertical swipe gesture, a swipe-up gesture) within image display region 616.
[0170] In FIG. 6F, in response to detecting activation 605 of depth effect affordance 630 or swipe gesture 607 on image display region 616, electronic device 600 shifts image display region 616 upwards within user interface 614 (such that first menu region 628A becomes vertically narrower and second menu region 628B becomes vertically wider) to display, in second menu region 628B, a depth adjustment slider 632.
[0171] Depth adjustment slider 632 includes a plurality of tickmarks 634 corresponding to f-numbers and a needle 636 indicating the currently-selected tickmark (and thus the currently-selected f-number). Depth adjustment slider 632 also includes an f-number indicator 638 (e.g., located over or adjacent to needle 636) indicating the value of the currently-selected f-number.
As previously mentioned, in some embodiments, the default f-number is 4.5. In some embodiments, in addition to displaying the current f-number in f-number indicator 638, electronic device 600 also displays the current f-number in depth effect affordance 630.
[0172] In FIG. 6G, while displaying depth adjustment slider 632, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 609 (e.g., a horizontal swipe gesture, a swipe-right gesture) on depth adjustment slider 632 (e.g., over tickmarks 634).
In some examples, tickmarks 634 are (horizontally) shifted in response to swipe gesture 609 and needle 636 remains affixed. In some examples, needle 636 is shifted over affixed tickmarks 634 in response to a swipe gesture on depth adjustment slider 632.
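A minimal Swift sketch of the slider model described above follows: the needle stays fixed while a horizontal swipe offsets the tickmarks and thereby changes the selected f-number (a swipe to the right selecting a lower f-number, as in FIGS. 6G-6H). The f-number range, tickmark spacing, and default index are assumptions for the example.

```swift
// Slider state: tickmarks slide under a fixed needle; each tickmark maps to an f-number.
struct DepthAdjustmentSliderModel {
    let fNumbers: [Double] = (14...160).map { Double($0) / 10.0 }   // 1.4 ... 16.0
    let pointsPerTickmark: Double = 12.0
    var selectedIndex: Int = 31                                     // 4.5, the default f-number

    var selectedFNumber: Double { fNumbers[selectedIndex] }

    // Applies a horizontal swipe translation (in points) to the tickmarks.
    mutating func apply(translationX: Double) {
        let tickmarkDelta = Int((translationX / pointsPerTickmark).rounded())
        // Moving the tickmarks right under the fixed needle selects a lower f-number.
        let newIndex = selectedIndex - tickmarkDelta
        selectedIndex = min(max(newIndex, 0), fNumbers.count - 1)
    }
}
```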
[0173] In FIG. 6H, in response to detecting swipe gesture 609, electronic device 600 adjusts, based on the focal point of image representation 618 (e.g., the nose of subject 620), the depth-of- field properties of the objects (e.g., light-emitting objects 622A, 622B, and 622C, and non-light- emitting object 624) within image representation 618.
[0174] As shown by f-number indicator 638 (and, in some embodiments, also by depth effect affordance 630), the current f-number (3.9) is decreased from the previous (default) f-number (4.5) as a result of swipe gesture 609. Light-emitting objects 622A, 622B, and 622C are more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6H (with a 3.9 f-number) than in FIG. 6G (with a 4.5 f-number) and, likewise, non-light-emitting object 624 is more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6H than in FIG. 6G. The degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion of the objects from the previous f-number (4.5) to the lower f-number (3.9) is more drastic for light-emitting objects as compared to non-light-emitting objects.
[0175] Additionally, the shape of each object is further distorted based on each object’s distance from the focal point (e.g., the nose of subject 620) of image representation 618 (e.g., if image representation 618 is viewed as an x, y-plane with the focal point being the center of the plane, the distance is measured as the straight line distance from the center of an object to the center of the plane). For example, the degree of shape distortion of object 622B-1 is more drastic (e.g., such that the object is less circular and more oval / stretched) than the degree of shape distortion of object 622B-2. Similarly, the degree of shape distortion of object 622C-1 is more drastic (e.g., such that the object is less circular and more oval / stretched) than the degree of shape distortion of object 622C-2. As mentioned, the changes in the depth-of-field properties of objects within the image representation are described in greater detail below with reference to FIGS. 8A-8R.
[0176] In FIG. 6H, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602), a swipe gesture 611 (e.g., a continuation of swipe gesture 609) on depth adjustment slider 632.
[0177] In FIG. 6I, in response to detecting swipe gesture 611, electronic device 600 further adjusts, based on the focal point of image representation 618 (e.g., the nose of subject 620), the depth-of-field properties of the objects (e.g., light-emitting objects 622A, 622B, and 622C, and non-light-emitting object 624) within image representation 618.
[0178] As shown by f-number indicator 638 (and, in some embodiments, also by depth effect affordance 630), the current f-number (1.6) is further decreased from the previous f-number (3.9) as a result of swipe gesture 611. Light-emitting objects 622A, 622B, and 622C are more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6I (with a 1.6 f-number) than in FIG. 6H (with a 3.9 f-number) and, likewise, non-light-emitting object 624 is more blurred, larger, brighter, more saturated, and/or with a more distorted shape in FIG. 6I than in FIG. 6H. The degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion of the objects from the previous f-number (3.9) to the lower f-number (1.6) is more drastic for light-emitting objects as compared to non-light-emitting objects.
[0179] In FIG. 6J, while displaying, in image display region 616, image representation 618 corresponding to image data detected via rear-facing camera 608, and while the simulated depth-of-field is set to a 1.6 f-number (as indicated by f-number indicator 638) as previously set in FIG. 6I, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 613 of image capture affordance 640 (e.g., a tap gesture on image capture affordance 640).
[0180] In response to detecting activation 613 of image capture affordance 640, electronic device 600 stores (e.g., in a local memory of the device and/or a remote server accessible by the device) image data corresponding to image representation 618 with the simulated depth effect (with a 1.6 f-number) applied.
[0181] In FIG. 6K, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 615 of a stored images affordance 642 (e.g., a tap gesture on stored images affordance 642).
[0182] In FIG. 6L, in response to detecting activation 615 of stored images affordance 642, electronic device 600 displays, on display 602, a user interface 644 of a stored images application. User interface 644 includes an image display region 646 for displaying a stored image. In FIG. 6L, electronic device 600 displays, in image display region 646, a stored image representation 648 corresponding to image representation 618 captured in FIG. 6J. As with image representation 618, stored image representation 648 includes a subject 650 (corresponding to subject 620), a light-emitting object 652A (corresponding to light-emitting object 622A), light-emitting objects 652B (corresponding to light-emitting objects 622B), light-emitting objects 652C (corresponding to light-emitting objects 622C), and non-light-emitting object 654 (corresponding to non-light-emitting object 624). Further, as with image representation 618 when captured (in FIG. 6J), stored image representation 648 is adjusted with a 1.6 f-number simulated depth-of-field setting. [0183] In FIG. 6L, while displaying stored image representation 648, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 617 of an edit affordance 656 of user interface 644 (e.g., a tap gesture on edit affordance 656).
[0184] In FIG. 6M, in response to detecting activation 617 of edit affordance 656, electronic device 600 displays (e.g., in a menu region of user interface 644 below image display region 646 showing the stored image representation) depth adjustment slider 632 (set to a 1.6 f-number, as indicated by f-number indicator 638). In some examples, image display region 646 shifts upwards within user interface 644 to display depth adjustment slider 632 (e.g., similar to image display region 616 shifting upwards, as described with reference to FIG. 6F). Electronic device 600 also displays (e.g., in a region of user interface 644 above image display region 646 showing the stored image representation), a depth effect indicator 658 indicating that the currently-displayed stored image representation (stored image representation 648) is adjusted with a simulated depth effect.
[0185] In FIG. 6N, while displaying depth adjustment slider 632, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602), a swipe gesture 619 (e.g., a horizontal swipe gesture, a swipe-left gesture) on depth adjustment slider 632 (e.g., over tickmarks 634). In some examples, tickmarks 634 are (horizontally) shifted in response to swipe gesture 619 and needle 636 remains affixed. In some examples, needle 636 is shifted over affixed tickmarks 634 in response to a swipe gesture on depth adjustment slider 632.
[0186] In FIG. 6O, in response to detecting swipe gesture 619, electronic device 600 adjusts, based on the focal point of stored image representation 648 (e.g., the nose of subject 650), the depth-of-field properties of the objects (e.g., light-emitting objects 652A, 652B, and 652C, and non-light-emitting object 654) within stored image representation 648.
[0187] As shown by f-number indicator 638, the current f-number (4.9) is increased from the previous (stored) f-number (1.6) as a result of swipe gesture 619. As such, light-emitting objects
652A, 652B, and 652C are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and more “sharp”) in FIG. 6O (with a 4.9 f-number) than in FIG. 6N (with a 1.6 f-number) and, likewise, non-light-emitting object 654 is less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and instead sharper) in FIG. 6O than in FIG. 6N. The degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion (and an increase in sharpness) of the objects from the previous f-number (1.6) to the higher f-number (4.9) is more drastic for light-emitting objects as compared to non-light-emitting objects. As mentioned, the changes in the depth-of-field properties of objects within the image representation are described in greater detail below with reference to FIGS. 8A-8R.
[0188] In FIG. 6O, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602), a swipe gesture 621 (e.g., a continuation of swipe gesture 619) on depth adjustment slider 632.
[0189] In FIG. 6P, in response to detecting swipe gesture 621, electronic device 600 further adjusts, based on the focal point of stored image representation 648 (e.g., the nose of subject 650), the depth-of-field properties of the objects (e.g., light-emitting objects 652A, 652B, and 652C, and non-light-emitting object 654) within stored image representation 648.
[0190] As shown by f-number indicator 638, the current f-number (8.7) is increased from the previous f-number (4.9) as a result of swipe gesture 621. As such, light-emitting objects 652A, 652B, and 652C are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and sharper, and thus closer to their real shapes without any image distortion) in FIG. 6P (with an 8.7 f-number) than in FIG. 6O (with a 4.9 f-number) and, likewise, non-light-emitting object 654 is less blurred, smaller, less bright, less saturated, and/or with a less distorted shape (and sharper, and thus closer to its real shape without any image distortion) in FIG. 6P than in FIG. 6O. The degree of change in the blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion (and an increase in sharpness) of the objects from the previous f-number (4.9) to the higher f-number (8.7) is more drastic for light-emitting objects as compared to non-light-emitting objects. As mentioned, the changes in the depth-of-field properties of objects within the image representation are described in greater detail below with reference to FIGS. 8A-8R.
[0191] FIG. 6Q illustrates electronic device 600 displaying, on display 602, a settings user interface 660 of the image capture application. In FIG. 6Q, while displaying settings user interface 660, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 623 of a preserve settings affordance 662 of settings user interface 660 (e.g., a tap gesture on preserve settings affordance 662).
[0192] In FIG. 6R, in response to detecting activation 623 of preserve settings affordance 662, electronic device 600 displays, on display 602, a preserve settings user interface 664 associated with the image capture application and the stored images application. Preserve settings user interface 664 includes a creative controls option 666 (e.g., with a corresponding toggle 668) for activating or de-activating creative controls. In some embodiments, when creative controls is active, electronic device 600 preserves previously-set image effects settings (e.g., including the simulated depth effect setting) when the image capture application and/or the stored images application are closed and re-launched (such that the previously-set image effects setting, such as the previously-set f-number, is automatically re-loaded and applied to the displayed image representation). In some embodiments, when creative controls is inactive, electronic device 600 does not preserve the previously-set image effects settings, and image effects settings (including the depth effect setting) are restored to default values when the image capture application and/or stored images application are re-launched.
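As a hedged sketch of the preserve-settings behavior described above (and not an actual implementation from the disclosure), the following Swift snippet persists the last-used simulated f-number and restores it on relaunch only while the creative controls option is active; the storage keys are illustrative and the 4.5 default mirrors the default f-number mentioned earlier.

```swift
import Foundation

// Illustrative persistence of the simulated depth effect setting across launches.
struct DepthEffectPreservation {
    static let defaultFNumber = 4.5
    private let defaults = UserDefaults.standard

    // Mirrors the creative controls toggle (e.g., toggle 668).
    var creativeControlsActive: Bool {
        get { defaults.bool(forKey: "creativeControlsActive") }
        set { defaults.set(newValue, forKey: "creativeControlsActive") }
    }

    func saveLastUsed(fNumber: Double) {
        defaults.set(fNumber, forKey: "lastUsedFNumber")
    }

    // On relaunch: restore the previously-set f-number only when creative controls is active.
    func fNumberOnLaunch() -> Double {
        guard creativeControlsActive,
              defaults.object(forKey: "lastUsedFNumber") != nil else {
            return Self.defaultFNumber
        }
        return defaults.double(forKey: "lastUsedFNumber")
    }
}
```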
[0193] FIG. 6S illustrates an electronic device 670 (e.g., a laptop computer) with a display 672 and a front-facing camera 674. In some embodiments, electronic device 670 also includes a rear-facing camera.
[0194] In FIG. 6S, electronic device 670 displays, on display 672, a user interface 676 of an image application (e.g., corresponding to the image capture application or the stored images application), where an image representation 678 corresponding to image representation 618 is displayed in user interface 676. Electronic device 670 also displays, within user interface 676 (e.g., below image representation 678), a depth adjustment slider 680 similar to depth adjustment slider 632. Depth adjustment slider 680 includes a plurality of tickmarks 682 corresponding to f-numbers and a needle 684 indicating the currently-selected tickmark (and thus the currently-selected f-number). Depth adjustment slider 680 also includes an f-number indicator 686 (e.g., located adjacent to the slider) indicating the value of the currently-selected f-number. In some examples, a cursor 688 can be used to navigate needle 684 over tickmarks 682, thereby changing the f-number to adjust the simulated depth effect of image representation 678. [0195] FIG. 6T illustrates an electronic device 690 (e.g., a tablet computer, a laptop computer with a touch-sensitive display) with a display 692. In some embodiments, electronic device 690 also includes a front-facing camera and/or a rear-facing camera.
[0196] In FIG. 6T, electronic device 690 displays, on display 692, a user interface 694 of an image application (e.g., corresponding to the image capture application or the stored images application), where an image representation 696 corresponding to image representation 618 is displayed in user interface 694. Electronic device 690 also displays, within user interface 694 (e.g., adjacent to image representation 696), a depth adjustment slider 698 (e.g., in a vertical direction) similar to depth adjustment slider 632. Depth adjustment slider 698 includes a plurality of tickmarks 699 corresponding to f-numbers and a needle 697 indicating the currently-selected tickmark (and thus the currently-selected f-number). Depth adjustment slider 698 also includes an f-number indicator 695 (e.g., located below or adjacent to the slider) indicating the value of the currently-selected f-number.
[0197] In some examples, depth adjustment slider 698 can be adjusted via vertical swipe gestures such that tickmarks 699 are moved relative to an affixed needle 697. In some examples, depth adjustment slider 698 can be adjusted via vertical swipe gestures such that needle 697 is moved relative to affixed tickmarks 699.
[0198] In some examples, electronic device 690 also displays (e.g., in a region of user interface 694 adjacent to image representation 696, in a region of user interface 694 adjacent to image representation 696 and opposite from depth adjustment slider 698), a plurality of lighting settings 693 corresponding to various lighting / light filtering options that can be applied to image representation 696, and can be changed via vertical swipe gestures. In some examples, depth adjustment slider 698 and lighting settings 693 can concurrently be adjusted and the concurrent adjustments can simultaneously be reflected in image representation 696.
[0199] FIGS. 7A-7B are a flow diagram illustrating a method for managing user interfaces for adjusting a simulated depth effect, in accordance with some embodiments. Method 700 is performed at a device (e.g., 100, 300, 500, 600) with a display and one or more input devices (e.g., a touch-sensitive surface of the display, a mechanical input device). Some operations in method 700 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0200] As described below, method 700 provides an intuitive way for managing user interfaces for simulated depth effects. The method reduces the cognitive burden on a user for managing and navigating user interfaces for simulated depth effects, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to navigate user interfaces faster and more efficiently by providing easy management of user interfaces for simulating depth effects conserves power and increases the time between battery charges.
[0201] The electronic device (e.g., 600) displays (702), on the display (e.g., 602), a representation of image data (e.g., 618, a displayed image corresponding to the image data, a portrait image of a person/subject).
[0202] In some embodiments, the representation of image data (e.g., 618) is a live-feed image currently being captured by one or more cameras of the electronic device (e.g., 600). In some embodiments, the representation of image data (e.g., 648) is a previously-taken image stored in and retrieved from memory (of the electronic device or an external server). In some embodiments, the depth data of the image can be adjusted / manipulated to apply a depth effect to the representation of image data.
[0203] In some embodiments, the image data includes at least two components: an RGB component that encodes the visual characteristics of a captured image, and depth data that encodes information about the relative spacing relationship of elements within the captured image (e.g., the depth data encodes that a user is in the foreground, and background elements, such as a tree positioned behind the user, are in the background).
[0204] In some embodiments, the depth data is a depth map. In some embodiments, a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of a depth map, each depth pixel defines the position in the viewpoint’s z-axis where its corresponding two- dimensional pixel is located. In some examples, a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255). For example, the“0” value represents pixels that are located at the most distant place in a“three dimensional” scene and the“255” value represents pixels that are located closest to a viewpoint (e.g., camera) in the“three dimensional” scene. In other examples, a depth map represents the distance between an object in a scene and the plane of the viewpoint. In some embodiments, the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user’s face). In some embodiments, the depth map includes information that enables the device to determine contours of the object of interest in a z direction. In some embodiments, the depth data has a second depth component (e.g., a second portion of depth data that encodes a spatial position of the background in the camera display region; a plurality of depth pixels that form a discrete portion of the depth map, such as a background), separate from the first depth component, the second depth aspect including the representation of the background in the camera display region. In some embodiments, the first depth aspect and second depth aspect are used to determine a spatial relationship between the subject in the camera display region and the background in the camera display region. This spatial relationship can be used to distinguish the subject from the background. This distinction can be exploited to, for example, apply different visual effects (e.g., visual effects having a depth component) to the subject and background. In some embodiments, all areas of the image data that do not correspond to the first depth component (e.g., areas of the image data that are out of range of the depth camera) are adjusted based on different degrees of blurriness/sharpness, size, brightness, saturation, and/or shape-distortion in order to simulate a depth effect, such as a Bokeh effect.
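The following Swift sketch illustrates, under stated assumptions, how the subject/background distinction described above could drive a simulated depth effect: pixels whose depth value is at or above a cutoff are treated as the subject and left unchanged, while the remaining pixels receive a blur weight that grows with distance. The cutoff value and the weighting formula are assumptions, not details from the disclosure.

```swift
// Per-pixel blur weights: 0 for subject pixels, increasing toward 1 for distant background.
func backgroundBlurWeights(depthValues: [UInt8], subjectCutoff: UInt8 = 128) -> [Double] {
    precondition(subjectCutoff > 0, "cutoff must be positive")
    return depthValues.map { depth in
        depth >= subjectCutoff
            ? 0.0                                                // subject: kept sharp
            : 1.0 - Double(depth) / Double(subjectCutoff)        // farther away: more blur
    }
}
```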
[0205] In some embodiments, displaying, on the display, the representation of image data further comprises, in accordance with a determination that the representation of image data corresponds to stored image data (e.g., that of a stored/saved image or a previously-captured image), displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect. In some embodiments, the representation of image data (e.g., 648) corresponds to stored image data when a
camera/image application for displaying representations of image data is in an edit mode (e.g., a mode for editing existing / previously-captured images or photos). In some embodiments, if the representation of image data corresponds to stored image data with a prior simulated depth effect, the electronic device (e.g., 600) automatically displays the adjustable slider upon (e.g., concurrently with) displaying the representation of image data (e.g., within a camera/image application). Thus, in some embodiments, the adjustable slider (e.g., 632) is displayed with the representation of image data without the first input. In some embodiments, whether the adjustable slider is automatically displayed upon displaying the representation of image data (if the image data is already associated with a prior simulated depth effect) depends on the type of the electronic device (e.g., whether the electronic device is a smartphone, a smartwatch, a laptop computer, or a desktop computer).
[0206] While displaying the representation of image data (e.g., 618, 648) with a simulated depth effect (e.g., a depth effect, such as a Bokeh effect, that is applied to the representation based on a manipulation of the underlying data to artificially generate the effect) as modified by a first value of a plurality of selectable values for the simulated depth effect, the electronic device (e.g., 600) detects (706), via the one or more input devices, a first input (e.g., 605, 607, an activation of an affordance displayed on the display, a gesture, such as a slide-up gesture on the image, detected via the touch-sensitive surface of the display).
[0207] In some embodiments, while displaying, on the display (e.g., 602), the representation of image data (e.g., 618, 648), the electronic device (e.g., 600) displays (704), on the display (e.g., in an affordances region (e.g., 628A) corresponding to different types of effects that can be applied to the representation of image data), a simulated depth effect adjustment affordance (e.g., 630), wherein the first input is an activation (e.g., 605, a tap gesture) of the simulated depth effect adjustment affordance. In some embodiments, the simulated depth effect adjustment affordance includes a symbol indicating that the affordance relates to depth effects, such as an f-number symbol. Displaying the simulated depth effect adjustment affordance while displaying the representation of image data and including a symbol indicating that the affordance relates to depth effects improves visual feedback by enabling a user to quickly and easily recognize that adjustments to depth-of-field properties can be made to the representation of image data.
Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0208] In some embodiments, the simulated depth effect is“simulated” in that the effect is (artificially) generated based on a manipulation of the underlying image data to create and apply the effect to the corresponding representation of image data (e.g., 618, 648) (e.g., as opposed to being a“natural” effect that is based on underlying data as originally captured via one or more cameras).
[0209] In some embodiments, prior to detecting the first input (e.g., 605, 607), the simulated depth effect adjustment affordance (e.g., 630) is displayed with a first visual characteristic (e.g., a particular color indicating that the affordance is not currently selected, such as a default color or a white color). In some embodiments, after detecting the first input, the simulated depth effect adjustment affordance is displayed with a second visual characteristic (e.g., a particular color indicating that the affordance is currently selected, such as a highlight color or a yellow color) different from the first visual characteristic. Changing a visual characteristic of the simulated depth effect adjustment affordance improves visual feedback by enabling the user to quickly and easily recognize that the simulated depth effect feature is active. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0210] In some embodiments, displaying the simulated depth effect adjustment affordance (e.g., 630) comprises, in accordance with a determination that the currently-selected depth effect value corresponds to a default depth effect value (e.g., a default f-number value determined/set by the electronic device), forgoing displaying, in the simulated depth effect adjustment affordance, the currently-selected depth effect value. In some embodiments, the default depth effect value is a 4.5 f-number. In some embodiments, displaying the simulated depth effect adjustment affordance comprises, in accordance with a determination that the currently-selected depth effect value corresponds to a non-default depth effect value (e.g., any f-number value within a range of available f-number values that does not correspond to the default f-number value), displaying, in the simulated depth effect adjustment affordance (e.g., adjacent to an f-number symbol), the currently-selected depth effect value.
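Stated purely as an illustrative sketch of the conditional labeling described above, the Swift snippet below shows the affordance's f-number value only when it differs from the default; the formatting and the exact-equality comparison are simplifications for the example.

```swift
import Foundation

// Label for the simulated depth effect adjustment affordance: the value is
// shown only when it is not the default.
func depthEffectAffordanceLabel(selectedFNumber: Double,
                                defaultFNumber: Double = 4.5) -> String {
    if selectedFNumber == defaultFNumber {
        return "f"                                      // symbol only
    }
    return String(format: "f %.1f", selectedFNumber)    // symbol plus current value
}
```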
[0211] In some embodiments, prior to detecting the first input (e.g., 605, 607), the electronic device (e.g., 600) displays, on the display (e.g., 602), one or more mode selector affordances (e.g., a region with one or more affordances for changing a camera-related operation mode of the electronic device, such as a camera mode selector affordance), wherein displaying the adjustable slider (e.g., 632) comprises replacing display of the one or more mode selector affordances with the adjustable slider. Replacing display of the one or more mode selector affordances with the adjustable slider improves visual feedback by enabling the user to quickly and easily recognize that the device is now in a depth effect adjustment mode. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when
operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0212] In some embodiments, prior to detecting the first input, the electronic device (e.g., 600) displays, on the display (e.g., 602), a zoom control element (e.g., a region with one or more affordances for changing a zoom level of the camera), wherein displaying the adjustable slider (e.g., 632) comprises replacing display of the zoom control element with the adjustable slider.
[0213] In some embodiments, the first input (e.g., 607) is a swipe gesture in a first direction in a first portion of the user interface (e.g., 614, a swipe-up gesture on the touch-sensitive surface of the display). In some embodiments, the swipe gesture is a swipe-up gesture on a region of the display corresponding to the representation of image data. In some embodiments, the swipe gesture is a swipe-up gesture on a region of the display corresponding to a bottom edge of the representation of image data (e.g., 618). In some embodiments, if the swipe is in a second direction, the adjustable slider is not displayed and, optionally, a different operation is performed (e.g., switching camera modes or performing a zoom operation). In some embodiments, if the swipe is in a second portion of the user interface, the adjustable slider is not displayed and, optionally, a different operation is performed. Providing additional control options (without cluttering the user interface with additional displayed controls) enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
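By way of a non-limiting illustration, the following Swift sketch (hypothetical names) shows one way the swipe dispatch described above could be modeled, where only a swipe-up in the image region reveals the adjustable slider and other swipes fall through to other operations.

```swift
import UIKit

// A sketch of the gesture handling described above (hypothetical names): a swipe-up
// in the image region reveals the depth adjustment slider, while swipes in other
// directions or other regions fall through to other operations, such as switching
// camera modes.
enum CameraSwipeAction {
    case showDepthSlider
    case switchCameraMode
    case none

    static func action(for direction: UISwipeGestureRecognizer.Direction,
                       inImageRegion: Bool) -> CameraSwipeAction {
        guard inImageRegion else { return .none }
        if direction == .up { return .showDepthSlider }
        if direction == .left || direction == .right { return .switchCameraMode }
        return .none
    }
}
```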
[0214] In response to detecting the first input (e.g., 605, 607), the electronic device (e.g.,
600) displays (708), on the display (e.g., 602) (e.g., below the representation of image data, adjacent to the representation of image data), an adjustable slider (e.g., 632) (e.g., a horizontal or vertical slider comprising a plurality of tick marks and a needle) associated with manipulating the representation of image data (e.g., manipulating a depth effect of the representation of image data, a depth-of-field effect of the representation of image data). The adjustable slider includes (710) a plurality of option indicators (e.g., 634, represented as tick marks, gauge marks) corresponding to a plurality of the selectable values for the simulated depth effect (e.g.,
(simulated) depth-of-field, f-number / f-stop). In some embodiments, the plurality of option indicators are slidable (e.g., horizontally or vertically) within the adjustable slider. The adjustable slider also includes (712) a selection indicator (e.g., 636, represented as a needle) indicating that the first value is a currently-selected simulated depth effect value.
[0215] In some embodiments, the position of the selection indicator (e.g., 636, needle) is fixed and the plurality of option indicators (e.g., 634, tickmarks) are adjustable within the slider (e.g., 632) such that the plurality of option indicators are moved relative to the selection indicator to adjust the currently-selected depth-of-field value. In some embodiments, only a subset of all of the available option indicators are concurrently displayed within the slider— option indicators that are not displayed are displayed within the slider in response to an adjustment of the slider (e.g., a user input moving the option indicators in a horizontal or vertical direction).
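By way of a non-limiting illustration, the following Swift sketch (hypothetical names; the tick spacing and the set of selectable f-numbers are assumptions) models a slider in which the option indicators slide relative to a fixed selection indicator, with the tick mark nearest the needle determining the currently-selected value.

```swift
import CoreGraphics

// A simplified model (hypothetical names and spacing) of the adjustable slider
// described above: the selection indicator (needle) stays fixed while the option
// indicators (tick marks) slide underneath it, and the tick mark nearest the needle
// determines the currently-selected f-number.
struct DepthAdjustmentSliderModel {
    let selectableFNumbers: [Double] = [1.4, 1.6, 2.0, 2.8, 4.5, 5.6, 7.6, 8.7, 14.0]
    let tickSpacing: CGFloat = 24.0   // points between adjacent tick marks
    var contentOffset: CGFloat = 0    // how far the tick marks have been slid

    var selectedIndex: Int {
        let raw = Int((contentOffset / tickSpacing).rounded())
        return min(max(raw, 0), selectableFNumbers.count - 1)
    }

    var selectedFNumber: Double { selectableFNumbers[selectedIndex] }

    // Applies a horizontal pan translation from a swipe on the slider; the offset is
    // clamped so the needle cannot move past the first or last option indicator.
    mutating func drag(by translation: CGFloat) {
        let maxOffset = CGFloat(selectableFNumbers.count - 1) * tickSpacing
        contentOffset = min(max(contentOffset - translation, 0), maxOffset)
    }
}
```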
[0216] In some embodiments, the plurality of option indicators (e.g., 634) are fixed and the position of the selection indicator (e.g., 636) is adjustable within the slider such that the selection indicator is moved relative to the plurality of option indicators to adjust the currently-selected depth-of-field value. [0217] In some embodiments, in response to detecting the first input (e.g., 605, 607), the electronic device (e.g., 600) slides (714) (e.g., vertically, sliding up by a predetermined amount) the representation of image data (e.g., 618) on the display (e.g., 602) to display (e.g., reveal) the adjustable slider (e.g., 632) (e.g., sliding the representation of the image data in a direction corresponding to a direction of a swipe input).
[0218] While displaying the adjustable slider (e.g., 632), the electronic device (e.g., 600) detects (716), via the one or more input devices, an input directed to the adjustable slider.
[0219] In some embodiments, the input (e.g., 609, 611, 619, 621) directed to the adjustable slider (e.g., 632) is a (horizontal) swipe gesture (e.g., a swipe-left gesture or a swipe-right gesture) on the adjustable slider, wherein the swipe gesture includes a user movement (e.g., using a finger) in a first direction having at least a first velocity (greater than a threshold velocity) at an end of the swipe gesture (e.g., a velocity of movement of a contact performing the swipe gesture at or near when the contact is lifted-off from the touch-sensitive surface).
[0220] In response to detecting (718) the input (e.g., 609, 611, 619, 621) directed to the adjustable slider (e.g., 632) (e.g., a tap or swipe at a location corresponding to the adjustable slider), the electronic device (e.g., 600) moves (720) the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently- selected simulated depth effect value.
[0221] In response to detecting (718) the input directed to the adjustable slider (e.g., a tap or swipe at a location corresponding to the adjustable slider), the electronic device (e.g., 600) changes (722) an appearance of the representation of image data (e.g., 618, 648) in accordance with the simulated depth effect as modified by the second value. Changing an appearance of the representation of image data in response to detecting the input directed to the adjustable slider improves visual feedback by enabling the user to quickly and easily view changes to the representation of image data that are caused by the user's input. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0222] In some embodiments, moving the adjustable slider (e.g., 632) comprises moving the plurality of option indicators (e.g., 634, represented as tick marks) while the selection indicator (e.g., 636, represented as a needle) remains fixed. Thus, in some embodiments, moving the adjustable slider comprises sliding the plurality of tick marks corresponding to f-values while the needle stays fixed in the same location within the slider. In some embodiments, moving the adjustable slider comprises moving the selection indicator (e.g., represented as a needle) while the plurality of option indicators remain fixed (e.g., represented as tick marks). Thus, in some embodiments, moving the adjustable slider comprises sliding the needle back and forth over the plurality of tick marks corresponding to f-values while the tick marks stay fixed in the same location within the slider.
[0223] In some embodiments, while moving the adjustable slider (e.g., 632) (e.g., by moving the plurality of option indicators relative to a fixed selection indicator, or by moving the selection indicator relative to fixed option indicators), the electronic device (e.g., 600) generates (724)
(e.g., via one or more tactile output generators and/or one or more speakers of the electronic device) a first type of output (e.g., tactile output, audio output) in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider. In some embodiments, the electronic device generates a discrete output (e.g., a discrete tactile output, a discrete audio output) each time the selection indicator aligns with or passes an option indicator of the plurality of option indicators. Generating a first type of output (e.g., tactile output, audio output) in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider improves feedback by providing a coordinated response to the user’s input. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when
operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. [0224] In some embodiments, while moving the adjustable slider (e.g., 632), in accordance with a determination that the representation of image data (e.g., 618, 648) corresponds to stored image data (e.g., that of a stored/saved image or a previously-captured image), the first type of output includes (726) audio output (e.g., generated via one or more speakers of the electronic device and/or generated via one or more tactile output generators of the electronic device). In some embodiments, while moving the adjustable slider, in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include (728) audio output (e.g., generated via one or more speakers of the electronic device and/or generated via one or more tactile output generators of the electronic device). In some embodiments, the representation of image data corresponds to stored image data when a camera/image application for displaying representations of image data is in an edit mode (e.g., a mode for editing existing / previously- captured images or photos).
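By way of a non-limiting illustration, the following Swift sketch (the system sound identifier is an assumption) shows one way the per-tick feedback described above could be generated, with audio output included only when the representation corresponds to stored image data rather than a live preview.

```swift
import UIKit
import AudioToolbox

// A sketch of the per-tick feedback described above (the system sound ID is an
// assumption): a discrete selection haptic is generated each time the selection
// indicator aligns with an option indicator, and an audio tick is added only when
// the representation corresponds to stored image data rather than a live preview.
final class SliderFeedbackController {
    private let haptics = UISelectionFeedbackGenerator()
    private var lastIndex: Int?

    func sliderDidMove(toIndex index: Int, isStoredImage: Bool) {
        guard index != lastIndex else { return }   // fire only when crossing a tick
        lastIndex = index
        haptics.selectionChanged()                 // discrete tactile output
        if isStoredImage {
            AudioServicesPlaySystemSound(1104)     // assumed "tick" system sound ID
        }
    }
}
```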
[0225] Note that details of the processes described above with respect to method 700 (e.g., FIGS. 7A-7B) are also applicable in an analogous manner to the methods described below. For example, method 900 optionally includes one or more of the characteristics of the various methods described above with reference to method 700. For example, the simulated depth effect applied to an image representation, as described in method 900, can be adjusted using the depth adjustment slider described in method 700. For another example, method 1100 optionally includes one or more of the characteristics of the various methods described above with reference to method 700. For example, the notification concerning detected interference, as described in method 1100, can be associated with detected magnetic interference that can interfere with one or more depth sensors used for simulating depth effects. For brevity, these details are not repeated below.
[0226] FIGS. 8A-8R illustrate exemplary user interfaces for displaying adjustments to a simulated depth effect (e.g., a Bokeh effect), in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 9A-9B. [0227] FIG. 8A illustrates electronic device 600 as described above with reference to FIGS. 6A-6T. In FIG. 8A, electronic device 600 displays, on display 602, a user interface 804 of the image capture application, where the image capture application is in portrait mode. While in portrait mode, user interface 804 displays (e.g., above or adjacent to an image display region 806) a depth effect affordance 810 (e.g., corresponding to depth effect affordance 630).
[0228] Electronic device 600 also displays, in image display region 806, an image representation 808 of image data captured via rear-facing camera 608. In this example, image representation 808 does not include a subject (e.g., a person), as a subject is not within the field-of-view of rear-facing camera 608.
[0229] In portrait mode, electronic device 600 displays, in image representation 808, subject markers 812 indicating that a subject needs to be placed within the general region of image representation 808 occupied by the markers to properly enable portrait mode. Because a subject is not currently detected, electronic device 600 displays (e.g., in a top portion of image display region 806) a message 814 requesting that a subject be placed in the environment corresponding to the region of image representation 808 occupied by subject markers 812.
[0230] In FIG. 8B, a real subject in the real environment is detected within the field-of-view of rear-facing camera 608. Upon detecting the real subject, electronic device 600 displays, in image representation 808, a subject 816 corresponding to the real subject detected within the field-of-view of rear-facing camera 608.
[0231] In FIG. 8C, in accordance with a determination that subject 816 is within the general region of image representation 808 indicated by subject markers 812, electronic device 600 provides, via subject markers 812 (e.g., by the markers "locking on" to the subject, by the markers changing a visual characteristic, such as changing to a different color), an indication that the subject is within the general region of image representation 808 occupied by subject markers 812 to properly enable portrait mode.
[0232] In some embodiments, if a subject is detected but is too far away from electronic device 600 (e.g., more than a predefined distance away from the device, such as more than 10 feet away from the device) to fully enable portrait mode, electronic device 600 displays a notification indicating that the subject be placed closer to the device. In some embodiments, if a subject is detected but is too close to electronic device 600 (e.g., less than a predefined distance away from the device, such as closer than 1 foot from the device) to fully enable portrait mode, electronic device 600 displays a notification indicating that the subject be placed farther away from the device.
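By way of a non-limiting illustration, the following Swift sketch uses the example 1-foot and 10-foot thresholds mentioned above (the names are hypothetical) to select which proximity notification, if any, to display.

```swift
import Foundation

// A minimal sketch of the proximity guidance described above, using the assumed
// 1-foot and 10-foot thresholds: when a detected subject falls outside the supported
// range, the device requests that the subject be moved closer or farther away.
enum SubjectPlacementHint {
    case moveSubjectCloser
    case moveSubjectFarther
    case subjectInRange

    static func hint(forDistanceInFeet distance: Double,
                     minFeet: Double = 1.0,
                     maxFeet: Double = 10.0) -> SubjectPlacementHint {
        if distance > maxFeet { return .moveSubjectCloser }
        if distance < minFeet { return .moveSubjectFarther }
        return .subjectInRange
    }
}
```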
[0233] Upon detecting subject 816 within the general region of image representation 808 indicated by subject markers 812, electronic device 600 activates portrait mode. Upon activation of portrait mode, electronic device 600 adjusts image representation 808 by applying, based on a focal point within image representation 808 (e.g., the nose of subject 816), a simulated depth effect (e.g., a Bokeh effect, the simulated depth effect described above with respect to image representation 618) to objects within image representation 808 with the default f-number (e.g., 4.5). In this example, image representation 808 includes light-emitting objects 818A, 818B, 818C, and 818D and non-light-emitting objects 820A and 820B. In some embodiments, the simulated depth effect is also applied to portions of subject 816 that do not correspond to the focal point (e.g., portions of subject 816 other than the nose of the subject).
[0234] In FIG. 8D, while displaying image representation 808 with subject 816 detected, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 801 of depth effect affordance 810.
[0235] In FIG. 8E, in response to detecting activation 801 of depth effect affordance 810, electronic device 600 displays (e.g., within a menu region of user interface 804 below image display region 806) a depth adjustment slider 822 (corresponding to depth adjustment slider 632 described above with reference to FIGS. 6A-6R). As with depth adjustment slider 632, depth adjustment slider 822 includes a plurality of tickmarks 824 corresponding to f-numbers, a needle 826 indicating the currently-selected tickmark (and thus the currently-selected f-number), and an f-number indicator 828 (e.g., located below or adjacent to the slider) indicating the value of the currently-selected f-number. In FIG. 8E, because the current f-number is the default f-number, f-number indicator 828 indicates the default f-number value (e.g., of 4.5). In some embodiments, when depth adjustment slider 822 is activated, in addition to f-number indicator 828, depth effect affordance 810 also displays the current f-number. [0236] In FIG. 8E, while displaying depth adjustment slider 822, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 803 (e.g., a horizontal swipe gesture, a swipe-right gesture) on depth adjustment slider 822, thereby causing tickmarks 824 to horizontally slide relative to the affixed needle 826.
[0237] As shown in FIG. 8F, swipe gesture 803 causes depth adjustment slider 822 to slide such that a lower f-number (e.g., of 1.6) is set as the current f-number, as indicated by f-number indicator 828 (and, in some embodiments, also by depth effect affordance 810).
[0238] In FIG. 8F, electronic device 600 adjusts image representation 808 to reflect the new depth-of-field value (e.g., of 1.6). Specifically, because of the smaller simulated depth-of-field value, light-emitting object 818A is more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5). Similarly, because of the smaller simulated depth-of-field value, light-emitting objects 818B are more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5).
Similarly, because of the smaller simulated depth-of-field value, light-emitting objects 818C are more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5). Similarly, because of the smaller simulated depth-of-field value, non-light-emitting object 820A is more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f- number 1.6) than in FIG. 8E (with f-number 4.5). Similarly, because of the smaller simulated depth-of-field value, non-light-emitting object 820B is more distorted (e.g., blurrier, larger, brighter, more saturated, and/or with a more distorted shape) in FIG. 8F (with f-number 1.6) than in FIG. 8E (with f-number 4.5).
[0239] Further, the degree of distortion (e.g., the degree of blurriness, the size, the degree of brightness, the degree of saturation, and/or the degree of distortion in the shape of the object relative to the focal point) of the objects differs based on the distance of each object to the focal point of image representation 808 (e.g., the nose of subject 816). Specifically, if each depth pixel (e.g., comprising a particular object) in image representation 808 defines the position in the viewpoint's z-axis where its corresponding two-dimensional pixel is located, and each pixel is defined by a value (e.g., 0 - 255, where the "0" value represents pixels that are located at the most distant place in a "three dimensional" scene and the "255" value represents pixels that are located closest to a viewpoint (e.g., camera) in the "three dimensional" scene), then the degree of blurriness/sharpness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion is dependent upon the distance in the z-axis direction (the value between 0 - 255). That is, the more distant the depth pixels in an object are in the z-direction, the more "blurry" the object will appear in image representation 808, and the closer the depth pixels in an object are in the z-direction, the sharper the object will appear in image representation 808. Meanwhile, if image representation 808 is viewed as a two-dimensional x, y-plane with the focal point (e.g., the nose of subject 816) as the center (e.g., the origin) of the plane, the straight-line distance from the (x, y) point of the pixels constituting an object in image representation 808 to the center of the plane affects the degree of shape distortion of the object—the greater the distance of the pixels from the center (the focal point), the greater the degree of shape distortion, and the closer the distance of the pixels from the center, the more minimal the shape distortion.
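By way of a non-limiting illustration, the following Swift sketch (the scaling constants are assumptions) expresses the two relationships described above: blur that grows with a pixel's depth distance from the focal plane, and shape distortion that grows with its planar distance from the focal point, both strengthening as the simulated f-number decreases.

```swift
import CoreGraphics

// A simplified sketch of the per-pixel relationships described above (the scaling
// constants are assumptions). Depth values run from 0 (most distant) to 255 (closest
// to the viewpoint); blur grows with a pixel's depth distance from the focal plane,
// while shape distortion grows with its planar (x, y) distance from the focal point,
// and both effects strengthen as the simulated f-number decreases.
struct SimulatedDepthEffectModel {
    var focalDepth: Int        // depth value (0-255) at the focal point
    var focalPoint: CGPoint    // focal point in image coordinates
    var fNumber: Double        // currently-selected simulated f-number

    func blurRadius(forDepth depth: Int) -> Double {
        let depthDistance = Double(abs(depth - focalDepth)) / 255.0
        return depthDistance * (4.5 / fNumber) * 20.0        // assumed maximum radius
    }

    func shapeDistortion(at point: CGPoint) -> Double {
        let planarDistance = Double(hypot(point.x - focalPoint.x, point.y - focalPoint.y))
        return planarDistance * (4.5 / fNumber) * 0.002      // assumed gain per point
    }
}
```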
[0240] For example, in FIG. 8F, the degree of distortion of object 818B-1 is greater than the degree of distortion of object 818B-2 (e.g., object 818B-1 is relatively blurrier, larger, brighter, more saturated, and/or more shape-distorted relative to the focal point than object 818B-2) because object 818B-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818B-2. Similarly, in FIG. 8F, the degree of distortion of object 818C-1 is greater than the degree of distortion of object 818C-2 (e.g., object 818C-1 becomes relatively "blurrier" and more shape-distorted relative to the focal point than object 818C-2) because object 818C-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818C-2. Differences in the degree of distortion based on the distance of an object to the focal point also apply to non-light-emitting objects (e.g., objects 820A and 820B) and, in some embodiments, to portions of subject 816 not corresponding to the focal point (e.g., the upper body of the subject, portions of the face and head of the subject surrounding the focal point).
[0241] Further, the degree of distortion (e.g., the degree of blurriness, difference in size, the degree of brightness, the degree of saturation, and/or the degree of distortion in the shape of the object relative to the focal point) of the objects differs based on the type of the object— whether the object corresponds to a light-emitting object or a non-light-emitting object. The resulting change in distortion is generally greater for light-emitting objects than for non-light-emitting objects for the same adjustment in depth-of-field.
[0242] In some embodiments, the depth-of-field characteristics of the objects are adjusted continuously as depth adjustment slider 822 is navigated (e.g., from 4.5 in FIG. 8E to 1.6 in FIG. 8F).
[0243] In FIG. 8G, while the f-number is set at 1.6, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) a swipe gesture 805 (e.g., a horizontal swipe gesture, a swipe-left gesture) on depth adjustment slider 822, thereby causing tickmarks 824 to horizontally slide in the opposite direction relative to the affixed needle 826.
[0244] As shown in FIG. 8H, swipe gesture 805 causes depth adjustment slider 822 to slide such that a higher f-number (e.g., of 8.7) is set as the current f-number, as indicated by f-number indicator 828 (and, in some embodiments, also by depth effect affordance 810).
[0245] In FIG. 8H, electronic device 600 adjusts image representation 808 to reflect the new depth-of-field value (e.g., of 8.7). Specifically, because of the larger simulated depth-of-field value, light-emitting object 818A is less distorted (e.g., sharper, closer to an accurate representation of its real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number 1.6) and in FIG. 8E (with f-number 4.5). Similarly, because of the larger simulated depth-of-field value, light-emitting objects 818B are less distorted (e.g., sharper, closer to an accurate representation of their real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number 1.6) and in FIG. 8E (with f-number 4.5). Similarly, because of the larger simulated depth-of-field value, light-emitting objects 818C are less distorted (e.g., sharper, closer to an accurate representation of their real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number 1.6) and in FIG. 8E (with f-number 4.5). Similarly, because of the larger simulated depth-of-field value, non-light-emitting object 820A is less distorted (e.g., sharper, closer to an accurate representation of its real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number 1.6) and in FIG. 8E (with f-number 4.5). Similarly, because of the larger simulated depth-of-field value, non-light-emitting object 820B is less distorted (e.g., sharper, closer to an accurate representation of its real form) in FIG. 8H (with f-number 8.7) than in FIG. 8F (with f-number
1.6) and in FIG. 8E (with f-number 4.5). [0246] As already discussed above, the degree of distortion (e.g., the degree of blurriness, the difference in size, the degree of brightness, the degree of saturation, the degree of distortion in the shape of the object relative to the focal point) of the objects differs based on the distance of each object to the focal point of image representation 808 (e.g., the nose of subject 816). Thus, for example, in FIG. 8H, the degree of distortion of object 818B-1 is still greater than the degree of distortion of object 818B-2 (e.g., object 818B-1 is still relatively blurrier, larger, brighter, more saturated, and/or more shape-distorted relative to the focal point than object 818B-2) because object 818B-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818B-2. Similarly, in FIG. 8H, the degree of distortion of object 818C-1 is still greater than the degree of distortion of object 818C-2 (e.g., object 818C-1 becomes relatively blurrier, larger, brighter, more saturated, and/or more shape-distorted relative to the focal point than object 818C-2) because object 818C-1 is farther away from the focal point (e.g., the nose of subject 816) than object 818C-2.
[0247] FIGS. 8I-8M illustrate a plurality of circular objects 830 (which can be light-emitting objects or non-light-emitting objects) arranged in a five-by-five grid-like pattern with the focal point at center object 832. FIGS. 8I-8M also illustrate a depth adjustment slider 834
corresponding to depth adjustment slider 822 described above with reference to FIGS. 8A-8H. FIGS. 8I-8M are provided to further illustrate, in one embodiment, the distortion of objects under different f-number settings, where the degree of distortion differs based on a distance of an object from the focal point.
[0248] In FIG. 8I, as indicated by f-number indicator 836, the current f-number is set to 4.5 (e.g., the default f-number). FIG. 8I illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 4.5 f-number. As shown in FIG. 8I, objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0249] In FIG. 8J, as indicated by f-number indicator 836, the current f-number is set to 2.8. FIG. 8J illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 2.8 f-number. Objects 830 in FIG. 8J appear "larger" because, under a smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 830 in FIG. 8I. As in FIG. 8I, in FIG. 8J objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0250] In FIG. 8K, as indicated by f-number indicator 836, the current f-number is set to 1.0. FIG. 8K illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 1.0 f-number. Objects 830 in FIG. 8K appear even "larger" because, under an even smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 830 in FIG. 8J. As in FIGS. 8I-8J, in FIG. 8K objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0251] In FIG. 8L, as indicated by f-number indicator 836, the current f-number is set to 7.6. FIG. 8L illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 7.6 f-number. Objects 830 in FIG. 8L appear "smaller" than corresponding objects 830 in FIG. 8I because, under a larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 830 in FIG. 8I. Still, as in FIGS. 8I-8K, in FIG. 8L objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0252] In FIG. 8M, as indicated by f-number indicator 836, the current f-number is set to 14. FIG. 8M illustrates circular objects 830 adjusted, relative to object 832 as the focal point, with a 14 f-number. Objects 830 in FIG. 8M appear even "smaller" than corresponding objects 830 in FIG. 8L because, under an even larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 830 in FIG. 8L. As such, objects 830 in FIG. 8M are more of "true" circles than objects 830 in FIGS. 8I-8L. Still, as in FIGS. 8I-8L, in FIG. 8M objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point. [0253] FIGS. 8N-8R illustrate a plurality of circular objects 838 (which can be light-emitting objects or non-light-emitting objects) arranged in a five-by-five grid-like pattern with the focal point at center object 840 (similar to FIGS. 8I-8M). FIGS. 8N-8R also illustrate depth adjustment slider 834 corresponding to depth adjustment slider 822 described above with reference to FIGS. 8A-8H. FIGS. 8N-8R are provided to further illustrate, in another embodiment, the distortion of objects under different f-number settings, where the degree of distortion differs based on a distance of an object from the focal point.
[0254] In FIG. 8N, as indicated by f-number indicator 836, the current f-number is set to 4.5 (e.g., the default f-number). FIG. 8N illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 4.5 f-number. As shown in FIG. 8N, objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0255] In FIG. 8O, as indicated by f-number indicator 836, the current f-number is set to 2.8. FIG. 8O illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 2.8 f-number. Objects 838 in FIG. 8O appear "larger" because, under a smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 838 in FIG. 8N. As in FIG. 8N, in FIG. 8O objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0256] In FIG. 8P, as indicated by f-number indicator 836, the current f-number is set to 1.0. FIG. 8P illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 1.0 f-number. Objects 838 in FIG. 8P appear even "larger" because, under an even smaller f-number, the objects are more blurred, larger, brighter, more saturated, and/or with a more distorted shape than corresponding objects 838 in FIG. 8O. As in FIGS. 8N-8O, in FIG. 8P objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0257] In FIG. 8Q, as indicated by f-number indicator 836, the current f-number is set to 7.6.
FIG. 8Q illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 7.6 f-number. Objects 838 in FIG. 8Q appear "smaller" than corresponding objects 838 in FIG. 8N because, under a larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 838 in FIG. 8N. Still, as in FIGS. 8N-8P, in FIG. 8Q objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0258] In FIG. 8R, as indicated by f-number indicator 836, the current f-number is set to 14. FIG. 8R illustrates circular objects 838 adjusted, relative to object 840 as the focal point, with a 14 f-number. Objects 838 in FIG. 8R appear even "smaller" than corresponding objects 838 in FIG. 8Q because, under an even larger f-number, the objects are less blurred, smaller, less bright, less saturated, and/or with a less distorted shape and instead sharper than corresponding objects 838 in FIG. 8Q. As such, objects 838 in FIG. 8R are more of "true" circles than objects 838 in FIGS. 8N-8Q. Still, as in FIGS. 8N-8Q, in FIG. 8R objects that are farther away from the focal point are more distorted (e.g., more blurred, larger, brighter, more saturated, and/or with a more distorted shape) than objects that are on or closer to the focal point.
[0259] FIGS. 9A-9B are a flow diagram illustrating a method for managing user interfaces for displaying adjustments to a simulated depth effect, in accordance with some embodiments. Method 900 is performed at a device (e.g., 100, 300, 500, 600) with a display and one or more input devices (e.g., a touch-sensitive surface of the display, a mechanical input device). Some operations in method 900 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0260] As described below, method 900 provides an intuitive way for managing user interfaces for simulated depth effects. The method reduces the cognitive burden on a user for managing and navigating user interfaces for simulated depth effects, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to navigate user interfaces faster and more efficiently by providing easy management of user interfaces for simulating depth effects conserves power and increases the time between battery charges. [0261] The electronic device (e.g., 600) receives (902), via the one or more input devices, a request to apply a simulated depth effect to a representation of image data (e.g., 808, a displayed image corresponding to the image data, a portrait image of a person/subject), wherein depth data for a subject within the representation of image data is available.
[0262] In some embodiments, the representation of image data (e.g., 808) is a live-feed image currently being captured by one or more cameras of the electronic device. In some embodiments, the representation of image data is a previously-taken image stored in and retrieved from memory (of the electronic device or an external server). In some embodiments, the depth data of the image can be adjusted / manipulated to apply a depth effect to the representation of image data.
[0263] In some embodiments, the image data includes at least two components: an RGB component that encodes the visual characteristics of a captured image, and depth data that encodes information about the relative spacing relationship of elements within the captured image (e.g., the depth data encodes that a user is in the foreground, and background elements, such as a tree positioned behind the user, are in the background).
[0264] In some embodiments, the depth data is a depth map. In some embodiments, a depth map (e.g., depth map image) contains information (e.g., values) that relates to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of a depth map, each depth pixel defines the position in the viewpoint's z-axis where its corresponding two-dimensional pixel is located. In some examples, a depth map is composed of pixels wherein each pixel is defined by a value (e.g., 0 - 255). For example, the "0" value represents pixels that are located at the most distant place in a "three dimensional" scene and the "255" value represents pixels that are located closest to a viewpoint (e.g., camera) in the "three dimensional" scene. In other examples, a depth map represents the distance between an object in a scene and the plane of the viewpoint. In some embodiments, the depth map includes information about the relative depth of various features of an object of interest in view of the depth camera (e.g., the relative depth of eyes, nose, mouth, ears of a user's face). In some embodiments, the depth map includes information that enables the device to determine contours of the object of interest in a z direction. In some embodiments, the depth data has a second depth component (e.g., a second portion of depth data that encodes a spatial position of the background in the camera display region; a plurality of depth pixels that form a discrete portion of the depth map, such as a background), separate from the first depth component, the second depth aspect including the representation of the background in the camera display region. In some embodiments, the first depth aspect and second depth aspect are used to determine a spatial relationship between the subject in the camera display region and the background in the camera display region. This spatial relationship can be used to distinguish the subject from the background. This distinction can be exploited to, for example, apply different visual effects (e.g., visual effects having a depth component) to the subject and background. In some embodiments, all areas of the image data that do not correspond to the first depth component (e.g., areas of the image data that are out of range of the depth camera) are adjusted based on different degrees of blurriness/sharpness, the size, the degree of brightness, the degree of saturation, and/or the degree of shape-distortion in order to simulate a depth effect, such as a Bokeh effect.
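By way of a non-limiting illustration, the following Swift sketch (hypothetical types) models such a depth map of 0-255 values and a simple threshold-based separation of a foreground subject from the background.

```swift
import Foundation

// A minimal sketch (hypothetical types) of the depth map described above: each depth
// pixel holds a value from 0 (most distant in the scene) to 255 (closest to the
// viewpoint), and a simple threshold separates a foreground subject from the
// background so that different visual effects can be applied to each.
struct DepthMap {
    let width: Int
    let height: Int
    let values: [UInt8]   // row-major, width * height depth pixels

    func depth(atX x: Int, y: Int) -> UInt8 {
        values[y * width + x]
    }

    // Returns true for depth pixels assumed to belong to the foreground subject.
    func subjectMask(threshold: UInt8 = 128) -> [Bool] {
        values.map { $0 >= threshold }
    }
}
```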
[0265] In some embodiments, the request corresponds to an adjustment (e.g., a sliding gesture in a horizontal or vertical direction) of an adjustable slider (e.g., 822) associated with modifying/adjusting the simulated depth effect applied to / being applied to the representation of image data (e.g., 808). Applying a simulated depth effect to a representation of image data using an adjustable slider enhances visual feedback by enabling the user to quickly and easily view adjustments being made by the user. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0266] In some embodiments, the simulated depth effect is "simulated" in that the effect is (artificially) generated based on a manipulation of the underlying image data to create and apply the effect to the corresponding representation of image data (e.g., 808) (e.g., as opposed to being a "natural" effect that is based on underlying data as originally captured via one or more cameras). [0267] In some embodiments, receiving, via the one or more input devices, the request to apply the simulated depth effect to the representation of image data (e.g., 808) comprises detecting, via the one or more input devices, one or more inputs selecting a value of an image distortion parameter, wherein distorting (a portion of) the representation of image data is based on (and is responsive to) one or more user inputs selecting a value of an image distortion parameter (e.g., via a movement of the adjustable slider for controlling the parameter). In some embodiments, the adjustable slider is adjusted to distort (e.g., apply a simulated depth effect to) the representation of image data, as described above with reference to FIGS. 6A-6T. Providing an adjustable slider to be used to distort the representation of image data enhances user convenience by enabling the user to easily and efficiently make adjustments to the displayed representation of image data. Providing additional control options and reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0268] In some embodiments, selecting a different value for the image distortion parameter causes a first change to the first portion of the representation of the image data and causes a second change to the second portion of the representation of the image data, wherein the first change is different from the second change and the first change and the second change both include the same type of change (e.g., an increase or decrease in blurriness, size, brightness, saturation, and/or shape-distortion).
[0269] In response to receiving (904) the request to apply the simulated depth effect to the representation of image data (e.g., 808), the electronic device (e.g., 600) displays, on the display (e.g., 602), the representation of image data with the simulated depth effect. Displaying the representation of image data with the simulated depth effect in response to receiving the request to apply the simulated depth effect to the representation of image data enables a user to quickly and easily view and respond to the adjustments being made to the representation of image data. Providing convenient control options and reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0270] Displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect includes distorting (906) a first portion of the representation of image data that has a first depth in a first manner (e.g., a first particular blurriness/sharpness, a first particular size, a first particular brightness, a first particular saturation, and/or a first particular shape), wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data (e.g., a center of a field of view of a camera or a point of focus of the camera). Enabling a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion). This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0271] Displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect also includes distorting a second portion of the representation of image data that has the first depth in a second manner (e.g., a second particular
blurriness/sharpness, a second particular size, a second particular brightness, a second particular saturation, and/or a second particular shape) that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data. Enabling a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion). This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0272] In some embodiments, displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further includes distorting (910) a third portion of the representation of image data that is a same distance from the predefined portion as the first portion and has a second depth that is different from the first depth in the first manner with a magnitude (e.g., of blurriness/sharpness) determined based on the second depth (e.g., the depth of the third portion). Enabling a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion). This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when
operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0273] In some embodiments, displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further includes distorting (912) a fourth portion of the representation of image data that is a same distance from the predefined portion as the second portion and has the second depth in the second manner with a magnitude (e.g., of blurriness/sharpness) determined based on the second depth (e.g., the depth of the fourth portion). Enabling a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion). This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0274] In some embodiments, displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further includes distorting (914) one or more portions of the representation of image data that are a same distance from the predefined portion (e.g., a reference point or focus point within the representation of image data) as the first portion and have the first depth, in the first manner. Thus, in some embodiments, portions of the representation of image data that have the same depth and are the same distance away from the predefined portion of the representation of image data are distorted in the same way. Enabling a user to adjust a representation of image data to apply an accurate simulated depth effect enhances user convenience/efficiency and operability and versatility of the device by allowing the user to create a similar image/photo to what the user would have otherwise only been able to obtain using a larger and/or more expensive piece of hardware (e.g., a professional-level camera). That is, the simulated depth effect (a software effect) enables the user to utilize a device that is relatively smaller and less expensive to apply a depth effect to an image/photo (e.g., as opposed to if the user was using a camera sensor and lens included in / attached to the device that is capable of producing the depth effect via optical distortion). This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0275] In some embodiments, distorting the first portion of the representation of image data (e.g., 808) in the first manner comprises distorting the first portion based on (e.g., by applying) a first distortion shape (e.g., a circular shape or a lemon/oval-type shape). In some embodiments, distorting the second portion of the representation of image data in the second manner comprises distorting the second portion based on (e.g., by applying) a second distortion shape (e.g., a more circular shape or a more lemon/oval-type shape) different from the first distortion shape. In some embodiments, if the second portion is at a greater distance (farther) from the predefined portion than the first portion, one or more objects (e.g., light-emitting objects) within the second portion are shape-distorted to a more lemon/oval shape than one or more objects (e.g., light- emitting objects) within the first portion.
[0276] In some embodiments, distorting the first portion of the representation of image data (e.g., 808) in the first manner comprises distorting the first portion by a first degree of distortion (e.g., a degree of distortion of a shape of one or more objects within the first portion). In some embodiments, distorting the second portion of the representation of image data in the second manner comprises distorting the second portion by a second degree of distortion (e.g., a degree of distortion of a shape of one or more objects within the second portion) that is greater than the first degree of distortion, wherein the second portion is at a greater distance (farther) from the predefined portion (e.g., a reference point or focus point within the representation of image data) than the first portion. In some embodiments, objects in the periphery of the representation of image data are distorted to be more lemon/oval in shape, whereas objects closer to the predefined portion (e.g., a center portion, a focus portion) are less distorted. In some embodiments, the degree of distortion changes (e.g., increases or decreases) gradually as the distance from the predefined portion changes.
[0277] In some embodiments, distorting the first portion in the first manner comprises blurring (e.g., asymmetrically blurring / changing the sharpness of) the first portion by a first magnitude. In some embodiments, distorting the second portion in the second manner comprises blurring (e.g., asymmetrically blurring / changing the sharpness of) the second portion by a second magnitude. In some embodiments, in accordance with a determination that the first portion is a greater distance from the predefined portion than the second portion is from the predefined portion (e.g., a reference point or focus point within the representation of image data), the first magnitude is greater than the second magnitude. In some embodiments, in accordance with a determination that the second portion is a greater distance from the predefined portion than the first portion is from the predefined portion, the second magnitude is greater than the first magnitude.
[0278] In some embodiments, prior to receiving the request to apply the simulated depth effect to the representation of image data (e.g., 808), the electronic device (e.g., 600) displays, on the display (e.g., 602), the representation of image data. In some embodiments, while displaying the representation of image data, the electronic device (e.g., 600) detects, using the image data (e.g., via an analysis of the image data and/or based on a user input identifying that the region of the representation of image data includes a subject, such as a tap input in a live preview of camera data), a presence of the subject (e.g., a person, at least a portion of the person, such as the face of a person or a face and upper body of a person) within the representation of image data.
[0279] In some embodiments, displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further comprises distorting the first portion of the image and the second portion of the image without distorting (916) a portion of the representation of image data corresponding to (a center portion/region of) the subject. In some embodiments, the portion of the representation of image data corresponding to the subject is distorted less than the first portion of the image and the second portion of the image.
[0280] In some embodiments, distorting the first portion of the representation of image data includes distorting the first portion in accordance with a determination that the first portion does not correspond to (a center portion/region of) the subject. In some embodiments, distorting the second portion of the representation of image data includes distorting the second portion in accordance with a determination that the second portion does not correspond to (a center portion/region of) the subject. [0281] In some embodiments, in response to receiving the request to apply the simulated depth effect to the representation of image data (e.g., 808), the electronic device (e.g., 600) identifies (918), based on the image data (e.g., via an analysis of the image data), one or more objects within the representation of image data that are associated with light-emitting objects (e.g., 818A, 818B, 818C, 818D) (e.g., as opposed to those that are not associated with light- emitting objects).
[0282] In some embodiments, displaying, on the display (e.g., 602), the representation of image data (e.g., 808) with the simulated depth effect further comprises changing (920) an appearance of the one or more portions of the representation of image data that are associated with (e.g., are identified as) light-emitting objects (e.g., 818A, 818B, 818C, 818D) in a third manner relative to one or more portions of the representation of image data that are not associated with (e.g., are not identified as) light-emitting objects (e.g., 820A, 820B). In some embodiments, the third manner involves blurring/sharpening the objects by a greater magnitude compared to the fourth manner. In some embodiments, the third manner involves distorting the shape of the objects by a greater degree compared to the fourth manner.
[0283] In some embodiments, changing the appearance of objects in the representation of image data (e.g., 808) that are associated with light-emitting objects (e.g., 818A, 818B, 818C, 818D) in the third manner includes one or more of: increasing (922) a brightness of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects, increasing (924) a saturation of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects, and increasing (926) a size of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects (e.g., 820A, 820B).
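One way to picture the differing treatment of light-emitting and non-emitting regions described in paragraphs [0282] and [0283] is the following Swift sketch, in which identified light sources receive brightness, saturation, and size boosts. The specific gain factors and property names are assumptions for illustration only.

/// Illustrative sketch: regions identified as light-emitting get their
/// brightness, saturation, and rendered size boosted relative to regions
/// that are not light-emitting. The gain constants are assumptions.
struct RegionAppearance {
    var brightness: Double   // 0...1
    var saturation: Double   // 0...1
    var size: Double         // rendered diameter of the bokeh disc, in points
}

func applySimulatedDepthEffect(to region: RegionAppearance,
                               isLightEmitting: Bool) -> RegionAppearance {
    var out = region
    if isLightEmitting {
        // "Third manner": emphasize light sources so they render as bright,
        // saturated, enlarged bokeh discs.
        out.brightness = min(region.brightness * 1.4, 1.0)
        out.saturation = min(region.saturation * 1.3, 1.0)
        out.size       = region.size * 1.5
    } else {
        // Non-emitting regions are only softened, not emphasized.
        out.brightness = region.brightness * 0.95
    }
    return out
}

let streetLamp = RegionAppearance(brightness: 0.8, saturation: 0.6, size: 10)
print(applySimulatedDepthEffect(to: streetLamp, isLightEmitting: true).size)  // 15.0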
[0284] In some embodiments, the electronic device (e.g., 600) detects (928), via the one or more input devices, one or more inputs changing a value of an image distortion parameter, wherein distorting (a portion of) the representation of image data (e.g., 808) is based on (and is responsive to) one or more user inputs selecting a value of an image distortion parameter (e.g., via a movement of the adjustable slider for controlling the parameter). In some embodiments, the adjustable slider (e.g., 822) is adjusted to distort (e.g., apply a simulated depth effect to) the representation of image data. In some embodiments, providing an adjustable slider to distort the representation of image data enables a user to quickly and easily provide one or more inputs to change a value of an image distortion parameter to distort the representation of image data. Providing additional control options and reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some embodiments, in response to detecting the one or more inputs (e.g., 803, 805) changing the value of the image distortion parameter, changing (930) the magnitude of change of the appearance of one or more portions of the representation of image data that are associated with light-emitting objects (e.g., 818A, 818B, 818C, 818D) relative to other portions of the representation of image data that are not associated with light-emitting objects (e.g., 820A,
820B) (e.g., gradually increasing a brightness, size, and/or saturation of the objects associated with light-emitting sources relative to other portions of the representation of image data as the distortion parameter gradually increases (and the blurriness of regions of the image outside of the simulated focal plane gradually increases), and gradually decreasing a brightness, size, and/or saturation of the objects associated with light-emitting sources relative to other portions of the representation of image data as the distortion parameter gradually decreases (and the blurriness of regions of the image outside of the simulated focal plane gradually decreases)).
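The coupling between the slider-selected distortion parameter and the magnitude of these changes, described in paragraph [0284], can be sketched as a simple mapping in Swift. The f-number-style parameter range and the output scales below are assumptions; the sketch only shows that blur and highlight emphasis increase together as the parameter moves toward a wider simulated aperture.

/// Illustrative sketch: as the slider-selected distortion parameter changes
/// (modeled here as a simulated f-number), the background blur and the
/// emphasis applied to light-emitting regions change gradually together.
/// The parameter range and the mapping are assumptions for illustration.
func effectMagnitudes(forSimulatedFNumber f: Double,
                      minF: Double = 1.4, maxF: Double = 16.0)
    -> (blur: Double, highlightGain: Double) {
    let clamped = min(max(f, minF), maxF)
    // Smaller f-numbers mean a wider simulated aperture, hence more blur.
    let strength = (maxF - clamped) / (maxF - minF)      // 0...1
    return (blur: strength * 20.0,                        // assumed max blur radius
            highlightGain: 1.0 + strength * 0.6)          // assumed brightness/size gain
}

// Moving the slider from f/16 toward f/1.4 gradually increases both values.
print(effectMagnitudes(forSimulatedFNumber: 16.0))  // (blur: 0.0, highlightGain: 1.0)
print(effectMagnitudes(forSimulatedFNumber: 4.5))
print(effectMagnitudes(forSimulatedFNumber: 1.4))   // (blur: 20.0, highlightGain: 1.6)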
[0285] Note that details of the processes described above with respect to method 900 (e.g., FIGS. 9A-9B) are also applicable in an analogous manner to the methods described above and below. For example, method 700 optionally includes one or more of the characteristics of the various methods described above with reference to method 900. For example, the depth adjustment slider described in method 700 can be used to apply the simulated depth effect to objects within an image representation. For another example, method 1100 optionally includes one or more of the characteristics of the various methods described above with reference to method 900. For example, the notification concerning detected interference, as described in method 1100, can be associated with detected magnetic interference that can interfere with one or more depth sensors used for simulating depth effects. For brevity, these details are not repeated below.
[0286] FIGS. 10A-10F illustrate exemplary user interfaces for indicating an interference to adjusting simulated image effects (e.g., simulated depth effects, such as a Bokeh effect), in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIG. 11.
[0287] FIG. 10A illustrates a rear view of electronic device 600. In some embodiments, electronic device 600 includes one or more rear-facing cameras 608 and one or more rear depth camera sensors 1002 (e.g., similar to depth camera sensors 175). In some embodiments, one or more rear-facing cameras 608 are integrated with one or more rear depth camera sensors 1002.
[0288] FIG. 10B illustrates a front view of electronic device 600 with display 602. In some embodiments, electronic device 600 includes one or more front-facing cameras 606 and one or more front depth camera sensors 1004. In some embodiments, one or more front-facing cameras 606 are integrated with one or more front depth camera sensors 1004.
[0289] In FIG. 10B, electronic device 600 displays, on display 602, an affordance 1006 for launching the image capture application. Further in FIG. 10B, while displaying affordance 1006, electronic device detects (e.g., via a touch-sensitive surface of display 602) an activation 1001 of affordance 1006.
[0290] In FIG. 10C, in response to detecting activation 1001 of affordance 1006 for launching the image capture application, electronic device 600 displays, on display 602, a user interface 1008 of the image capture application (e.g., corresponding to user interface 614 and user interface 804). Upon (or prior to / in response to) launching the image capture application, electronic device 600 does not detect an interference (e.g., a magnetic interference or other external interference, such as from an accessory of the device) that may impede or hinder the operation of one or more sensors (e.g., one or more depth sensors 1002 and 1004 of the device) that are used to perform a simulated image effect function of the image capture application (e.g., the simulated depth effect described above with reference to FIGS. 6A-6T and 8A-8M). As such, electronic device 600 does not display a notification indicative of the presence of an interference.
[0291] FIG. 10D illustrates a rear view of electronic device 600, where the device is at least partially covered by a protective case 1010 (e.g., a smartphone case). Protective case 1010 includes a magnetic component 1012 (e.g., for securing the case and device to a holder, such as a car mount; a magnetic component that is part of an external battery case) detectable by one or more sensors of electronic device 600.
[0292] FIG. 10E illustrates a front view of electronic device 600 at least partially covered by protective case 1010. In FIG. 10E, electronic device 600 displays, on display 602, affordance 1006 for launching the image capture application. Further in FIG. 10E, while displaying affordance 1006, electronic device 600 detects (e.g., via a touch-sensitive surface of display 602) an activation 1003 of affordance 1006.
[0293] In FIG. 10F, in response to detecting activation 1003 of affordance 1006 for launching the image capture application, electronic device 600 displays, on display 602, user interface 1008 of the image capture application (e.g., corresponding to user interface 614 and user interface 804). Upon (or prior to / in response to) launching the image capture application, electronic device 600 detects an interference (e.g., a magnetic interference) from magnetic component 1012 of protective case 1010.
[0294] As shown in FIG. 10F, in response to detecting the interference, electronic device 600 displays (e.g., over user interface 1008 of the image capture application) a notification 1014 indicating that an interference has been detected and, because of the interference, one or more simulated image effects features (e.g., including the simulated depth effect feature described above with reference to FIGS. 6A-6T and 8A-8M) may be affected by the detected interference. In some embodiments, notification 1014 also includes an affordance 1016 for closing the notification and continuing with the use of the simulated image effects features despite the presence of the interference. [0295] In some embodiments, electronic device 600 displays notification 1014 after having previously detected the presence of the interference (e.g., from magnetic component 1012 of protective case 1010) in a predetermined number of instances (e.g., after having launched the image capture application and detected the interference for 3, 5, or 7 times). Thus, in some embodiments, if there were no previous instances of detection of the interference, electronic device 600 forgoes displaying notification 1014 upon launching the image capture application despite having detected the interference from magnetic component 1012 of protective case 1010.
[0296] In some embodiments, if notification 1014 has already previously been presented on the device, electronic device 600 displays a new notification 1014 after detecting the presence of the interference (e.g., from magnetic component 1012 of protective case 1010) in a greater number of instances than when notification 1014 was previously displayed. For example, if previous notification 1014 was displayed after having detected the interference upon 3 previous launches of the image capture application, electronic device 600 forgoes displaying new notification 1014 until having detected the interference in 5 previous launches of the image capture application.
[0297] In some embodiments, if notification 1014 has already been presented on the device a predetermined number of times, electronic device 600 forgoes presenting the notification despite subsequent instances of detection of the interference.
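The escalating notification behavior of paragraphs [0295]-[0297] amounts to a small amount of per-device state. The following Swift sketch uses assumed thresholds of 3, 5, and 7 detections and an assumed cap of three notifications; the values and names are illustrative, not taken from the disclosure.

/// Illustrative sketch of the notification policy described above: the first
/// notification requires several discrete interference detections, each later
/// notification requires more, and after a cap no further notifications are
/// shown. The threshold values and the cap are assumptions.
struct InterferenceNotificationPolicy {
    private let thresholds = [3, 5, 7]     // detections required per notification
    private let maxNotifications = 3
    private var detectionCount = 0
    private var notificationsShown = 0

    /// Call each time the camera application launches and interference is
    /// detected. Returns true when the notification should be displayed.
    mutating func recordDetection() -> Bool {
        detectionCount += 1
        guard notificationsShown < maxNotifications else { return false }
        let required = thresholds[min(notificationsShown, thresholds.count - 1)]
        if detectionCount >= required {
            detectionCount = 0
            notificationsShown += 1
            return true
        }
        return false
    }
}

var policy = InterferenceNotificationPolicy()
let launches = (1...20).map { _ in policy.recordDetection() }
print(launches.filter { $0 }.count)  // at most 3 notifications across 20 launches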
[0298] In some embodiments, in response to detecting an activation of affordance 1016, electronic device 600 changes a mode of one or more simulated image effects (e.g., including the simulated depth effect) such that one or more features of an image effect become unavailable or are provided in a reduced (stripped-down) form.
[0299] FIG. 11 is a flow diagram illustrating a method for managing user interfaces for indicating an interference to adjusting simulated image effects, in accordance with some embodiments. Method 1100 is performed at a device (e.g., 100, 300, 500, 600) with a display and one or more sensors (e.g., one or more cameras, an interference detector capable of detecting an interference, such as magnetic interference, originating from a source that is external to the electronic device), including one or more cameras. Some operations in method 1100 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0300] As described below, method 1100 provides an intuitive way for managing user interfaces for simulated depth effects. The method reduces the cognitive burden on a user for managing and navigating user interfaces for simulated depth effects, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to navigate user interfaces faster and more efficiently by providing easy management of user interfaces for simulating depth effects conserves power and increases the time between battery charges.
[0301] While displaying, on the display (e.g., 602), a user interface of a camera application (e.g., 1008), the electronic device (e.g., 600) detects (1102), via the one or more sensors, external interference (e.g., from 1012) that will impair operation of a respective function of the one or more cameras (e.g., 606, 608) (e.g., magnetic interference; an interference that affects one or more camera related functions of the electronic device (e.g., one or more depth effect-related functions)) (e.g., from an accessory attached to, affixed to, covering, or placed near the electronic device, such as a protective case of the device or an external attachment on the device).
Automatically detecting the external interference that will impair operation of a respective function of the one or more cameras reduces the number of inputs required from the user to control the device by enabling the user to bypass having to manually check whether there are external interferences affecting one or more functions of the device. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. Further, automatically detecting the external interference that will impair operation of a respective function of the one or more cameras and notifying the user of the detection provides the user with the option to correct the issue while still allowing the device to continue to operate at a reduced level of operation. This in turn enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0302] In some embodiments, the respective function is (1104) a focus function of the one or more cameras (e.g., 606, 608) of the electronic device (e.g., 600).
[0303] In some embodiments, the interference is (1106) magnetic interference (e.g., from 1012).
[0304] In some embodiments, the interference is (1108) from (e.g., is caused by or is detected because of) an accessory (e.g., 1010) of the electronic device (e.g., 600) (e.g., a protective outer case or cover (e.g., a case or cover that incorporates a battery) for the electronic device, a magnetic sticker or attachment piece affixed to / attached to the electronic device).
[0305] In some embodiments, detecting the external interference (e.g. from 1012) that will impair the operation of the respective function of the one or more cameras (e.g., 606, 608) includes detecting the external interference upon displaying a user interface (e.g., 1008) for the camera application (e.g., in response to a user request to display a user interface for the camera application) on the electronic device. In some embodiments, the electronic device (e.g., 600) detects for the external interference that will impair the operation of the respective function of the one or more cameras only when the user interface for the camera application is displayed, and does not detect for the external interference after the user interface for the camera application has been displayed or when the user interface for the camera application is not displayed on the electronic device. Detecting for the external interference only when the user interface for the camera application is displayed, and not detecting for the external interference after the user interface for the camera application has been displayed or when the user interface for the camera application is not displayed reduces power consumption by detecting for the external interference when the functionality that may be affected by the external interference may be used on the device. Reducing power consumption enhances the operability of the device by improving the battery life of the device.
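The gating described in paragraph [0305], in which the interference check runs only when the camera user interface is presented, can be sketched as follows in Swift. The closure standing in for the sensor read is an assumption made for the sketch; no real sensor API is implied.

/// Illustrative sketch: the interference check runs only at the moment the
/// camera user interface is displayed, not continuously in the background,
/// which limits sensor polling to times when depth-related functions may be
/// used. `readsMagneticInterference` stands in for the actual sensor query
/// and is an assumption, not a real API.
final class CameraInterferenceGate {
    private let readsMagneticInterference: () -> Bool
    private(set) var interferencePresent = false

    init(readsMagneticInterference: @escaping () -> Bool) {
        self.readsMagneticInterference = readsMagneticInterference
    }

    /// Invoked when the camera UI becomes visible; this is the only place
    /// the interference sensor is consulted in this sketch.
    func cameraUserInterfaceDidAppear() {
        interferencePresent = readsMagneticInterference()
    }
}

// Example: simulate a magnetic accessory being present at launch time.
let gate = CameraInterferenceGate(readsMagneticInterference: { true })
gate.cameraUserInterfaceDidAppear()
print(gate.interferencePresent)  // true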
[0306] In response to detecting (1110) the interference (e.g., from 1012) external to the electronic device (e.g., 600), in accordance with a determination that a first criteria has been satisfied (e.g., including the current occurrence, at least a predetermined number of previous occurrences of the interference has been detected, such as occurrences detected when the camera application was previously launched on the electronic device), the electronic device displays (1112), on the display (e.g., 602), a notification (e.g., 1014) indicating that an operation mode (e.g., a depth effect mode) of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras (e.g., 606, 608). Displaying a notification indicating that an operation mode (e.g., a depth effect mode) of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras improves visual feedback by enabling the user to quickly and easily recognize that the device has changed an operation mode (e.g., a depth effect mode) of the one or more cameras to reduce an impact of the external interference. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0307] In response to detecting (1110) the interference external to the electronic device (e.g., 600), in accordance with a determination that the first criteria has not been satisfied (e.g., including the current occurrence, fewer than the predetermined number of previous occurrences of the interference has been detected), the electronic device (e.g., 600) forgoes displaying (1120), on the display (e.g., 602), the notification (e.g., 1014) indicating that the operation mode (e.g., a depth effect mode) of the one or more cameras (e.g., 606, 608) has been changed. Forgoing displaying the notification if fewer than the predetermined number of previous occurrences of the interference has been detected improves device functionality by forgoing providing notifications for one-off events of interference detection (as opposed to persistent interference detection from, for example, an accessory of the device). Forgoing providing unnecessary notifications enhances user convenience and the operability of the device and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0308] In some embodiments, the first criteria includes (1114) a requirement that is met when at least a first predetermined number (e.g., 5, 7, 11) of (discrete instances of) occurrences of detecting the external interference (e.g., from 1012) by the electronic device (e.g., 600) has occurred. Thus, in some embodiments, the predetermined number of discrete detections of the external interference is required to trigger display of the notification. In some embodiments, a discrete occurrence of detection of the external interference occurs when the user attempts to use the camera application in a manner that would make use of the respective function of the one or more cameras and the device checks for external interference to determine whether the device is able to use the respective function of the one or more cameras and determines that the external interference is present. In some embodiments, the device checks for the external interference at predetermined intervals (e.g., once per hour, once per day, the first time each day that the camera application is used).
[0309] In some embodiments, the first predetermined number is (1116) dependent on (e.g., changes based on) the number of times the notification (e.g., 1014) has previously been displayed on the electronic device (e.g., 600). In some embodiments, the first predetermined number of detections of the external interference required to trigger the notification progressively increases based on the number of notifications that have already been displayed by the electronic device. For example, if a particular number (e.g., 3) of discrete detections of the external interference is required to trigger display of the first notification, a larger number (e.g., 5) of discrete detections of the external interference is required to trigger display of the second notification, and a yet greater number (e.g., 7) of discrete detections of the external interference is required to trigger display of the third notification. Progressively increasing the first predetermined number of detections of the external interference required to trigger the notification enhances user convenience by forgoing displaying the notification too frequently even when the user may already be aware of the interference (based on the previous notification) but is choosing to ignore the interference. Enhancing user convenience enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[0310] In some embodiments, displaying, on the display (e.g., 602), the notification (e.g., 1014) includes displaying the notification in accordance with a determination that less than a second predetermined number of the notifications has previously been displayed on the electronic device (e.g., 600). In some embodiments, if at least the second predetermined number of notifications has previously been displayed on the electronic device, the electronic device forgoes displaying the notification (regardless of whether the first criteria has been satisfied).
[0311] In some embodiments, the change (1118) to the operation mode of the one or more cameras to reduce the impact of the external interference (e.g., from 1012) on the respective function of the one or more cameras (e.g., 606, 608) includes reducing (or lowering, diminishing) the responsiveness of one or more functions (e.g., simulated depth effect-related functions, optical image stabilization, autofocus, and/or operations that require precise movements of mechanical components that can be adversely affected by the presence of strong magnetic fields in the proximity of the mechanical components) of the one or more cameras (or disabling one or more of the functions altogether), wherein the one or more functions correspond to functions that cannot be reliably executed by the one or more cameras while the external interference is being detected by the electronic device.
[0312] Note that details of the processes described above with respect to method 1100 (e.g., FIG. 11) are also applicable in an analogous manner to the methods described above and below. For example, method 700 optionally includes one or more of the characteristics of the various methods described above with reference to method 1100. For example, adjusting a simulated depth effect using a depth adjustment slider, as described in method 700, can be affected by magnetic interference, which can interfere with one or more depth sensors used for simulating depth effects. For another example, method 900 optionally includes one or more of the characteristics of the various methods described above with reference to method 1100. For example, applying a simulated depth effect to objects within an image representation, as described in method 900, can be affected by magnetic interference, which can interfere with one or more depth sensors used for simulating depth effects. For brevity, these details are not repeated below.
[0313] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various
embodiments with various modifications as are suited to the particular use contemplated.
[0314] Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
[0315] As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve the functionality and versatility of simulated image effect features that can be applied to live feed and/or stored photos and images. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user’s health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
[0316] The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to recognize a person or subject within a captured image or photo. Accordingly, use of such personal information data enables users to more easily recognize the content of a captured image or photo and to organize such captured images or photos. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals. [0317] The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and
Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
[0318] Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of detection and recognition of a person or subject within an image or photo, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
[0319] Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
[0320] Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, images or photos can be organized based on non-personal information data or a bare minimum amount of personal information or publicly available information, such as the date and time associated with the image or photo.

Claims

CLAIMS

What is claimed is:
1. A method, comprising:
at an electronic device with a display and one or more input devices:
displaying, on the display, a representation of image data;
while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input;
in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and
in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
2. The method of claim 1, further comprising:
while displaying, on the display, the representation of image data, displaying, on the display, a simulated depth effect adjustment affordance, wherein the first input is an activation of the simulated depth effect adjustment affordance.
3. The method of claim 2, wherein: prior to detecting the first input, the simulated depth effect adjustment affordance is displayed with a first visual characteristic, and
after detecting the first input, the simulated depth effect adjustment affordance is displayed with a second visual characteristic different from the first visual characteristic.
4. The method of any one of claims 2-3, wherein displaying the simulated depth effect adjustment affordance comprises:
in accordance with a determination that the currently-selected depth effect value corresponds to a default depth effect value, forgoing displaying, in the simulated depth effect adjustment affordance, the currently-selected depth effect value; and
in accordance with a determination that the currently-selected depth effect value corresponds to a non-default depth effect value, displaying, in the simulated depth effect adjustment affordance, the currently-selected depth effect value.
5. The method of claim 1, wherein the first input is a swipe gesture in a first direction in a first portion of the user interface.
6. The method of any one of claims 1-5, further comprising:
in response to detecting the first input, sliding the representation of image data on the display to display the adjustable slider.
7. The method of any one of claims 1-6, further comprising:
prior to detecting the first input, displaying, on the display, one or more mode selector affordances, wherein displaying the adjustable slider comprises replacing display of the one or more mode selector affordances with the adjustable slider.
8. The method of any one of claims 1-6, further comprising:
prior to detecting the first input, displaying, on the display, a zoom control element, wherein displaying the adjustable slider comprises replacing display of the zoom control element.
9. The method of any one of claims 1-8, wherein the input directed to the adjustable slider is a swipe gesture on the adjustable slider, wherein the swipe gesture includes a user movement in a first direction having at least a first velocity at an end of the swipe gesture.
10. The method of any one of claims 1-9, wherein moving the adjustable slider comprises moving the plurality of option indicators while the selection indicator remains fixed.
11. The method of any one of claims 1-9, wherein moving the adjustable slider comprises moving the selection indicator while the plurality of option indicators remain fixed.
12. The method of any one of claims 1-11, further comprising:
while moving the adjustable slider, generating a first type of output in sync with the movement of the adjustable slider as different values are selected for a parameter controlled by the adjustable slider.
13. The method of claim 12, wherein, while moving the adjustable slider:
in accordance with a determination that the representation of image data corresponds to stored image data, the first type of output includes audio output; and
in accordance with a determination that the representation of image data corresponds to a live preview of image data being captured by the one or more cameras, the first type of output does not include audio output.
14. The method of any one of claims 1-13, wherein displaying, on the display, the representation of image data further comprises:
in accordance with a determination that the representation of image data corresponds to stored image data, displaying the representation of image data with a prior simulated depth effect as previously modified by a prior first value for the simulated depth effect.
15. A computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for performing the method of any of claims 1-14.
16. An electronic device, comprising:
a display;
one or more input devices;
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any of claims 1-14.
17. An electronic device, comprising:
a display;
one or more input devices; and
means for performing the method of any of claims 1-14.
18. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for:
displaying, on the display, a representation of image data;
while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; and
in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value; while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and
in response to detecting the input directed to the adjustable slider:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
19. An electronic device, comprising:
a display;
one or more input devices;
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display, a representation of image data;
while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, detecting, via the one or more input devices, a first input; and
in response to detecting the first input, displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
while displaying the adjustable slider, detecting, via the one or more input devices, an input directed to the adjustable slider; and
in response to detecting the input directed to the adjustable slider: moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
20. An electronic device, comprising:
a display;
one or more input devices;
means for displaying, on the display, a representation of image data;
means, while displaying the representation of image data with a simulated depth effect as modified by a first value of a plurality of selectable values for the simulated depth effect, for detecting, via the one or more input devices, a first input; and
means, in response to detecting the first input, for displaying, on the display, an adjustable slider associated with manipulating the representation of image data, wherein the adjustable slider includes:
a plurality of option indicators corresponding to a plurality of the selectable values for the simulated depth effect; and
a selection indicator indicating that the first value is a currently-selected simulated depth effect value;
means, while displaying the adjustable slider, for detecting, via the one or more input devices, an input directed to the adjustable slider; and
means, in response to detecting the input directed to the adjustable slider, for:
moving the adjustable slider to indicate that a second value, of the plurality of selectable values for the simulated depth effect, is the currently-selected simulated depth effect value; and
changing an appearance of the representation of image data in accordance with the simulated depth effect as modified by the second value.
21. A method, comprising:
at an electronic device with a display and one or more input devices: receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and
in response to receiving the request to apply the simulated depth effect to the
representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including:
distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and
distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
22. The method of claim 21, wherein displaying, on the display, the representation of image data with the simulated depth effect further includes:
distorting a third portion of the representation of image data that is a same distance from the predefined portion as the first portion and has a second depth that is different from the first depth in the first manner with a magnitude determined based on the second depth; and
distorting a fourth portion of the representation of image data that is a same distance from the predefined portion as the second portion and has the second depth in the second manner with a magnitude determined based on the second depth.
23. The method of any one of claims 21-22, wherein displaying, on the display, the
representation of image data with the simulated depth effect further includes:
distorting one or more portions of the representation of image data, that is a same distance from the predefined portion as the first portion and has the first depth, in the first manner.
24. The method of any one of claims 21-23, wherein: distorting the first portion of the representation of image data in the first manner comprises distorting the first portion based on a first distortion shape; and
distorting the second portion of the representation of image data in the second manner comprises distorting the second portion based on a second distortion shape different from the first distortion shape.
25. The method of any one of claims 21-24, wherein:
distorting the first portion of the representation of image data in the first manner comprises distorting the first portion by a first degree of distortion; and
distorting the second portion of the representation of image data in the second manner comprises distorting the second portion by a second degree of distortion that is greater than the first degree of distortion, wherein the second portion is at a greater distance from the predefined portion than the first portion.
26. The method of any one of claims 21-25, wherein receiving, via the one or more input devices, the request to apply the simulated depth effect to the representation of image data comprises:
detecting, via the one or more input devices, one or more inputs selecting a value of an image distortion parameter, wherein distorting the representation of image data is based on one or more user inputs selecting a value of an image distortion parameter.
27. The method of claim 26, wherein selecting a different value for the image distortion parameter causes a first change to the first portion of the representation of the image data and causes a second change to the second portion of the representation of the image data, wherein the first change is different from the second change and the first change and the second change both include the same type of change.
28. The method of any one of claims 21-27, wherein:
distorting the first portion in the first manner comprises blurring the first portion by a first magnitude; distorting the second portion in the second manner comprises blurring the second portion by a second magnitude;
in accordance with a determination that the first portion is a greater distance from the predefined portion than the second portion is from the predefined portion, the first magnitude is greater than the second magnitude; and
in accordance with a determination that the second portion is a greater distance from the predefined portion than the first portion is from the predefined portion, the second magnitude is greater than the first magnitude.
29. The method of any one of claims 21-28, further comprising:
prior to receiving the request to apply the simulated depth effect to the representation of image data, displaying, on the display, the representation of image data; and
while displaying the representation of image data, detecting, using the image data, a presence of the subject within the representation of image data.
30. The method of any one of claims 21-29, wherein displaying, on the display, the
representation of image data with the simulated depth effect further comprises:
distorting the first portion of the image and the second portion of the image without distorting a portion of the representation of image data corresponding to the subject.
31. The method of any one of claims 21-30, wherein:
distorting the first portion of the representation of image data includes distorting the first portion in accordance with a determination that the first portion does not correspond to the subject; and
distorting the second portion of the representation of image data includes distorting the second portion in accordance with a determination that the second portion does not correspond to the subject.
32. The method of any one of claims 21-31, further comprising: in response to receiving the request to apply the simulated depth effect to the representation of image data, identifying, based on the image data, one or more objects within the representation of image data that are associated with light-emitting objects.
33. The method of claim 32, wherein displaying, on the display, the representation of image data with the simulated depth effect further comprises:
changing an appearance of the one or more portions of the representation of image data that are associated with light-emitting objects in a third manner relative to one or more portions of the representation of image data that are not associated with light-emitting objects.
34. The method of claim 33, wherein changing the appearance of objects in the representation of image data that are associated with light-emitting objects in the third manner includes one or more of:
increasing a brightness of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects;
increasing a saturation of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects; and
increasing a size of the one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects.
35. The method of claim 33, including:
detecting, via the one or more input devices, one or more inputs changing a value of an image distortion parameter, wherein distorting the representation of image data is based on one or more user inputs selecting a value of an image distortion parameter; and
in response to detecting the one or more inputs changing the value of the image distortion parameter, changing the magnitude of change of the appearance of one or more portions of the representation of image data that are associated with light-emitting objects relative to other portions of the representation of image data that are not associated with light-emitting objects.
36. A computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for performing the method of any of claims 21-35.
37. An electronic device, comprising:
a display;
one or more input devices;
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any of claims 21-35.
38. An electronic device, comprising:
a display;
one or more input devices; and
means for performing the method of any of claims 21-35.
39. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for:
receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and
in response to receiving the request to apply the simulated depth effect to the
representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including:
distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
40. An electronic device, comprising:
a display;
one or more input devices;
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and
in response to receiving the request to apply the simulated depth effect to the
representation of image data, displaying, on the display, the representation of image data with the simulated depth effect, including:
distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and
distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
41. An electronic device, comprising:
a display;
one or more input devices;
means for receiving, via the one or more input devices, a request to apply a simulated depth effect to a representation of image data, wherein depth data for a subject within the representation of image data is available; and means, in response to receiving the request to apply the simulated depth effect to the representation of image data, for displaying, on the display, the representation of image data with the simulated depth effect, including:
distorting a first portion of the representation of image data that has a first depth in a first manner, wherein the first manner is determined based on a distance of the first portion from a predefined portion of the representation of image data; and
distorting a second portion of the representation of image data that has the first depth in a second manner that is different from the first manner, wherein the second manner is determined based on a distance of the second portion from the predefined portion of the representation of image data.
42. A method, comprising:
at an electronic device with a display and one or more sensors, including one or more cameras: while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and
in response to detecting the interference external to the electronic device:
in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and
in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
43. The method of claim 42, wherein the first criteria includes a requirement that is met when at least a first predetermined number of occurrences of detecting the external interference by the electronic device has occurred.
44. The method of claim 43, wherein the first predetermined number is dependent on the number of times the notification has previously been displayed on the electronic device.
45. The method of any one of claims 42-44, wherein displaying, on the display, the notification includes displaying the notification in accordance with a determination that less than a second predetermined number of the notifications has previously been displayed on the electronic device.
46. The method of any one of claims 42-45, wherein the change to the operation mode of the one or more cameras to reduce the impact of the external interference on the respective function of the one or more cameras includes reducing the responsiveness of one or more functions of the one or more cameras, wherein the one or more functions correspond to functions that cannot be reliably executed by the one or more cameras while the external interference is being detected by the electronic device.
47. The method of any one of claims 42-46, wherein detecting the external interference that will impair the operation of the respective function of the one or more cameras includes detecting the external interference upon displaying a user interface for the camera application on the electronic device.
48. The method of any one of claims 42-47, wherein the respective function is a focus function of the one or more cameras of the electronic device.
49. The method of any one of claims 42-48, wherein the interference is magnetic interference.
50. The method of any one of claims 42-49, wherein the interference is from an accessory of the electronic device.
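The criteria-gated notification logic of the method above can be illustrated with a short Swift sketch. The concrete thresholds, the UserDefaults-based counter, and the assumption that the required number of detections scales with the number of notifications already shown are illustrative choices only, not details taken from the specification.

import Foundation

// Illustrative sketch of the interference-notification logic described above.
// The thresholds and the UserDefaults key are assumptions.
final class CameraInterferenceMonitor {
    // Base number of detections required before the notification is considered
    // (the "first predetermined number"); assumed here to grow with the number
    // of notifications already shown.
    private let baseRequiredDetections = 3
    // Maximum number of times the notification is ever shown on the device
    // (the "second predetermined number").
    private let maxNotificationsShown = 2

    private var detectionCount = 0
    private let defaults = UserDefaults.standard
    private let shownCountKey = "cameraInterferenceNotificationShownCount"

    // Called each time external (e.g., magnetic) interference is detected while
    // the camera application's user interface is displayed. Returns true when
    // the "operation mode changed" notification should be displayed and false
    // when displaying it should be forgone.
    func handleDetectedInterference() -> Bool {
        detectionCount += 1
        let shownCount = defaults.integer(forKey: shownCountKey)

        // First predetermined number, assumed to depend on how many times the
        // notification has previously been displayed.
        let requiredDetections = baseRequiredDetections * (shownCount + 1)

        // First criteria (as assumed here): enough detections have accumulated
        // and fewer than the second predetermined number of notifications have
        // previously been displayed.
        let firstCriteriaSatisfied =
            detectionCount >= requiredDetections && shownCount < maxNotificationsShown

        guard firstCriteriaSatisfied else { return false }

        defaults.set(shownCount + 1, forKey: shownCountKey)
        return true
    }
}

In use, handleDetectedInterference() would be invoked each time the sensors report interference while the camera user interface is displayed, and the notification would be presented only when it returns true.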
51. A computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more sensors, including one or more cameras, the one or more programs including instructions for performing the method of any of claims 42-50.
52. An electronic device, comprising:
a display;
one or more sensors, including one or more cameras;
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any of claims 42-50.
53. An electronic device, comprising:
a display;
one or more sensors, including one or more cameras; and
means for performing the method of any of claims 42-50.
54. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device with a display and one or more sensors, including one or more cameras, the one or more programs including instructions for:
while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and
in response to detecting the interference external to the electronic device:
in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and
in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
55. An electronic device, comprising:
a display;
one or more sensors, including one or more cameras;
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
while displaying, on the display, a user interface of a camera application, detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and
in response to detecting the interference external to the electronic device:
in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and
in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
56. An electronic device, comprising:
a display;
one or more sensors, including one or more cameras;
means, while displaying, on the display, a user interface of a camera application, for detecting, via the one or more sensors, external interference that will impair operation of a respective function of the one or more cameras; and
means, in response to detecting the interference external to the electronic device, for:
in accordance with a determination that a first criteria has been satisfied, displaying, on the display, a notification indicating that an operation mode of the one or more cameras has been changed to reduce an impact of the external interference on the respective function of the one or more cameras; and
in accordance with a determination that the first criteria has not been satisfied, forgoing displaying, on the display, the notification indicating that the operation mode of the one or more cameras has been changed.
PCT/US2019/049101 2018-09-11 2019-08-30 User interfaces for simulated depth effects WO2020055613A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
KR1020217006145A KR102534596B1 (en) 2018-09-11 2019-08-30 User Interfaces for Simulated Depth Effects
JP2021510849A JP7090210B2 (en) 2018-09-11 2019-08-30 User interface for simulated depth effects
EP19769316.1A EP3827334A1 (en) 2018-09-11 2019-08-30 User interfaces for simulated depth effects
AU2019338180A AU2019338180B2 (en) 2018-09-11 2019-08-30 User interfaces for simulated depth effects
CN201980056883.9A CN112654956A (en) 2018-09-11 2019-08-30 User interface for simulating depth effects
KR1020237016569A KR20230071201A (en) 2018-09-11 2019-08-30 User interfaces for simulated depth effects
JP2022095182A JP7450664B2 (en) 2018-09-11 2022-06-13 User interface for simulated depth effects
AU2022228121A AU2022228121B2 (en) 2018-09-11 2022-09-07 User interfaces for simulated depth effects

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201862729926P 2018-09-11 2018-09-11
US62/729,926 2018-09-11
DKPA201870623A DK201870623A1 (en) 2018-09-11 2018-09-24 User interfaces for simulated depth effects
DKPA201870623 2018-09-24
US16/144,629 US11468625B2 (en) 2018-09-11 2018-09-27 User interfaces for simulated depth effects
US16/144,629 2018-09-27

Publications (1)

Publication Number Publication Date
WO2020055613A1 2020-03-19

Family

ID=69777821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/049101 WO2020055613A1 (en) 2018-09-11 2019-08-30 User interfaces for simulated depth effects

Country Status (1)

Country Link
WO (1) WO2020055613A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281285A (en) * 2021-07-14 2022-04-05 海信视像科技股份有限公司 Display device and display method for stably presenting depth data
JP7385052B2 (en) 2020-09-30 2023-11-21 北京字跳▲網▼絡技▲術▼有限公司 Photography methods, equipment, electronic equipment and storage media
US11893668B2 (en) 2021-03-31 2024-02-06 Leica Camera Ag Imaging system and method for generating a final digital image via applying a profile to image information
US11956528B2 (en) 2020-09-30 2024-04-09 Beijing Zitiao Network Technology Co., Ltd. Shooting method using target control, electronic device, and storage medium

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3859005A (en) 1973-08-13 1975-01-07 Albert L Huebner Erosion reduction in wet turbines
US4826405A (en) 1985-10-15 1989-05-02 Aeroquip Corporation Fan blade fabrication system
US6323846B1 (en) 1998-01-26 2001-11-27 University Of Delaware Method and apparatus for integrating manual input
US20020015024A1 (en) 1998-01-26 2002-02-07 University Of Delaware Method and apparatus for integrating manual input
US20060017692A1 (en) 2000-10-02 2006-01-26 Wehrenberg Paul J Methods and apparatuses for operating a portable device based on an accelerometer
US6677932B1 (en) 2001-01-28 2004-01-13 Finger Works, Inc. System and method for recognizing touch typing under limited tactile feedback conditions
US6570557B1 (en) 2001-02-10 2003-05-27 Finger Works, Inc. Multi-touch system and method for emulating modifier keys via fingertip chords
US20050190059A1 (en) 2004-03-01 2005-09-01 Apple Computer, Inc. Acceleration-based theft detection system for portable electronic devices
US7657849B2 (en) 2005-12-23 2010-02-02 Apple Inc. Unlocking a device by performing gestures on an unlock image
US20130165186A1 (en) * 2011-12-27 2013-06-27 Lg Electronics Inc. Mobile terminal and controlling method thereof
WO2013169849A2 (en) 2012-05-09 2013-11-14 Industries Llc Yknots Device, method, and graphical user interface for displaying user interface objects corresponding to an application
WO2014105276A1 (en) 2012-12-29 2014-07-03 Yknots Industries Llc Device, method, and graphical user interface for transitioning between touch input to display output relationships
US20160026371A1 (en) * 2014-07-23 2016-01-28 Adobe Systems Incorporated Touch-based user interface control tiles

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CARLOS HERNÁNDEZ: "Google AI Blog: Lens Blur in the new Google Camera app", GOOGLE AI BLOG, 16 April 2014 (2014-04-16), XP055632707, Retrieved from the Internet <URL:https://ai.googleblog.com/2014/04/lens-blur-in-new-google-camera-app.html> [retrieved on 20191016] *

Similar Documents

Publication Publication Date Title
AU2022228121B2 (en) User interfaces for simulated depth effects
US11895391B2 (en) Capturing and displaying images with multiple focal planes
JP7247390B2 (en) user interface camera effect
US11669985B2 (en) Displaying and editing images with depth information
US20230401032A1 (en) Audio assisted enrollment
DK179635B1 (en) USER INTERFACE FOR CAMERA EFFECTS
US11921998B2 (en) Editing features of an avatar
US11363071B2 (en) User interfaces for managing a local network
US11670144B2 (en) User interfaces for indicating distance
WO2020055613A1 (en) User interfaces for simulated depth effects
US20240080543A1 (en) User interfaces for camera management
KR20240005977A (en) User interfaces for wide angle video conference

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19769316

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217006145

Country of ref document: KR

Kind code of ref document: A

Ref document number: 2021510849

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019769316

Country of ref document: EP

Effective date: 20210226

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019338180

Country of ref document: AU

Date of ref document: 20190830

Kind code of ref document: A