CN117421087A - Time-dependent user interface - Google Patents

Time-dependent user interface

Info

Publication number
CN117421087A
Authority
CN
China
Prior art keywords
user interface
displaying
displayed
computer system
media item
Prior art date
Legal status
Pending
Application number
CN202311634654.5A
Other languages
Chinese (zh)
Inventor
K·W·陈
G·M·阿戈诺利
E·查奥
G·R·克拉克
A·P·克莱默
D·P·恩迪科特
A·古兹曼
K·T·豪沃斯
P·佩奇
A·W·罗戈斯基
D·A·希蒙
A·苏扎多斯桑托斯
W·A·索伦帝诺三世
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from US 17/738,940 (granted as US 11,921,992 B2)
Application filed by Apple Inc
Publication of CN117421087A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a time-dependent user interface. The present disclosure relates generally to methods and user interfaces for managing dial user interfaces. In some embodiments, methods and user interfaces for managing dials based on depth data of previously captured media items are described. In some embodiments, methods and user interfaces for managing a clock face based on geographic data are described. In some embodiments, methods and user interfaces for managing a clock face based on state information of a computer system are described. In some embodiments, methods and user interfaces related to time management are described. In some embodiments, methods and user interfaces for editing a user interface based on depth data of previously captured media items are described.

Description

Time-dependent user interface
The present application is a divisional application of the patent application with application number 202280026198.3, filed May 13, 2022, and entitled "Time-dependent user interface".
Cross Reference to Related Applications
The present application claims priority from: U.S. application Ser. No. 17/738,940, entitled "USER INTERFACES RELATED TO TIME", filed May 6, 2022; U.S. provisional application Ser. No. 63/197,447, entitled "USER INTERFACES RELATED TO TIME", filed in June 2021; and U.S. provisional application Ser. No. 63/188,801, entitled "USER INTERFACES RELATED TO TIME", filed May 14, 2021, the entire contents of each of which are hereby incorporated by reference.
Technical Field
The present disclosure relates generally to computer user interfaces, and more particularly to techniques for managing dials.
Background
Smart watch devices and other personal electronic devices allow a user to manipulate the appearance of a dial. The user can select from various options to manage the appearance of the dial.
Disclosure of Invention
However, some techniques for managing dials using electronic devices are cumbersome and inefficient. For example, some prior art techniques use complex and time-consuming user interfaces that may require multiple key presses or keystrokes. The prior art requires more time than necessary, which wastes user time and device energy. This latter consideration is particularly important in battery-powered devices.
Thus, the present technology provides faster, more efficient methods and interfaces for electronic devices to manage dials. Such methods and interfaces optionally supplement or replace other methods for managing dials. Such methods and interfaces reduce the cognitive burden on the user and result in a more efficient human-machine interface. For battery-powered computing devices, such methods and interfaces conserve power and increase the time interval between battery charges.
According to some embodiments, a method is described. The method is performed at a computer system in communication with a display generation component and one or more input devices. The method comprises: receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and responsive to receiving the input, displaying a user interface via the display generating component, wherein displaying the user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on the context of the computer system.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generating component and one or more input devices, the one or more programs comprising instructions for: receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and responsive to receiving the input, displaying a user interface via the display generating component, wherein displaying the user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on the context of the computer system.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generating component and one or more input devices, the one or more programs comprising instructions for: receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and responsive to receiving the input, displaying a user interface via the display generating component, wherein displaying the user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on the context of the computer system.
According to some embodiments, a computer system is described. The computer system includes: one or more processors, wherein the computer system is in communication with the display generation component and the one or more input devices; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and responsive to receiving the input, displaying a user interface via the display generating component, wherein displaying the user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on the context of the computer system.
According to some embodiments, a computer system is described. The computer system is in communication with the display generation component and one or more input devices. The computer system includes: means for receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and means for displaying a user interface via the display generating component in response to receiving the input, wherein displaying the user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on the context of the computer system.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more input devices, the one or more programs including instructions for: receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and displaying a user interface via the display generating component in response to receiving the input, wherein displaying the user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on the context of the computer system.
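To make the layered rendering of the foregoing embodiments concrete, the following is a minimal SwiftUI sketch, not the claimed implementation: the system text is drawn in front of the media item's background element and behind its depth-segmented foreground element. The asset names mediaBackground and mediaForeground are hypothetical placeholders for layers produced by a prior segmentation step.

    import SwiftUI

    // Minimal sketch: system text rendered between two depth-segmented layers.
    struct DepthFaceView: View {
        let systemText: String   // content chosen from device context, e.g. the time

        var body: some View {
            ZStack {
                Image("mediaBackground")   // background element, drawn first
                    .resizable()
                    .scaledToFill()
                Text(systemText)           // system text: in front of the background
                    .font(.system(size: 64, weight: .bold))
                    .foregroundStyle(.white)
                Image("mediaForeground")   // foreground element, drawn last, so it
                    .resizable()           // partially occludes the system text
                    .scaledToFill()
            }
        }
    }

Because later children of a ZStack render on top of earlier ones, this ordering is what places the system text between the background and foreground elements.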
According to some embodiments, a method is described. The method is performed at a computer system in communication with a display generation component and one or more input devices. The method comprises: receiving, via the one or more input devices, a request to display a clock face; and in response to receiving the request to display the clock face, displaying, via the display generating component, a clock face including names of one or more different cities, the displaying including simultaneously displaying: a current time indication for a current time zone associated with the computer system; and the names of the one or more different cities, wherein the one or more different cities comprise a first city, and displaying the names of the one or more cities comprises displaying the first city name, wherein: in accordance with a determination that the computer system is associated with a first time zone, displaying the first city name in text at a first location in the clock face, the text oriented such that the bottoms of the letters in the first city name are closer to the current time indication than the tops of the letters in the first city name; and in accordance with a determination that the computer system is associated with a second time zone different from the first time zone, displaying the first city name in text at a second location in the clock face, the text oriented such that the tops of the letters in the first city name are closer to the current time indication than the bottoms of the letters in the first city name.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generating component and one or more input devices, the one or more programs comprising instructions for: receiving, via the one or more input devices, a request to display a clock face; and in response to receiving the request to display the clock face, displaying, via the display generating component, a clock face including names of one or more different cities, the displaying including simultaneously displaying: a current time indication for a current time zone associated with the computer system; and the names of the one or more different cities, wherein the one or more different cities comprise a first city, and displaying the names of the one or more cities comprises displaying the first city name, wherein: in accordance with a determination that the computer system is associated with a first time zone, displaying the first city name in text at a first location in the clock face, the text oriented such that the bottoms of the letters in the first city name are closer to the current time indication than the tops of the letters in the first city name; and in accordance with a determination that the computer system is associated with a second time zone different from the first time zone, displaying the first city name in text at a second location in the clock face, the text oriented such that the tops of the letters in the first city name are closer to the current time indication than the bottoms of the letters in the first city name.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generating component and one or more input devices, the one or more programs comprising instructions for: receiving, via the one or more input devices, a request to display a clock face; and in response to receiving the request to display the clock face, displaying, via the display generating component, a clock face including names of one or more different cities, the displaying including simultaneously displaying: a current time indication for a current time zone associated with the computer system; and the names of the one or more different cities, wherein the one or more different cities comprise a first city, and displaying the names of the one or more cities comprises displaying the first city name, wherein: in accordance with a determination that the computer system is associated with a first time zone, displaying the first city name in text at a first location in the clock face, the text oriented such that the bottoms of the letters in the first city name are closer to the current time indication than the tops of the letters in the first city name; and in accordance with a determination that the computer system is associated with a second time zone different from the first time zone, displaying the first city name in text at a second location in the clock face, the text oriented such that the tops of the letters in the first city name are closer to the current time indication than the bottoms of the letters in the first city name.
According to some embodiments, a computer system is described. The computer system includes: one or more processors, wherein the computer system is in communication with a display generation component and one or more input devices; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, via the one or more input devices, a request to display a clock face; and in response to receiving the request to display the clock face, displaying, via the display generating component, a clock face including names of one or more different cities, the displaying including simultaneously displaying: a current time indication for a current time zone associated with the computer system; and the names of the one or more different cities, wherein the one or more different cities comprise a first city, and displaying the names of the one or more cities comprises displaying the first city name, wherein: in accordance with a determination that the computer system is associated with a first time zone, displaying the first city name in text at a first location in the clock face, the text oriented such that the bottoms of the letters in the first city name are closer to the current time indication than the tops of the letters in the first city name; and in accordance with a determination that the computer system is associated with a second time zone different from the first time zone, displaying the first city name in text at a second location in the clock face, the text oriented such that the tops of the letters in the first city name are closer to the current time indication than the bottoms of the letters in the first city name.
According to some embodiments, a computer system is described. The computer system is in communication with a display generation component and one or more input devices. The computer system includes: means for receiving, via the one or more input devices, a request to display a clock face; and means for displaying, via the display generating component, a clock face including names of one or more different cities in response to receiving the request to display the clock face, the displaying including simultaneously displaying: a current time indication for a current time zone associated with the computer system; and the names of the one or more different cities, wherein the one or more different cities comprise a first city, and displaying the names of the one or more cities comprises displaying the first city name, wherein: in accordance with a determination that the computer system is associated with a first time zone, displaying the first city name in text at a first location in the clock face, the text oriented such that the bottoms of the letters in the first city name are closer to the current time indication than the tops of the letters in the first city name; and in accordance with a determination that the computer system is associated with a second time zone different from the first time zone, displaying the first city name in text at a second location in the clock face, the text oriented such that the tops of the letters in the first city name are closer to the current time indication than the bottoms of the letters in the first city name.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more input devices, the one or more programs including instructions for: receiving, via the one or more input devices, a request to display a clock face; and in response to receiving the request to display the clock face, displaying, via the display generating component, a clock face including names of one or more different cities, the displaying including simultaneously displaying: a current time indication for a current time zone associated with the computer system; and the names of the one or more different cities, wherein the one or more different cities comprise a first city, and displaying the names of the one or more cities comprises displaying the first city name, wherein: in accordance with a determination that the computer system is associated with a first time zone, displaying the first city name in text at a first location in the clock face, the text oriented such that the bottoms of the letters in the first city name are closer to the current time indication than the tops of the letters in the first city name; and in accordance with a determination that the computer system is associated with a second time zone different from the first time zone, displaying the first city name in text at a second location in the clock face, the text oriented such that the tops of the letters in the first city name are closer to the current time indication than the bottoms of the letters in the first city name.
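The position-dependent text orientation described above can be modeled with simple arithmetic. The sketch below is plain Swift under assumed conventions (0 degrees at the bottom of the dial nearest the current time indication, one hour of offset corresponding to 15 degrees on a 24-hour ring); all names and values are illustrative, not taken from the embodiments.

    import Foundation

    struct CityLabel {
        let name: String
        let angleDegrees: Double   // position around the 24-hour ring
        let flipped: Bool          // true when the tops of the letters face the center
    }

    func layoutCityLabels(utcOffsets: [String: Int], deviceOffset: Int) -> [CityLabel] {
        utcOffsets.map { city, offset in
            // Hours ahead of the device's time zone, wrapped into 0..23.
            let hoursAhead = ((offset - deviceOffset) % 24 + 24) % 24
            let angle = Double(hoursAhead) * 15.0
            // Labels in the top half of the dial are flipped to stay readable;
            // this is why a given city's orientation changes with the device's zone.
            let flipped = angle > 90 && angle < 270
            return CityLabel(name: city, angleDegrees: angle, flipped: flipped)
        }
    }

    // Example: viewed from UTC-8, Tokyo (UTC+9) is 17 hours ahead, at 255 degrees,
    // so its label is flipped; changing deviceOffset rotates the whole layout.
    let labels = layoutCityLabels(utcOffsets: ["Tokyo": 9, "London": 0], deviceOffset: -8)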
According to some embodiments, a method is described. The method is performed at a computer system in communication with a display generation component. The method comprises: displaying, via the display generating component, a first user interface comprising an analog dial while the computer system is in a first state, wherein displaying the analog dial while the computer system is in the first state includes simultaneously displaying: a time indicator on the analog dial indicating a current time; and hour indicators displayed around the analog dial, wherein the hour indicators include a first hour indicator displayed at a first size and a second hour indicator displayed at a second size different from the first size; after displaying the analog dial with the first hour indicator displayed at the first size and the second hour indicator displayed at the second size, detecting a request to display the analog dial while the computer system is in a second state different from the first state; and in response to detecting the change in state of the computer system, displaying the first user interface updated to reflect the second state, including displaying the analog dial, wherein displaying the analog dial while the computer system is in the second state includes simultaneously displaying: a time indicator on the analog dial indicating the current time; and hour indicators displayed around the analog dial, wherein the hour indicators include the first hour indicator displayed at a third size different from the first size and the second hour indicator displayed at a fourth size different from the second size.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generation component, the one or more programs comprising instructions for: displaying, via the display generating component, a first user interface comprising an analog dial while the computer system is in a first state, wherein displaying the analog dial while the computer system is in the first state includes simultaneously displaying: a time indicator on the analog dial indicating a current time; and hour indicators displayed around the analog dial, wherein the hour indicators include a first hour indicator displayed at a first size and a second hour indicator displayed at a second size different from the first size; after displaying the analog dial with the first hour indicator displayed at the first size and the second hour indicator displayed at the second size, detecting a request to display the analog dial while the computer system is in a second state different from the first state; and in response to detecting the change in state of the computer system, displaying the first user interface updated to reflect the second state, including displaying the analog dial, wherein displaying the analog dial while the computer system is in the second state includes simultaneously displaying: a time indicator on the analog dial indicating the current time; and hour indicators displayed around the analog dial, wherein the hour indicators include the first hour indicator displayed at a third size different from the first size and the second hour indicator displayed at a fourth size different from the second size.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generation component, the one or more programs comprising instructions for: displaying, via the display generating component, a first user interface comprising an analog dial while the computer system is in a first state, wherein displaying the analog dial while the computer system is in the first state includes simultaneously displaying: a time indicator on the analog dial indicating a current time; and hour indicators displayed around the analog dial, wherein the hour indicators include a first hour indicator displayed at a first size and a second hour indicator displayed at a second size different from the first size; after displaying the analog dial with the first hour indicator displayed at the first size and the second hour indicator displayed at the second size, detecting a request to display the analog dial while the computer system is in a second state different from the first state; and in response to detecting the change in state of the computer system, displaying the first user interface updated to reflect the second state, including displaying the analog dial, wherein displaying the analog dial while the computer system is in the second state includes simultaneously displaying: a time indicator on the analog dial indicating the current time; and hour indicators displayed around the analog dial, wherein the hour indicators include the first hour indicator displayed at a third size different from the first size and the second hour indicator displayed at a fourth size different from the second size.
According to some embodiments, a computer system is described. The computer system includes: one or more processors, wherein the computer system is in communication with a display generation component; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display generating component, a first user interface comprising an analog dial while the computer system is in a first state, wherein displaying the analog dial while the computer system is in the first state includes simultaneously displaying: a time indicator on the analog dial indicating a current time; and hour indicators displayed around the analog dial, wherein the hour indicators include a first hour indicator displayed at a first size and a second hour indicator displayed at a second size different from the first size; after displaying the analog dial with the first hour indicator displayed at the first size and the second hour indicator displayed at the second size, detecting a request to display the analog dial while the computer system is in a second state different from the first state; and in response to detecting the change in state of the computer system, displaying the first user interface updated to reflect the second state, including displaying the analog dial, wherein displaying the analog dial while the computer system is in the second state includes simultaneously displaying: a time indicator on the analog dial indicating the current time; and hour indicators displayed around the analog dial, wherein the hour indicators include the first hour indicator displayed at a third size different from the first size and the second hour indicator displayed at a fourth size different from the second size.
According to some embodiments, a computer system is described. The computer system is in communication with a display generation component. The computer system includes: means for displaying, via the display generation component, a first user interface comprising an analog dial while the computer system is in a first state, wherein displaying the analog dial while the computer system is in the first state includes simultaneously displaying: a time indicator on the analog dial indicating a current time; and hour indicators displayed around the analog dial, wherein the hour indicators include a first hour indicator displayed at a first size and a second hour indicator displayed at a second size different from the first size; means for detecting, after displaying the analog dial with the first hour indicator displayed at the first size and the second hour indicator displayed at the second size, a request to display the analog dial while the computer system is in a second state different from the first state; and means for displaying, in response to detecting the change in state of the computer system, the first user interface updated to reflect the second state, including displaying the analog dial, wherein displaying the analog dial while the computer system is in the second state includes simultaneously displaying: a time indicator on the analog dial indicating the current time; and hour indicators displayed around the analog dial, wherein the hour indicators include the first hour indicator displayed at a third size different from the first size and the second hour indicator displayed at a fourth size different from the second size.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component, the one or more programs including instructions for: displaying, via the display generating component, a first user interface comprising an analog dial while the computer system is in a first state, wherein displaying the analog dial while the computer system is in the first state includes simultaneously displaying: a time indicator on the analog dial indicating a current time; and hour indicators displayed around the analog dial, wherein the hour indicators include a first hour indicator displayed at a first size and a second hour indicator displayed at a second size different from the first size; after displaying the analog dial with the first hour indicator displayed at the first size and the second hour indicator displayed at the second size, detecting a request to display the analog dial while the computer system is in a second state different from the first state; and in response to detecting the change in state of the computer system, displaying the first user interface updated to reflect the second state, including displaying the analog dial, wherein displaying the analog dial while the computer system is in the second state includes simultaneously displaying: a time indicator on the analog dial indicating the current time; and hour indicators displayed around the analog dial, wherein the hour indicators include the first hour indicator displayed at a third size different from the first size and the second hour indicator displayed at a fourth size different from the second size.
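A minimal sketch of the state-dependent sizing above, assuming two illustrative states and illustrative point sizes; the embodiments do not fix these states or values.

    import Foundation

    enum FaceState { case active, lowPower }   // assumed states, e.g. awake vs. always-on

    // Each hour indicator's size depends on both the hour and the current state,
    // so the marker sizes change when the computer system changes state.
    func hourMarkerSize(hour: Int, state: FaceState) -> Double {
        let emphasized = hour % 3 == 0   // e.g. the 12, 3, 6, and 9 o'clock markers
        switch state {
        case .active:   return emphasized ? 22 : 14
        case .lowPower: return emphasized ? 16 : 10
        }
    }

    let activeSizes = (1...12).map { hourMarkerSize(hour: $0, state: .active) }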
According to some embodiments, a method is described. The method is performed at a computer system in communication with a display generation component and one or more input devices including a rotatable input mechanism. The method comprises: displaying a selection user interface via the display generating component; while displaying the selection user interface, detecting rotation of the rotatable input mechanism about an axis of rotation; in response to detecting the rotation of the rotatable input mechanism, displaying a graphical indication of a selection focus, the graphical indication changing as the selection focus moves between a plurality of selectable objects; after moving the selection focus through the plurality of selectable objects, detecting a press input on the rotatable input mechanism; and in response to detecting the press input, selecting one of the plurality of selectable objects, the selecting comprising: in accordance with a determination that a first selectable object of the plurality of selectable objects has the selection focus when the press input is detected, selecting the first selectable object; and in accordance with a determination that a second selectable object of the plurality of selectable objects, different from the first selectable object, has the selection focus when the press input is detected, selecting the second selectable object.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generating component and one or more input devices comprising a rotatable input mechanism, the one or more programs comprising instructions for: displaying a selection user interface via the display generating component; while displaying the selection user interface, detecting rotation of the rotatable input mechanism about an axis of rotation; in response to detecting the rotation of the rotatable input mechanism, displaying a graphical indication of a selection focus, the graphical indication changing as the selection focus moves between a plurality of selectable objects; after moving the selection focus through the plurality of selectable objects, detecting a press input on the rotatable input mechanism; and in response to detecting the press input, selecting one of the plurality of selectable objects, the selecting comprising: in accordance with a determination that a first selectable object of the plurality of selectable objects has the selection focus when the press input is detected, selecting the first selectable object; and in accordance with a determination that a second selectable object of the plurality of selectable objects, different from the first selectable object, has the selection focus when the press input is detected, selecting the second selectable object.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generating component and one or more input devices comprising a rotatable input mechanism, the one or more programs comprising instructions for: displaying a selection user interface via the display generating component; while displaying the selection user interface, detecting rotation of the rotatable input mechanism about an axis of rotation; in response to detecting the rotation of the rotatable input mechanism, displaying a graphical indication of a selection focus, the graphical indication changing as the selection focus moves between a plurality of selectable objects; after moving the selection focus through the plurality of selectable objects, detecting a press input on the rotatable input mechanism; and in response to detecting the press input, selecting one of the plurality of selectable objects, the selecting comprising: in accordance with a determination that a first selectable object of the plurality of selectable objects has the selection focus when the press input is detected, selecting the first selectable object; and in accordance with a determination that a second selectable object of the plurality of selectable objects, different from the first selectable object, has the selection focus when the press input is detected, selecting the second selectable object.
According to some embodiments, a computer system is described. The computer system includes: one or more processors, wherein the computer system is in communication with a display generating component and one or more input devices comprising a rotatable input mechanism; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying a selection user interface via the display generating component; while displaying the selection user interface, detecting rotation of the rotatable input mechanism about an axis of rotation; in response to detecting the rotation of the rotatable input mechanism, displaying a graphical indication of a selection focus, the graphical indication changing as the selection focus moves between a plurality of selectable objects; after moving the selection focus through the plurality of selectable objects, detecting a press input on the rotatable input mechanism; and in response to detecting the press input, selecting one of the plurality of selectable objects, the selecting comprising: in accordance with a determination that a first selectable object of the plurality of selectable objects has the selection focus when the press input is detected, selecting the first selectable object; and in accordance with a determination that a second selectable object of the plurality of selectable objects, different from the first selectable object, has the selection focus when the press input is detected, selecting the second selectable object.
According to some embodiments, a computer system is described. The computer system is in communication with a display generating component and one or more input devices that include a rotatable input mechanism. The computer system includes: means for displaying a selection user interface via the display generating component; means for detecting rotation of the rotatable input mechanism about an axis of rotation while the selection user interface is displayed; means for displaying, in response to detecting the rotation of the rotatable input mechanism, a graphical indication of a selection focus, the graphical indication changing as the selection focus moves between a plurality of selectable objects; means for detecting a press input on the rotatable input mechanism after moving the selection focus through the plurality of selectable objects; and means for selecting, in response to detecting the press input, one of the plurality of selectable objects, the selecting comprising: in accordance with a determination that a first selectable object of the plurality of selectable objects has the selection focus when the press input is detected, selecting the first selectable object; and in accordance with a determination that a second selectable object of the plurality of selectable objects, different from the first selectable object, has the selection focus when the press input is detected, selecting the second selectable object.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more input devices including a rotatable input mechanism, the one or more programs including instructions for: displaying a selection user interface via the display generating component; while displaying the selection user interface, detecting rotation of the rotatable input mechanism about an axis of rotation; in response to detecting the rotation of the rotatable input mechanism, displaying a graphical indication of a selection focus, the graphical indication changing as the selection focus moves between a plurality of selectable objects; after moving the selection focus through the plurality of selectable objects, detecting a press input on the rotatable input mechanism; and in response to detecting the press input, selecting one of the plurality of selectable objects, the selecting comprising: in accordance with a determination that a first selectable object of the plurality of selectable objects has the selection focus when the press input is detected, selecting the first selectable object; and in accordance with a determination that a second selectable object of the plurality of selectable objects, different from the first selectable object, has the selection focus when the press input is detected, selecting the second selectable object.
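The crown-driven selection above reduces to a small state machine: rotation moves a selection focus across the selectable objects, and a press selects whichever object has focus at that moment. The sketch below is plain Swift with hypothetical names (FaceSelector, crownRotated, crownPressed are assumptions, not the embodiments' API).

    import Foundation

    final class FaceSelector {
        private let objects: [String]
        private(set) var focusIndex = 0

        init(objects: [String]) { self.objects = objects }

        // Called for each detected rotation increment of the rotatable input
        // mechanism; negative steps model rotation in the opposite direction.
        func crownRotated(by steps: Int) {
            let count = objects.count
            focusIndex = ((focusIndex + steps) % count + count) % count
        }

        // Called when a press input is detected: the result depends on which
        // object has the selection focus at press time.
        func crownPressed() -> String { objects[focusIndex] }
    }

    // Usage: rotating twice and pressing selects the third object in the list.
    let selector = FaceSelector(objects: ["Photos", "World Clock", "Metropolitan"])
    selector.crownRotated(by: 2)
    print(selector.crownPressed())   // "Metropolitan"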
According to some embodiments, a method performed at a computer system in communication with a display generation component and one or more input devices is described. The method comprises: detecting, via the one or more input devices, an input corresponding to a request to display an editing user interface; in response to detecting the input, displaying an editing user interface via the display generating component, wherein displaying the editing user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein: the system text is displayed in a first layer arrangement relative to the foreground element based on the depth information; and the foreground element of the media item is displayed at a first position relative to the system text; detecting a user input directed to the editing user interface; and in response to detecting the user input directed to the editing user interface: in accordance with a determination that the user input is a first type of user input, updating the system text to be displayed in a second layer arrangement relative to the foreground element segmented based on the depth information of the media item; and in accordance with a determination that the user input is a second type of user input different from the first type of user input, updating the media item such that the foreground element of the media item is displayed at a second position relative to the system text, wherein the second position is different from the first position.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for: detecting, via the one or more input devices, an input corresponding to a request to display an editing user interface; in response to detecting the input, displaying an editing user interface via the display generating component, wherein displaying the editing user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein: the system text is displayed in a first layer arrangement relative to the foreground element based on the depth information; and the foreground element of the media item is displayed at a first position relative to the system text; detecting a user input directed to the editing user interface; and in response to detecting the user input directed to the editing user interface: in accordance with a determination that the user input is a first type of user input, updating the system text to be displayed in a second layer arrangement relative to the foreground element segmented based on the depth information of the media item; and in accordance with a determination that the user input is a second type of user input different from the first type of user input, updating the media item such that the foreground element of the media item is displayed at a second position relative to the system text, wherein the second position is different from the first position.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for: detecting, via the one or more input devices, an input corresponding to a request to display an editing user interface; in response to detecting the input, displaying an editing user interface via the display generating component, wherein displaying the editing user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein: the system text is displayed in a first layer arrangement relative to the foreground element based on the depth information; and the foreground element of the media item is displayed at a first position relative to the system text; detecting a user input directed to the editing user interface; and in response to detecting the user input directed to the editing user interface: in accordance with a determination that the user input is a first type of user input, updating the system text to be displayed in a second layer arrangement relative to the foreground element segmented based on the depth information of the media item; and in accordance with a determination that the user input is a second type of user input different from the first type of user input, updating the media item such that the foreground element of the media item is displayed at a second position relative to the system text, wherein the second position is different from the first position.
According to some embodiments, a computer system configured to communicate with a display generation component and one or more input devices is described. The computer system includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: detecting, via the one or more input devices, an input corresponding to a request to display an editing user interface; in response to detecting the input, displaying an editing user interface via the display generating component, wherein displaying the editing user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein: the system text is displayed in a first layer arrangement relative to the foreground element based on the depth information; and the foreground element of the media item is displayed at a first position relative to the system text; detecting a user input directed to the editing user interface; and in response to detecting the user input directed to the editing user interface: in accordance with a determination that the user input is a first type of user input, updating the system text to be displayed in a second layer arrangement relative to the foreground element segmented based on the depth information of the media item; and in accordance with a determination that the user input is a second type of user input different from the first type of user input, updating the media item such that the foreground element of the media item is displayed at a second position relative to the system text, wherein the second position is different from the first position.
According to some embodiments, a computer system configured to communicate with a display generation component and one or more input devices is described. The computer system includes: means for detecting, via the one or more input devices, an input corresponding to a request to display an editing user interface; means for displaying an editing user interface via the display generating component in response to detecting the input, wherein displaying the editing user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein: the system text is displayed in a first layer arrangement relative to the foreground element based on the depth information; and the foreground element of the media item is displayed at a first position relative to the system text; means for detecting a user input directed to the editing user interface; and means for, in response to detecting the user input directed to the editing user interface: in accordance with a determination that the user input is a first type of user input, updating the system text to be displayed in a second layer arrangement relative to the foreground element segmented based on the depth information of the media item; and in accordance with a determination that the user input is a second type of user input different from the first type of user input, updating the media item such that the foreground element of the media item is displayed at a second position relative to the system text, wherein the second position is different from the first position.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more input devices. The one or more programs include instructions for: detecting, via the one or more input devices, an input corresponding to a request to display an editing user interface; in response to detecting the input, displaying an editing user interface via the display generating component, wherein displaying the editing user interface includes simultaneously displaying: a media item including a background element and a foreground element segmented from the background element based on depth information; and system text, wherein: the system text is displayed in a first layer arrangement relative to the foreground element based on the depth information; and the foreground element of the media item is displayed at a first position relative to the system text; detecting a user input directed to the editing user interface; and in response to detecting the user input directed to the editing user interface: in accordance with a determination that the user input is a first type of user input, updating the system text to be displayed in a second layer arrangement relative to the foreground element segmented based on the depth information of the media item; and in accordance with a determination that the user input is a second type of user input different from the first type of user input, updating the media item such that the foreground element of the media item is displayed at a second position relative to the system text, wherein the second position is different from the first position.
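A minimal sketch of the two-branch editing logic above. Mapping the first input type to a tap-like toggle and the second to a drag-like pan is an assumption for illustration; only the branch structure comes from the embodiments.

    import Foundation

    enum EditInput {
        case toggleLayering                  // first type of user input (e.g. a tap)
        case pan(dx: Double, dy: Double)     // second type of user input (e.g. a drag)
    }

    enum LayerArrangement { case textBehindForeground, textInFrontOfForeground }

    struct EditState {
        var arrangement: LayerArrangement = .textBehindForeground
        var foregroundOffset = (x: 0.0, y: 0.0)

        mutating func apply(_ input: EditInput) {
            switch input {
            case .toggleLayering:
                // First type: change the layer arrangement of the system text
                // relative to the depth-segmented foreground element.
                arrangement = (arrangement == .textBehindForeground)
                    ? .textInFrontOfForeground : .textBehindForeground
            case let .pan(dx, dy):
                // Second type: move the foreground element relative to the system text.
                foregroundOffset = (x: foregroundOffset.x + dx, y: foregroundOffset.y + dy)
            }
        }
    }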
Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Thus, a faster, more efficient method and interface for managing dials is provided for devices, thereby improving the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace other methods for managing dials.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
Fig. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device in accordance with some embodiments.
Fig. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 5A illustrates a personal electronic device according to some embodiments.
Fig. 5B is a block diagram illustrating a personal electronic device, according to some embodiments.
Fig. 5C-5D illustrate exemplary components of a personal electronic device having a touch sensitive display and an intensity sensor, according to some embodiments.
Fig. 5E-5H illustrate exemplary components and user interfaces of a personal electronic device according to some embodiments.
Fig. 6A-6U illustrate an exemplary user interface for managing dials based on depth data of previously captured media items.
Fig. 7 is a flow chart illustrating a method for managing dials based on depth data of previously captured media items.
Fig. 8A-8M illustrate an exemplary user interface for managing a clock face based on geographic data.
Fig. 9 is a flow chart illustrating a method for managing a clock face based on geographic data.
Fig. 10A-10W illustrate an exemplary user interface for managing a clock face based on state information of a computer system.
FIG. 11 is a flow chart illustrating a method for managing a clock face based on state information of a computer system.
Fig. 12A-12W illustrate exemplary user interfaces related to time management.
Fig. 13 is a flow chart illustrating a method associated with a user interface for managing time.
Fig. 14A-14R illustrate an exemplary user interface for editing a user interface based on depth data of previously captured media items.
FIG. 15 is a flow chart illustrating a method associated with editing a user interface based on depth data of a previously captured media item.
Detailed Description
The following description sets forth exemplary methods, parameters, and the like. However, it should be recognized that such description is not intended as a limitation on the scope of the present disclosure, but is instead provided as a description of exemplary embodiments.
There is a need for electronic devices that provide efficient methods and interfaces for managing a clock face. For example, there is a need for devices that enable an intuitive and efficient method for displaying dials based on previously captured media items that include depth data. As another example, there is a need for devices that enable an intuitive and efficient method for displaying dials that include information based on geographic location data. As another example, there is a need for devices that enable an intuitive and efficient method for a dial that provides an indication of the current time in a striking manner. As another example, there is a need for devices that enable adjustment and modification of the background and/or complications of a dial in an intuitive and efficient manner. Such techniques can alleviate the cognitive burden on users who manage a clock face, thereby improving productivity. Further, such techniques can reduce processor power and battery power that would otherwise be wasted on redundant user inputs.
Fig. 1A-1B, 2, 3, 4A-4B, and 5A-5H below provide a description of exemplary devices for performing the techniques described herein for managing a clock face.
Fig. 6A-6U illustrate an exemplary user interface for managing dials based on depth data of previously captured media items. Fig. 7 is a flow chart illustrating a method of managing dials based on depth data of previously captured media items, according to some embodiments. The user interfaces in fig. 6A-6U are used to illustrate the processes described below, including the process in fig. 7.
Fig. 8A-8M illustrate an exemplary user interface for managing a clock face based on geographic data. Fig. 9 is a flow chart illustrating a method of managing a clock face based on geographic data, according to some embodiments. The user interfaces in fig. 8A-8M are used to illustrate the processes described below, including the process in fig. 9.
Fig. 10A-10W illustrate an exemplary user interface for managing a clock face based on state information of a computer system. FIG. 11 is a flow chart illustrating a method of managing a clock face based on state information of a computer system, according to some embodiments. The user interfaces in fig. 10A to 10W are used to illustrate the processes described below, including the process in fig. 11.
Fig. 12A-12W illustrate exemplary user interfaces related to time management. Fig. 13 is a flow chart illustrating a method associated with a user interface for managing time. The user interfaces in fig. 12A-12T are used to illustrate the processes described below, including the process in fig. 13.
Fig. 14A-14R illustrate an exemplary user interface for editing a user interface based on depth data of previously captured media items. Fig. 15 is a flow chart illustrating a method associated with editing a user interface based on depth data of a previously captured media item, according to some embodiments. The user interfaces in fig. 14A-14R are used to illustrate the processes described below, including the process in fig. 15.
Furthermore, in methods described herein in which one or more steps are contingent upon one or more conditions having been met, it should be understood that the described method can be repeated in multiple repetitions so that, over the course of the repetitions, all of the conditions upon which steps in the method are contingent have been satisfied in different repetitions of the method. For example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, a person of ordinary skill would appreciate that the claimed steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been satisfied. This, however, is not required of system or computer-readable-medium claims, where the system or computer-readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions and thus is capable of determining whether the contingency has or has not been satisfied without explicitly repeating the steps of the method until all of the conditions upon which steps in the method are contingent have been satisfied. A person having ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer-readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the contingent steps have been performed.
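As a minimal sketch of the repetition argument above, assuming a condition that varies across repetitions (all names here are hypothetical, not from the disclosure):

```swift
import Foundation

// A contingent method: perform the first step if the condition is satisfied,
// otherwise perform the second step. Repeating the method until the condition
// has been both satisfied and not satisfied (in no particular order) ensures
// that every contingent step has been performed. Assumes the condition varies
// across repetitions; a constant condition would repeat indefinitely.
func runUntilAllContingentStepsPerformed(condition: () -> Bool,
                                         firstStep: () -> Void,
                                         secondStep: () -> Void) {
    var firstPerformed = false
    var secondPerformed = false
    while !(firstPerformed && secondPerformed) {
        if condition() {
            firstStep()
            firstPerformed = true
        } else {
            secondStep()
            secondPerformed = true
        }
    }
}
```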
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. In some embodiments, these terms are used to distinguish one element from another element. For example, a first touch may be named a second touch and similarly a second touch may be named a first touch without departing from the scope of the various described embodiments. In some embodiments, the first touch and the second touch are two separate references to the same touch. In some implementations, both the first touch and the second touch are touches, but they are not the same touch.
The terminology used in the description of the various illustrated embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" is optionally interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are optionally used. It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with a display generation component. The display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection. In some embodiments, the display generation component is integrated with the computer system. In some embodiments, the display generation component is separate from the computer system. As used herein, "displaying" content includes causing display of the content (e.g., video data rendered or decoded by display controller 156) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content.
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications such as one or more of the following: drawing applications, presentation applications, word processing applications, website creation applications, disk editing applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, fitness support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
The various applications executing on the device optionally use at least one generic physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or changed for different applications and/or within the respective applications. In this way, the common physical architecture of the devices (such as the touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Fig. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes called a "touch screen" for convenience and is sometimes known as or called a "touch-sensitive display system". Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in this specification and the claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of the contact on the touch-sensitive surface (e.g., finger contact), or to an alternative to the force or pressure of the contact on the touch-sensitive surface (surrogate). The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., weighted average) to determine an estimated contact force. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch sensitive surface. Alternatively, the size of the contact area and/or its variation detected on the touch-sensitive surface, the capacitance of the touch-sensitive surface and/or its variation in the vicinity of the contact and/or the resistance of the touch-sensitive surface and/or its variation in the vicinity of the contact are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, surrogate measurements of contact force or pressure are directly used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to surrogate measurements). In some implementations, surrogate measurements of contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as an attribute of the user input, allowing the user to access additional device functions that are not otherwise accessible to the user on a smaller sized device of limited real estate for displaying affordances and/or receiving user input (e.g., via a touch-sensitive display, touch-sensitive surface, or physical/mechanical control, such as a knob or button).
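A rough Swift sketch of the weighted-average estimation described above; the sensor model, proximity weighting, and 0-255 quantization range are illustrative assumptions rather than details from this disclosure.

```swift
import Foundation

// One reading from a hypothetical force sensor beneath the touch-sensitive surface.
struct ForceSample {
    let force: Double              // raw reading, arbitrary units
    let distanceToContact: Double  // distance from this sensor to the contact point
}

// Combine readings from multiple force sensors into an estimated contact force
// using a proximity-weighted average (closer sensors weigh more).
func estimatedIntensity(from samples: [ForceSample]) -> Double {
    guard !samples.isEmpty else { return 0 }
    let weights = samples.map { 1.0 / (1.0 + $0.distanceToContact) }
    let weightedSum = zip(samples, weights).reduce(0.0) { $0 + $1.0.force * $1.1 }
    return weightedSum / weights.reduce(0, +)
}

// Quantize the estimate onto a scale with at least 256 distinct values,
// matching the range of intensity values described above.
func quantizedIntensity(_ intensity: Double, maxIntensity: Double) -> Int {
    let clamped = max(0, min(intensity, maxIntensity))
    return Int((clamped / maxIntensity) * 255.0)
}
```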
As used in this specification and in the claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a centroid of the device, to be detected by a user with a user's feel. For example, in the case where the device or component of the device is in contact with a touch-sensitive surface of the user (e.g., a finger, palm, or other portion of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or touch pad) is optionally interpreted by a user as a "press click" or "click-down" of a physically actuated button. In some cases, the user will feel a tactile sensation, such as "press click" or "click down", even when the physical actuation button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement is not moved. As another example, movement of the touch-sensitive surface may optionally be interpreted or sensed by a user as "roughness" of the touch-sensitive surface, even when the smoothness of the touch-sensitive surface is unchanged. While such interpretation of touches by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touches are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click down," "click up," "roughness"), unless stated otherwise, the haptic output generated corresponds to a physical displacement of the device or component thereof that would generate that sensory perception of a typical (or ordinary) user.
It should be understood that the device 100 is merely one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
Memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripheral interface 118 may be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs, such as computer programs (e.g., including instructions), and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data. In some embodiments, peripheral interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the Internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and other devices. RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to speaker 111. The speaker 111 converts electrical signals into sound waves that are audible to humans. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuit 110 and removable audio input/output peripherals such as output-only headphones or a headset having both an output (e.g., a monaural or binaural) and an input (e.g., a microphone).
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripheral interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, depth camera controller 169, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some embodiments, input controller 160 is optionally coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with one or more input devices. In some embodiments, the one or more input devices include a touch-sensitive surface (e.g., a touchpad, as part of a touch-sensitive display). In some embodiments, the one or more input devices include one or more camera sensors (e.g., one or more optical sensors 164 and/or one or more depth camera sensors 175), such as for tracking a user's gestures (e.g., hand gestures and/or air gestures) as input. In some embodiments, the one or more input devices are integrated with the computer system. In some embodiments, the one or more input devices are separate from the computer system. In some embodiments, an air gesture is a gesture that is detected without the user touching an input element that is part of the device (or independently of an input element that is part of the device) and is based on detected motion of a portion of the user's body through the air, including motion of the user's body relative to an absolute reference (e.g., an angle of the user's arm relative to the ground or a distance of the user's hand relative to the ground), motion relative to another portion of the user's body (e.g., movement of a hand of the user relative to a shoulder of the user, movement of one hand of the user relative to another hand of the user, and/or movement of a finger of the user relative to another finger or portion of a hand of the user), and/or absolute motion of a portion of the user's body (e.g., a flick gesture that includes movement of the hand in a predetermined pose by a predetermined amount and/or speed, or a shake gesture that includes a predetermined speed or amount of rotation of a portion of the user's body).
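To make the last criterion concrete (absolute motion of a body part by a predetermined amount and/or speed), here is a minimal Swift sketch that classifies a tracked hand path as a flick-style air gesture; the tracking structure and both threshold values are assumptions, not figures from this disclosure.

```swift
import Foundation

// A tracked position of a body part (e.g., a hand) in space at a moment in time.
struct TrackedPoint {
    let x: Double
    let y: Double
    let z: Double
    let time: TimeInterval
}

// Classify a tracked path as a flick-style air gesture when the body part moved
// by at least a predetermined amount and at least a predetermined speed.
func isFlickGesture(_ path: [TrackedPoint],
                    minimumDistance: Double = 0.15,    // meters (assumption)
                    minimumSpeed: Double = 0.5) -> Bool {  // meters/second (assumption)
    guard let first = path.first, let last = path.last, last.time > first.time else {
        return false
    }
    let dx = last.x - first.x
    let dy = last.y - first.y
    let dz = last.z - first.z
    let distance = (dx * dx + dy * dy + dz * dz).squareRoot()
    let speed = distance / (last.time - first.time)
    return distance >= minimumDistance && speed >= minimumSpeed
}
```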
A quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005 (i.e., U.S. Patent No. 7,657,849), which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons is optionally user-customizable. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
The touch sensitive display 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch screen 112 and/or transmits electrical signals to touch screen 112. Touch screen 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some embodiments, some or all of the visual output optionally corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that receives input from a user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or interruption of the contact) on touch screen 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a user's finger.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California.
The touch sensitive display in some implementations of touch screen 112 is optionally similar to the multi-touch sensitive touch pad described in the following U.S. patents: 6,323,846 (Westerman et al), 6,570,557 (Westerman et al) and/or 6,677,932 (Westerman et al) and/or U.S. patent publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, while touch sensitive touchpads do not provide visual output.
Touch-sensitive displays in some embodiments of touch screen 112 are described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, "Multi-Functional Hand-Held Device," filed March 3, 2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some implementations, the touch screen has a video resolution of about 160 dpi. The user optionally uses any suitable object or appendage, such as a stylus, finger, or the like, to make contact with touch screen 112. In some embodiments, the user interface is designed to work primarily through finger-based contact and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, the device 100 optionally includes a touch pad for activating or deactivating a particular function in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike the touch screen, does not display visual output. The touch pad is optionally a touch sensitive surface separate from the touch screen 112 or an extension of the touch sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in portable devices.
Device 100 optionally also includes one or more optical sensors 164. Fig. 1A shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106. Optical sensor 164 optionally includes a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistor. Optical sensor 164 receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user's image is optionally obtained for video conferencing while the user views the other video conference participants on the touch screen display. In some embodiments, the position of optical sensor 164 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor 164 is used along with the touch screen display for both video conferencing and still and/or video image acquisition.
The device 100 optionally further includes one or more depth camera sensors 175. FIG. 1A shows a depth camera sensor coupled to a depth camera controller 169 in the I/O subsystem 106. The depth camera sensor 175 receives data from the environment to create a three-dimensional model of objects (e.g., faces) within the scene from a point of view (e.g., depth camera sensor). In some implementations, in conjunction with the imaging module 143 (also referred to as a camera module), the depth camera sensor 175 is optionally used to determine a depth map of different portions of the image captured by the imaging module 143. In some embodiments, a depth camera sensor is located at the front of the device 100 such that a user image with depth information is optionally acquired for a video conference while the user views other video conference participants on a touch screen display, and a self-photograph with depth map data is captured. In some embodiments, the depth camera sensor 175 is located at the back of the device, or at the back and front of the device 100. In some implementations, the position of the depth camera sensor 175 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that the depth camera sensor 175 is used with a touch screen display for both video conferencing and still image and/or video image acquisition.
In some implementations, a depth map (e.g., a depth map image) includes information (e.g., values) related to a distance of an object in a scene from a viewpoint (e.g., camera, optical sensor, depth camera sensor). In one embodiment of the depth map, each depth pixel defines a position in the Z-axis of the viewpoint where its corresponding two-dimensional pixel is located. In some implementations, the depth map is composed of pixels, where each pixel is defined by a value (e.g., 0-255). For example, a value of "0" indicates a pixel located farthest from a viewpoint (e.g., camera, optical sensor, depth camera sensor) in a "three-dimensional" scene, and a value of "255" indicates a pixel located closest to the viewpoint in the "three-dimensional" scene. In other embodiments, the depth map represents a distance between an object in the scene and a plane of the viewpoint. In some implementations, the depth map includes information about the relative depths of various features of the object of interest in the field of view of the depth camera (e.g., the relative depths of the eyes, nose, mouth, ears of the user's face). In some embodiments, the depth map includes information that enables the device to determine a contour of the object of interest in the z-direction.
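A minimal Swift sketch of the depth-map convention just described (255 nearest the viewpoint, 0 farthest), together with one plausible way such values could be thresholded to segment a foreground element from a background; the threshold value is an assumption.

```swift
import Foundation

// A depth map following the convention described above: one value per pixel,
// where 255 is closest to the viewpoint and 0 is farthest.
struct DepthMap {
    let width: Int
    let height: Int
    let pixels: [UInt8]   // row-major; pixels.count == width * height

    func depth(x: Int, y: Int) -> UInt8 {
        pixels[y * width + x]
    }
}

// Build a foreground mask: true where a pixel is at least as close as `threshold`.
// A mask like this is one plausible way a foreground element could be segmented
// from the background of a media item based on depth information.
func foregroundMask(of map: DepthMap, threshold: UInt8 = 128) -> [Bool] {
    map.pixels.map { $0 >= threshold }
}
```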
Device 100 optionally also includes one or more contact intensity sensors 165. Fig. 1A shows a contact intensity sensor coupled to intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
Device 100 optionally also includes one or more proximity sensors 166. Fig. 1A shows proximity sensor 166 coupled to peripheral interface 118. Alternately, proximity sensor 166 is optionally coupled to input controller 160 in I/O subsystem 106. Proximity sensor 166 optionally performs as described in the following U.S. patent application Ser. Nos.: 11/241,839, "Proximity Detector In Handheld Device"; 11/240,788, "Proximity Detector In Handheld Device"; 11/620,702, "Using Ambient Light Sensor To Augment Proximity Sensor Output"; 11/586,862, "Automated Response To And Sensing Of User Activity In Portable Devices"; and 11/638,251, "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
The device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a haptic output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. The tactile output generator 167 optionally includes one or more electroacoustic devices such as speakers or other audio components; and/or electromechanical devices for converting energy into linear motion such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating means (e.g., means for converting an electrical signal into a tactile output on a device). The contact intensity sensor 165 receives haptic feedback generation instructions from the haptic feedback module 133 and generates a haptic output on the device 100 that can be perceived by a user of the device 100. In some embodiments, at least one tactile output generator is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., inward/outward of the surface of device 100) or laterally (e.g., backward and forward in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the rear of the device 100, opposite the touch screen display 112 located on the front of the device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternately, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in the following U.S. Patent Publication Nos.: 20050190059, "Acceleration-based Theft Detection System for Portable Electronic Devices," and 20060017692, "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated by reference herein in their entirety. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer 168, a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application program (or instruction set) 136. Furthermore, in some embodiments, memory 102 (fig. 1A) or 370 (fig. 3) stores device/global internal state 157, as shown in fig. 1A and 3. The device/global internal state 157 includes one or more of the following: an active application state indicating which applications (if any) are currently active; display status, indicating what applications, views, or other information occupy various areas of the touch screen display 112; sensor status, including information obtained from the various sensors of the device and the input control device 116; and location information relating to the device location and/or pose.
Operating system 126 (e.g., darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware components and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multitouch"/multiple-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
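A minimal Swift sketch of deriving speed and velocity from a series of contact data, as described above; the sample type and the two-sample differencing are illustrative assumptions.

```swift
import Foundation

// One entry in the series of contact data received from the touch-sensitive surface.
struct ContactSample {
    let x: Double
    let y: Double
    let timestamp: TimeInterval
}

// Velocity (magnitude and direction) of the point of contact, estimated from
// the two most recent samples.
func velocity(of samples: [ContactSample]) -> (dx: Double, dy: Double)? {
    guard samples.count >= 2 else { return nil }
    let previous = samples[samples.count - 2]
    let current = samples[samples.count - 1]
    let dt = current.timestamp - previous.timestamp
    guard dt > 0 else { return nil }
    return ((current.x - previous.x) / dt, (current.y - previous.y) / dt)
}

// Speed (magnitude only) is the length of the velocity vector.
func speed(of samples: [ContactSample]) -> Double? {
    guard let v = velocity(of: samples) else { return nil }
    return (v.dx * v.dx + v.dy * v.dy).squareRoot()
}
```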
In some implementations, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether the user has "clicked" on an icon). In some implementations, at least a subset of the intensity thresholds are determined according to software parameters (e.g., the intensity thresholds are not determined by activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of the device 100). For example, without changing the touchpad or touch screen display hardware, the mouse "click" threshold of the touchpad or touch screen may be set to any of a wide range of predefined thresholds. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
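The following sketch shows an intensity threshold kept as an adjustable software parameter rather than a property of the hardware; the default value and clamping range are assumptions carried over from the earlier 0-255 intensity example.

```swift
import Foundation

// The intensity threshold is an adjustable software parameter, not a property
// of the physical hardware.
struct IntensitySettings {
    private(set) var clickThreshold: Int = 128

    mutating func adjustClickThreshold(to newValue: Int) {
        clickThreshold = min(max(newValue, 0), 255)
    }
}

// Decide whether a contact's intensity counts as a "click" under the current
// settings; changing the threshold requires no change to the hardware.
func isClick(intensity: Int, settings: IntensitySettings) -> Bool {
    intensity >= settings.clickThreshold
}
```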
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, gestures are optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger press event, and then detecting a finger lift (lift off) event at the same location (or substantially the same location) as the finger press event (e.g., at the location of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then detecting a finger-up (lift-off) event.
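A minimal Swift sketch of pattern-based gesture detection as described above: a tap is a finger-down followed by a finger-up at substantially the same location, while a swipe includes dragging events in between; the distance tolerance is an assumption.

```swift
import Foundation

// Events in the contact pattern, as described above.
enum ContactEvent {
    case fingerDown(x: Double, y: Double)
    case fingerDrag(x: Double, y: Double)
    case fingerUp(x: Double, y: Double)
}

enum Gesture {
    case tap, swipe, none
}

// A tap is a finger-down followed by a finger-up at (substantially) the same
// location with no dragging; anything else that starts down and ends up is
// treated as a swipe in this simplified classifier.
func classify(_ events: [ContactEvent], tolerance: Double = 10) -> Gesture {
    guard case .fingerDown(let x0, let y0)? = events.first,
          case .fingerUp(let x1, let y1)? = events.last else {
        return .none
    }
    let distance = ((x1 - x0) * (x1 - x0) + (y1 - y0) * (y1 - y0)).squareRoot()
    let dragged = events.contains {
        if case .fingerDrag = $0 { return true } else { return false }
    }
    if !dragged && distance <= tolerance { return .tap }
    return .swipe
}
```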
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attribute) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including but not limited to text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for designating graphics to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
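A rough sketch of the code-based lookup just described, in which each graphic is assigned a code and the module resolves codes plus coordinate data into output for the display controller; the registry and the string draw-command representation are illustrative assumptions.

```swift
import Foundation

// A graphic and a registry keyed by the code assigned to it.
struct Graphic {
    let name: String
}

struct GraphicsRegistry {
    private var graphicsByCode: [Int: Graphic] = [:]

    mutating func register(_ graphic: Graphic, code: Int) {
        graphicsByCode[code] = graphic
    }

    // Resolve the codes received from an application, together with coordinate
    // data, into screen output for the display controller.
    func drawCommands(for requests: [(code: Int, x: Double, y: Double)]) -> [String] {
        requests.compactMap { request in
            guard let graphic = graphicsByCode[request.code] else { return nil }
            return "draw \(graphic.name) at (\(request.x), \(request.y))"
        }
    }
}
```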
Haptic feedback module 133 includes various software components for generating instructions used by haptic output generator 167 to generate haptic output at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services, such as weather gadgets, local page gadgets, and map/navigation gadgets).
The application 136 optionally includes the following modules (or sets of instructions) or a subset or superset thereof:
contact module 137 (sometimes referred to as an address book or contact list);
a telephone module 138;
video conferencing module 139;
email client module 140;
an Instant Messaging (IM) module 141;
a fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a video player module;
a music player module;
browser module 147;
Calendar module 148;
a gadget module 149, optionally comprising one or more of: weather gadgets 149-1, stock gadgets 149-2, calculator gadget 149-3, alarm gadget 149-4, dictionary gadget 149-5, and other gadgets obtained by the user, and user-created gadgets 149-6;
a gadget creator module 150 for forming a user-created gadget 149-6;
search module 151;
a video and music player module 152 that incorporates the video player module and the music player module;
a note module 153;
map module 154; and/or
An online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is optionally used to manage an address book or contact list (e.g., in application internal state 192 of contacts module 137 stored in memory 102 or memory 370), including: adding one or more names to the address book; deleting the name from the address book; associating a telephone number, email address, physical address, or other information with the name; associating the image with the name; classifying and classifying names; providing a telephone number or email address to initiate and/or facilitate communications through telephone 138, video conferencing module 139, email 140, or IM 141; etc.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is optionally used to input a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contact module 137, modify the entered telephone number, dial the corresponding telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As described above, wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured by the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, instant message module 141 includes executable instructions for: inputting a character sequence corresponding to an instant message, modifying previously inputted characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for phone-based instant messages or using XMPP, SIMPLE, or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant message optionally includes graphics, photographs, audio files, video files, and/or other attachments supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions for creating a workout (e.g., with time, distance, and/or calorie burn targets); communicate with a fitness sensor (exercise device); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for exercise; and displaying, storing and transmitting the fitness data.
In conjunction with touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or videos (including video streams) and storing them in the memory 102, modifying features of still images or videos, or deleting still images or videos from the memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, tagging, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet according to user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget module 149 is a mini-application that is optionally downloaded and used by a user (e.g., weather gadget 149-1, stock gadget 149-2, calculator gadget 149-3, alarm gadget 149-4, and dictionary gadget 149-5) or created by the user (e.g., user-created gadget 149-6). In some embodiments, a gadget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a gadget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! gadgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget creator module 150 is optionally used by a user to create gadgets (e.g., to transform user-specified portions of a web page into gadgets).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuit 110, speaker 111, RF circuit 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player such as an iPod (trademark of Apple inc.).
In conjunction with the touch screen 112, the display controller 156, the contact/movement module 130, the graphics module 132, and the text input module 134, the notes module 153 includes executable instructions for creating and managing notes, backlog, and the like according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally configured to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to shops and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats such as H.264. In some embodiments, instant messaging module 141, rather than email client module 140, is used to send a link to a particular online video. Additional description of online video applications can be found in U.S. Provisional Patent Application Ser. No. 60/936,562, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. Patent Application Ser. No. 11/968,067, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, the contents of both of which are hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs (such as computer programs (e.g., including instructions)), procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. For example, the video player module is optionally combined with the music player module into a single module (e.g., video and music player module 152 in fig. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which the operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
The predefined set of functions performed exclusively through a touch screen and/or a touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates the device 100 from any user interface displayed on the device 100 to a main menu, home menu, or root menu. In such implementations, a "menu button" is implemented using the touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 102 (FIG. 1A) or memory 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and corresponding applications 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
Event sorter 170 receives the event information and determines the application 136-1, and the application view 191 of application 136-1, to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, the application 136-1 includes an application internal state 192 that indicates one or more current application views that are displayed on the touch-sensitive display 112 when the application is active or executing. In some embodiments, the device/global internal state 157 is used by event sorter 170 to determine which application(s) are currently active, and the application internal state 192 is used by event sorter 170 to determine the application view 191 to which to deliver event information.
In some implementations, the application internal state 192 includes additional information, such as one or more of the following: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information being displayed or ready for display by the application 136-1, a state queue for enabling the user to return to a previous state or view of the application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on the touch sensitive display 112 as part of a multi-touch gesture). The peripheral interface 118 transmits information it receives from the I/O subsystem 106 or sensors, such as a proximity sensor 166, one or more accelerometers 168, and/or microphone 113 (via audio circuitry 110). The information received by the peripheral interface 118 from the I/O subsystem 106 includes information from the touch-sensitive display 112 or touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, the peripheral interface 118 transmits event information. In other embodiments, the peripheral interface 118 transmits event information only if there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or receiving an input exceeding a predetermined duration).
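As an editorial sketch of the significance filtering described above (in Swift; the `PeripheralsInterface` and `EventInfo` types and both thresholds are illustrative assumptions, not actual API), the periodic transmit step might look like:

```swift
import Foundation

// Hypothetical stand-ins for peripherals interface 118 and its event information.
struct EventInfo {
    let intensity: Double        // input signal level
    let duration: TimeInterval   // how long the input lasted
}

final class PeripheralsInterface {
    // Assumed thresholds below which an input is treated as noise.
    private let noiseThreshold = 0.1
    private let minimumDuration: TimeInterval = 0.02
    private var pending: [EventInfo] = []

    func receive(_ event: EventInfo) {
        pending.append(event)
    }

    // Called at predetermined intervals by an event monitor; only significant
    // events (above the noise threshold and long enough) are transmitted.
    func transmitSignificantEvents(to monitor: (EventInfo) -> Void) {
        for event in pending
        where event.intensity > noiseThreshold && event.duration > minimumDuration {
            monitor(event)
        }
        pending.removeAll()
    }
}
```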
In some implementations, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
When the touch sensitive display 112 displays more than one view, the hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view is made up of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is optionally called the hit view, and the set of events recognized as proper inputs is optionally determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should process sub-events. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in a sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
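The hit-view rule described above (the lowest view whose area contains the initial sub-event) can be sketched as a recursive search. This is a hedged illustration: `View` is a stand-in type, not an actual framework class, and frames are assumed to share one coordinate space.

```swift
import CoreGraphics

final class View {
    let frame: CGRect
    var subviews: [View] = []

    init(frame: CGRect) { self.frame = frame }

    // Returns the lowest view in the hierarchy containing the sub-event location.
    func hitView(for point: CGPoint) -> View? {
        guard frame.contains(point) else { return nil }
        for subview in subviews.reversed() {      // front-most children first
            if let hit = subview.hitView(for: point) { return hit }
        }
        return self  // no deeper view contains the point: this is the hit view
    }
}

// Usage: the hit view for the initial touch then receives all related sub-events.
let root = View(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
let button = View(frame: CGRect(x: 10, y: 10, width: 100, height: 44))
root.subviews.append(button)
let hit = root.hitView(for: CGPoint(x: 20, y: 20))  // returns `button`
```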
The active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, the active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively engaged views, and therefore determines that all actively engaged views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events are entirely confined to the area associated with one particular view, views higher in the hierarchy remain actively engaged views.
The event dispatcher module 174 dispatches event information to an event recognizer (e.g., event recognizer 180). In embodiments that include an active event recognizer determination module 173, the event dispatcher module 174 delivers event information to the event recognizers determined by the active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue that is retrieved by the corresponding event receiver 182.
In some embodiments, the operating system 126 includes event sorter 170. Alternatively, the application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or part of another module stored in the memory 102, such as the contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180; typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit or a higher-level object from which application 136-1 inherits methods and other properties. In some implementations, a respective event handler 190 includes one or more of the following: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Additionally, in some implementations, one or more of the data updater 176, the object updater 177, and the GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event based on the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, such as a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When a sub-event concerns motion of a touch, the event information optionally also includes the speed and direction of the sub-event. In some embodiments, an event includes rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation of the device (also referred to as the device pose).
The event comparator 184 compares the event information with predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), such as event 1 (187-1), event 2 (187-2), and others. In some implementations, sub-events in an event (e.g., 187-1 and/or 187-2) include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch on the displayed object for a predetermined length of time (touch start), a first lift-off for a predetermined length of time (touch end), a second touch on the displayed object for a predetermined length of time (touch start), and a second lift-off for a predetermined length of time (touch end). In another example, the definition of event 2 (187-2) is a drag on a displayed object. The drag, for example, comprises a touch (or contact) on the displayed object for a predetermined length of time, movement of the touch across the touch-sensitive display 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
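An event definition of this kind can be sketched as a small state machine over sub-events. The following Swift sketch is illustrative only: the `SubEvent` cases, the 0.3-second phase limit, and the simplified ordering check are assumptions, not the disclosed implementation.

```swift
import Foundation

enum SubEvent {
    case touchBegan(at: Date)
    case touchEnded(at: Date)
    case touchMoved
    case touchCancelled
}

final class DoubleTapRecognizer {
    enum State { case possible, recognized, failed }
    private(set) var state: State = .possible

    private let maxPhase: TimeInterval = 0.3   // assumed per-phase time limit
    private var phaseCount = 0
    private var lastTimestamp: Date?

    func consume(_ subEvent: SubEvent) {
        guard state == .possible else { return }   // sequence already decided
        switch subEvent {
        case .touchBegan(let t), .touchEnded(let t):
            if let last = lastTimestamp, t.timeIntervalSince(last) > maxPhase {
                state = .failed    // a phase exceeded the predetermined length of time
                return
            }
            lastTimestamp = t
            phaseCount += 1
        case .touchMoved, .touchCancelled:
            state = .failed        // movement or cancellation does not match a double tap
            return
        }
        if phaseCount == 4 { state = .recognized }  // begin, end, begin, end
    }
}
```

Once such a recognizer enters its failed state, it ignores subsequent sub-events of the gesture, mirroring the behavior described for event recognizer 180 below.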
In some implementations, the event definitions 186 include definitions of events for respective user interface objects. In some implementations, the event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event (187) further includes a delay action that delays delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to an event type of the event recognizer.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any of the events in the event definition 186, the respective event recognizer 180 enters an event impossible, event failed, or event end state after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively engaged event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact or are able to interact with each other. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to different levels in a view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are recognized, the respective event recognizer 180 activates an event handler 190 associated with the event. In some implementations, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending of) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some implementations, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about the sub-event without activating the event handler. Instead, the sub-event delivery instructions deliver the event information to an event handler associated with the sub-event sequence or to an actively engaged view. Event handlers associated with the sequence of sub-events or with the actively engaged views receive the event information and perform a predetermined process.
In some embodiments, the data updater 176 creates and updates data used in the application 136-1. For example, the data updater 176 updates a telephone number used in the contact module 137 or stores a video file used in the video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, the object updater 177 creates a new user interface object or updates the location of the user interface object. GUI updater 178 updates the GUI. For example, the GUI updater 178 prepares the display information and sends the display information to the graphics module 132 for display on a touch-sensitive display.
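The division of labor among the three updaters might be sketched as follows; the protocol names mirror the numbered modules but are illustrative stand-ins, not actual interfaces.

```swift
protocol DataUpdating   { func updateData() }     // e.g., store a phone number or video file
protocol ObjectUpdating { func updateObjects() }  // e.g., create or reposition a UI object
protocol GUIUpdating    { func updateGUI() }      // e.g., prepare and send display information

struct EventHandlerSketch {
    let data: DataUpdating
    let objects: ObjectUpdating
    let gui: GUIUpdating

    // The handler updates application-internal state, then refreshes the display.
    func handle() {
        data.updateData()
        objects.updateObjects()
        gui.updateGUI()
    }
}
```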
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, the data updater 176, the object updater 177, and the GUI updater 178 are included in a single module of the respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs that utilize an input device to operate the multifunction device 100, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements on a touchpad, such as taps, drags, scrolls, and the like; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events that define an event to be recognized.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within a User Interface (UI) 200. In this and other embodiments described below, a user can select one or more of these graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figures) or one or more styluses 203 (not drawn to scale in the figures). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up and/or down), and/or rolling of a finger (right to left, left to right, up and/or down) that has made contact with the device 100. In some implementations or in some cases, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In some embodiments, the device 100 includes a touch screen 112, menu button 204, push button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. Push button 206 is optionally used to turn the device on/off by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, the device 100 also accepts verbal input through the microphone 113 for activating or deactivating some functions. The device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch screen 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of the device 100.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). The device 300 generally includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 with a display 340, which is typically a touch screen display. The I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 for generating tactile outputs on the device 300 (e.g., similar to the tactile output generator 167 described above with reference to fig. 1A), and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to the contact intensity sensor 165 described above with reference to fig. 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures, or a subset thereof, similar to those stored in memory 102 of portable multifunction device 100 (fig. 1A). Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (fig. 1A) optionally does not store these modules.
Each of the above elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above-described modules corresponds to a set of instructions for performing the above-described functions. The above-described modules or computer programs (e.g., sets of instructions) need not be implemented as separate software programs (such as computer programs (e.g., including instructions)), procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces optionally implemented on, for example, portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
Signal strength indicators 402 for wireless communications such as cellular signals and Wi-Fi signals;
time 404;
Bluetooth indicator 405;
battery status indicator 406;
tray 408 with icons for commonly used applications, such as:
an icon 416 labeled "phone" of the o phone module 138, the icon 416 optionally including an indicator 414 of the number of missed calls or voice mails;
an icon 418 labeled "mail" of the o email client module 140, the icon 418 optionally including an indicator 410 of the number of unread emails;
icon 420 labeled "browser" of the omicron browser module 147; and
an icon 422 labeled "iPod" of the omicron video and music player module 152 (also known as iPod (trademark of Apple inc.) module 152); and
icons of other applications, such as:
icon 424 labeled "message" of omicron IM module 141;
icon 426 labeled "calendar" of calendar module 148;
icon 428 labeled "photo" of image management module 144;
an icon 430 labeled "camera" of the omicron camera module 143;
icon 432 labeled "online video" of online video module 155;
Icon 434 labeled "stock market" for the o stock market gadget 149-2;
icon 436 labeled "map" of the omicron map module 154;
icon 438 labeled "weather" for the o weather gadget 149-1;
icon 440 labeled "clock" for the o alarm clock gadget 149-4;
icon 442 labeled "fitness support" of omicron fitness support module 142;
icon 444 labeled "note" of the omicron note module 153; and
an icon 446 labeled "set" for a set application or module that provides access to the settings of device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, the icon 422 of the video and music player module 152 is optionally labeled "music" or "music player". Other labels are optionally used for various application icons. In some embodiments, the label for a respective application icon includes the name of the application corresponding to the respective application icon. In some embodiments, the label for a particular application icon is distinct from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 of fig. 3) separate from a display 450 (e.g., touch screen display 112). The device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 359) for detecting the intensity of the contact on the touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of the device 300.
While some of the examples below will be given with reference to inputs on touch screen display 112 (where the touch sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch sensitive surface separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to the primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). According to these embodiments, the device detects contact (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at a location corresponding to a respective location on the display (e.g., 460 corresponds to 468 and 462 corresponds to 470 in fig. 4B). In this way, when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separated from the display (e.g., 450 in FIG. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462 and movement thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
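Under the assumption that the mapping between the two primary axes is a simple per-axis normalization and rescale (the text does not specify the exact transform), a sketch of translating a contact location on a separate touch-sensitive surface to the corresponding display location could be:

```swift
import CoreGraphics

// Illustrative only: maps a contact on a separate touch-sensitive surface to
// the corresponding display location by normalizing along each primary axis.
func displayLocation(for touch: CGPoint,
                     surface: CGRect,
                     display: CGRect) -> CGPoint {
    let nx = (touch.x - surface.minX) / surface.width
    let ny = (touch.y - surface.minY) / surface.height
    return CGPoint(x: display.minX + nx * display.width,
                   y: display.minY + ny * display.height)
}

// A contact at the midpoint of the surface maps to the midpoint of the display.
let surface = CGRect(x: 0, y: 0, width: 600, height: 400)
let display = CGRect(x: 0, y: 0, width: 1200, height: 800)
let mapped = displayLocation(for: CGPoint(x: 300, y: 200),
                             surface: surface, display: display)
// mapped == CGPoint(x: 600, y: 400)
```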
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of these finger inputs are replaced with input from another input device (e.g., a mouse-based input or a stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., instead of a contact), followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is optionally replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or that a mouse and finger contacts are optionally used simultaneously.
Fig. 5A illustrates an exemplary personal electronic device 500. The device 500 includes a body 502. In some embodiments, device 500 may include some or all of the features described with respect to devices 100 and 300 (e.g., fig. 1A-4B). In some implementations, the device 500 has a touch sensitive display 504, hereinafter referred to as a touch screen 504. In addition to or in lieu of touch screen 504, device 500 has a display and a touch-sensitive surface. As with devices 100 and 300, in some implementations, touch screen 504 (or touch-sensitive surface) optionally includes one or more intensity sensors for detecting the intensity of an applied contact (e.g., touch). One or more intensity sensors of the touch screen 504 (or touch sensitive surface) may provide output data representative of the intensity of the touch. The user interface of the device 500 may respond to touches based on the intensity of the touches, meaning that touches of different intensities may invoke different user interface operations on the device 500.
Exemplary techniques for detecting and processing touch intensity are found, for example, in the following related applications: International Patent Application Serial No. PCT/US2013/040061, filed May 8, 2013, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," published as WIPO Patent Publication No. WO/2013/169849; and International Patent Application Serial No. PCT/US2013/069483, filed November 11, 2013, entitled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," published as WIPO Patent Publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 500 has one or more input mechanisms 506 and 508. The input mechanisms 506 and 508 (if included) may be in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, the device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow the device 500 to be attached to, for example, a hat, glasses, earrings, a necklace, a shirt, a jacket, a bracelet, a watch strap, a chain, trousers, a belt, a shoe, a purse, a backpack, or the like. These attachment mechanisms allow the user to wear the device 500.
Fig. 5B depicts an exemplary personal electronic device 500. In some embodiments, the device 500 may include some or all of the components described with reference to figs. 1A, 1B, and 3. The device 500 has a bus 512 that operatively couples an I/O section 514 with one or more computer processors 516 and memory 518. The I/O section 514 may be connected to a display 504, which may have a touch-sensitive component 522 and, optionally, an intensity sensor 524 (e.g., a contact intensity sensor). In addition, the I/O section 514 may be connected to a communication unit 530 for receiving application and operating system data using Wi-Fi, Bluetooth, Near Field Communication (NFC), cellular, and/or other wireless communication technologies. The device 500 may include input mechanisms 506 and/or 508. For example, the input mechanism 506 is optionally a rotatable input device, or a depressible and rotatable input device. In some examples, the input mechanism 508 is optionally a button.
In some examples, the input mechanism 508 is optionally a microphone. Personal electronic device 500 optionally includes various sensors, such as a GPS sensor 532, an accelerometer 534, an orientation sensor 540 (e.g., compass), a gyroscope 536, a motion sensor 538, and/or combinations thereof, all of which are operatively connected to I/O section 514.
The memory 518 of the personal electronic device 500 may include one or more non-transitory computer-readable storage media for storing computer-executable instructions that, when executed by the one or more computer processors 516, may, for example, cause the computer processors to perform the techniques described below, including processes 700, 900, 1100, and 1300 (figs. 7, 9, 11, and 13). A computer-readable storage medium may be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium may include, but is not limited to, magnetic storage devices, optical storage devices, and/or semiconductor storage devices. Examples of such storage devices include magnetic disks, optical disks based on CD, DVD, or Blu-ray technology, and persistent solid-state memories such as flash memory and solid-state drives. The personal electronic device 500 is not limited to the components and configuration of fig. 5B, but may include other components or additional components in a variety of configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object that is optionally displayed on a display screen of device 100, 300, and/or 500 (fig. 1A, 3, and 5A-5H). For example, an image (e.g., an icon), a button, and text (e.g., a hyperlink) optionally each constitute an affordance.
As used herein, the term "focus selector" refers to an input element for indicating the current portion of a user interface with which a user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when the cursor detects an input (e.g., presses an input) on a touch-sensitive surface (e.g., touch pad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) above a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted according to the detected input. In some implementations including a touch screen display (e.g., touch sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, the contact detected on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by a contact) is detected on the touch screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without a corresponding movement of the cursor or movement of contact on the touch screen display (e.g., by moving the focus from one button to another using a tab key or arrow key); in these implementations, the focus selector moves according to movement of the focus between different areas of the user interface. Regardless of the particular form that the focus selector takes, the focus selector is typically controlled by the user in order to deliver a user interface element (or contact on the touch screen display) that is interactive with the user of the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touch screen), the position of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (rather than other user interface elements shown on the device display).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to the characteristic of a contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined period of time (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detection of contact, before or after detection of lift-off of contact, before or after detection of start of movement of contact, before or after detection of end of contact, and/or before or after detection of decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: maximum value of intensity of contact, average value of intensity of contact, value at first 10% of intensity of contact, half maximum value of intensity of contact, 90% maximum value of intensity of contact, etc. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, contact of the feature strength that does not exceed the first threshold results in a first operation, contact of the feature strength that exceeds the first strength threshold but does not exceed the second strength threshold results in a second operation, and contact of the feature strength that exceeds the second threshold results in a third operation. In some implementations, a comparison between the feature strength and one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform or forgo performing the respective operations) rather than for determining whether to perform the first or second operations.
Attention is now directed to embodiments of a user interface ("UI") and associated processes implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Fig. 6A-6U illustrate an exemplary user interface for managing dials based on depth data of previously captured media items, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 7.
Fig. 6A shows computer system 600 with display 602 turned off. Computer system 600 includes a rotatable and depressible input mechanism 604. In some embodiments, computer system 600 optionally includes one or more features of device 100, device 300, or device 500. In some embodiments, computer system 600 is a tablet, phone, laptop, desktop, camera, or the like. In some implementations, the inputs described below are optionally replaced with alternative inputs, such as a press input and/or a rotational input received via the rotatable and depressible input mechanism 604.
In some implementations, the computer system 600 wakes up and displays the watch user interface 606 in response to inputs such as a tap input, a wrist lift input, a press input received via the rotatable and depressible input mechanism 604, and/or a rotation input received via the rotatable and depressible input mechanism 604.
In FIG. 6B, computer system 600 displays a watch user interface 606 that includes background elements 606a, system text 606B, foreground elements 606c, and complex function blocks 606d1. In one implementation, the foreground element 606c and the background element 606a correspond to portions of a portrait media item (e.g., a picture) that are divided into at least two layers based on depth data of the media item such that the foreground element 606c is based on a first layer of the media item and the background element 606a is based on a second layer of the media item that is different from the first layer of the media item. In some implementations, the computer system 600 partitions the media item into a first layer and a second layer based on determining that the first layer of the media item and the second layer of the media item are different distances from the camera sensor when capturing the media item with depth data.
At fig. 6B, the watch user interface 606 is based on an image that includes depth data indicating that the foreground element 606c was closer to the camera sensor than the background element 606a when the image was captured. The computer system 600 generates and displays the watch user interface 606 based on the image depth data by stacking the elements of the watch user interface 606 to convey a simulated depth of field. For example, fig. 6B shows that the background element 606a is below (e.g., is overlaid by) the system text 606b, which is below the foreground element 606c, which is below the complex function block 606d1. Thus, the elements included in the watch user interface 606 are displayed in a simulated stack such that each element is displayed at a different simulated (e.g., virtual) distance from the display 602. For example, in fig. 6B, the complex function block 606d1 has the smallest simulated distance from the display 602, while the background element 606a has the largest simulated distance from the display 602. In some implementations, the computer system 600 creates and/or generates the watch user interface 606 without user input specifying the order in which the elements of an image with depth data are layered or virtually stacked. In some implementations, the elements of the watch user interface are displayed in a different arrangement or virtual stacking order. For example, in some embodiments, computer system 600 generates and/or displays a watch user interface based on an image with depth data, where complex function blocks (e.g., 606d1, 640d, etc.) are displayed under (e.g., behind) foreground elements (e.g., 606c, 640c, etc.), such as in fig. 6Q described below. In some embodiments, computer system 600 generates and/or displays a watch user interface based on an image with depth data, wherein system text (e.g., 606b, 642b, etc.) is displayed over (e.g., in front of) a foreground element (e.g., 606c, 642c, etc.), such as in fig. 6R described below.
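The two steps described above, splitting a media item into layers by depth and drawing the layers in a simulated stack, can be sketched as follows. All types, the single-threshold split, and the drawing order are illustrative assumptions.

```swift
import CoreGraphics

struct DepthSample {
    let point: CGPoint
    let depth: Float          // smaller values = closer to the camera sensor
}

// Step 1: partition a media item into foreground and background layers by
// comparing each sample's depth to an assumed threshold.
func splitLayers(_ samples: [DepthSample], threshold: Float)
    -> (foreground: [DepthSample], background: [DepthSample]) {
    var fg: [DepthSample] = []
    var bg: [DepthSample] = []
    for s in samples {
        if s.depth < threshold { fg.append(s) } else { bg.append(s) }
    }
    return (fg, bg)
}

// Step 2: draw farthest simulated distance first, nearest last, matching
// fig. 6B's stack: background, then system text, then foreground, then complication.
enum FaceElement { case background, systemText, foreground, complication }

func renderStack(draw: (FaceElement) -> Void) {
    for element in [FaceElement.background, .systemText, .foreground, .complication] {
        draw(element)   // later draws overlay earlier ones
    }
}
```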
In fig. 6B, system text 606B includes a lock icon 606B1 indicating that computer system 600 is currently in a locked state. In some embodiments, the features of computer system 600 are limited when computer system 600 is in a locked state. The system text 606b also includes a date 606b2 indicating a current date (e.g., month, day, and/or year) and a current time 606b3 indicating a time of day (e.g., hour, minute, and/or second).
Fig. 6C shows computer system 600 displaying watch user interface 606 with a simulated parallax visual effect. In fig. 6C, computer system 600 is a wristwatch being worn on wrist 608. In fig. 6C, the relative positions of the elements of the watch user interface 606, including the background element 606a, the system text 606b, and the foreground element 606c, are adjusted based on the angle of rotation of the wrist 608. For example, the top portion of fig. 6C shows that when the wrist 608 is at a first angle of rotation, the foreground element 606c is displayed at an angle that substantially obscures the system text 606b. However, the bottom portion of fig. 6C shows that when the wrist 608 is at a second angle of rotation different from the first angle of rotation, the relative positions of the elements of the watch user interface 606 are adjusted based on the change in angle of the wrist 608 such that the foreground element 606c does not obscure the system text 606b. In some examples, the magnitude of the change in position of an element of the watch user interface 606 is based on the magnitude of the rotation of the user's wrist.
In some implementations, the relative positions of elements of the watch user interface 606 are limited to a range, and wrist position changes beyond a threshold amount (e.g., beyond a threshold angle) will not result in elements of the watch user interface 606 being updated beyond the threshold amount. In some implementations, the magnitude of the simulated parallax visual effect is not as pronounced as shown in fig. 6C. In some embodiments, some elements of the watch user interface are affected by wrist rotation (e.g., move based on wrist rotation), while other elements remain in a fixed position within the watch user interface 606. For example, in some implementations, the simulated parallax visual effect is applied to foreground elements (e.g., 606 c) and background elements (e.g., 606 a), but not to system text (e.g., 606 b) or complex functional blocks (e.g., 606d1, 606d 2).
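A hedged sketch of this clamped, per-layer parallax: each layer shifts in proportion to the wrist angle, scaled by a depth factor, with the shift saturating at a threshold angle. The constants and the linear mapping are illustrative assumptions.

```swift
import CoreGraphics

func parallaxOffset(wristAngleDegrees: CGFloat,
                    depthFactor: CGFloat,       // 0 = pinned, 1 = farthest layer
                    maxAngle: CGFloat = 15,     // assumed clamp threshold
                    maxOffset: CGFloat = 8) -> CGFloat {
    // Wrist changes beyond the threshold angle produce no further offset.
    let clamped = min(max(wristAngleDegrees, -maxAngle), maxAngle)
    return (clamped / maxAngle) * maxOffset * depthFactor
}

// System text and complications can use a depth factor of 0 to stay fixed,
// while foreground and background layers shift by different amounts:
let foregroundShift = parallaxOffset(wristAngleDegrees: 10, depthFactor: 0.4)
let backgroundShift = parallaxOffset(wristAngleDegrees: 10, depthFactor: 1.0)
```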
Fig. 6D illustrates an animation in which a watch user interface 606 is displayed in a simulated push-pull zoom animation (e.g., an animation in which a simulated camera is moved toward or away from a subject while adjusting zoom lenses in such a way that the subject remains the same size to create a visual effect in which the background grows in size and detail or the foreground increases in size relative to the background). In some embodiments, when the computer system 600 initially displays the watch user interface 606 (e.g., after closing an application, after selecting the watch user interface 606 via dial selection mode, after waking from a sleep state, after initial power-up, after unlocking the computer system 600, etc.), the computer system displays the watch user interface 606 in a push-pull zoom animation. The top portion of fig. 6D shows the computer system 600 displaying the watch user interface 606 with a push-pull zoom effect, wherein initially the background element 606a of the watch user interface 606 is displayed at the first simulated zoom level applied. The bottom portion of fig. 6D shows a second portion of the push-pull zoom animation in which the background element 606a has been updated to be displayed at a second simulated zoom level that is different from the first simulated zoom level. In some embodiments, computer system 600 automatically displays the simulated push-pull zoom animation on watch user interface 606 and maintains a second simulated zoom level applied to background element 606a after the display of the animation after the simulated animation is played.
In some embodiments, the simulated push-pull zoom applies a progressive zoom level to the background element 606a while maintaining the simulated zoom level applied to the foreground element 606c. In some implementations, displaying the simulated push-pull zoom animation involves initially applying a minimum amount of simulated zoom effect (e.g., a minimum magnification level) to the background element 606a; over the course of the animation, the simulated zoom effect applied to the background element 606a is updated such that, at the end of the animation, a higher amount of simulated zoom effect is applied to the background element 606a of the watch user interface 606. In other implementations, displaying the simulated push-pull zoom animation involves initially applying a maximum amount of simulated zoom effect (e.g., a maximum magnification level) to the background element 606a; over the course of the animation, the simulated zoom effect applied to the background element 606a is updated such that, at the end of the animation, a lower amount of simulated zoom effect is applied to the background element of the watch user interface 606 (e.g., 606a as shown in fig. 6A).
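A minimal sketch of the simulated push-pull zoom, assuming a simple ease-out interpolation of the background zoom level while the foreground level is held fixed (levels, duration, and easing are illustrative):

```swift
import Foundation
import CoreGraphics

struct PushPullZoomAnimation {
    let startBackgroundZoom: CGFloat = 1.0  // initial simulated zoom level
    let endBackgroundZoom: CGFloat = 1.3    // level maintained after the animation
    let foregroundZoom: CGFloat = 1.0       // unchanged throughout
    let duration: TimeInterval = 0.75

    // progress in [0, 1]; ease-out so the background zoom settles gently.
    func backgroundZoom(atProgress progress: Double) -> CGFloat {
        let clamped = min(max(progress, 0), 1)
        let eased = CGFloat(1 - (1 - clamped) * (1 - clamped))
        return startBackgroundZoom + (endBackgroundZoom - startBackgroundZoom) * eased
    }
}

let animation = PushPullZoomAnimation()
_ = animation.backgroundZoom(atProgress: 0.0)  // 1.0
_ = animation.backgroundZoom(atProgress: 1.0)  // 1.3 (maintained thereafter)
```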
At fig. 6E, computer system 600 displays watch user interface 606, where system text 606b has been updated to be displayed without lock icon 606b1, indicating that computer system 600 is not in a locked state. In some embodiments, computer system 600 transitions from the locked state to the unlocked state in response to a sequence of user inputs received via one or more input mechanisms in communication with computer system 600. In some embodiments, computer system 600 transitions from the locked state to the unlocked state in response to a plurality of tap inputs received at computer system 600 corresponding to entry of a passcode. In some implementations, computer system 600 transitions from the locked state to the unlocked state in response to a press input received on the rotatable and depressible input mechanism 604. In some embodiments, computer system 600 transitions from the locked state to the unlocked state in response to a sequence of one or more user inputs received via a computer system other than computer system 600, such as a paired phone (e.g., computer system 660), that is in communication with computer system 600. In some implementations, computer system 600 transitions from the locked state to the unlocked state in response to a wrist lift gesture.
At fig. 6E, computer system 600 detects a long press input 650a on watch user interface 606. At FIG. 6F, in response to detecting the long press input 650a, the computer system 600 displays a selection user interface 610a. The selection user interface 610a includes a representation 618a that is a graphical representation of the watch user interface 606. The representation 618a includes elements of the watch user interface 606 including background elements 606a, system text 606b, foreground elements 606c, and complex function blocks 606d1. In some embodiments, representation 618a is a static representation of watch user interface 606 and includes current time 606b3 with text indicating a time other than the current time and complex function block 606d1 with information other than real-time data.
The selection user interface 610a includes a sharing user-interactive graphical user interface object 614 that, when selected, causes the computer system 600 to display a user interface related to transmitting and/or sharing information about the watch user interface 606 to another computer system (e.g., phone, watch, tablet, etc.). The selection user interface 610a also includes an editing user interactive graphical user interface object 616 that, when selected, causes the computer system 600 to display an editing user interface for editing aspects of the watch user interface 606. The selection user interface 610a also includes a dial indicator 612a that includes visual and/or textual indicators that indicate the name of the watch user interface currently centered in the selection user interface 610a. At fig. 6F, dial indicator 612a indicates that currently indicated watch user interface 606, which is represented by representation 618a in selection user interface 610a, is titled "portrait".
The selection user interface 610a also includes at least partial views of representation 607a and representation 607b. Representations 607a and 607b represent watch user interfaces other than watch user interface 606. In some embodiments, in response to receiving a swipe input on display 602 and/or a rotational input via the rotatable and depressible input mechanism 604, the computer system displays representation 607a or representation 607b in the center of selection user interface 610a, with a fuller view of the respective representation than is shown at fig. 6F.
At FIG. 6F, computer system 600 detects tap input 650b on edit user-interactive graphical user interface object 616. At fig. 6G, in response to detecting tap input 650b, computer system 600 displays editing user interface 620a1. Editing user interface 620a1 includes representation 618b1, which represents watch user interface 606. In some embodiments, representations 618b1 and 618a are substantially identical. In some implementations, representation 618b1 substantially matches representation 618a but is displayed at a different size than representation 618a. At fig. 6G, representation 618b1 includes elements of watch user interface 606 including background element 606a, system text 606b, foreground element 606c, and complex function block 606d1.
Editing user interface 620a1 includes an aspect indicator 624a that includes a visual and/or textual representation of an aspect of watch user interface 606 that is currently selected for editing. At fig. 6G, the aspect indicator 624a indicates that the aspect of the watch user interface 606 that is currently selected for editing is "style".
Editing user interface 620a1 also includes a selection indicator 622a1 that includes a visual and/or textual representation of the currently selected option of the editable aspect of watch user interface 606. At fig. 6G, selection indicator 622a1 indicates that the currently selected "style" option of watch user interface 606 is "classical".
Editing user interface 620a1 also includes a location indicator 626a1. The position indicator 626a1 includes a graphical indication of the number of selectable options of the editable aspect of the watch user interface 606 currently being edited and the position of the currently selected option in the list of selectable options. For example, the position indicator 626a1 indicates that the currently selected option "classical" of the "style" aspect of the watch user interface 606 is at the top of the list of at least two possible options of the "style" aspect of the watch user interface 606.
At fig. 6G, computer system 600 detects rotational input 638a via the rotatable and depressible input mechanism 604. At fig. 6H, in response to detecting the rotational input 638a, the computer system 600 displays editing user interface 620a2. In some implementations, the computer system 600 displays editing user interface 620a2 in response to a swipe input received while displaying editing user interface 620a1 (e.g., a downward swipe input on display 602). Editing user interface 620a2 includes representation 618b2, an edited representation of watch user interface 606 in which current time 606b3 of system text 606b is now displayed in a different font than was previously used to display current time 606b3 (e.g., the font used at figs. 6B-6G). At fig. 6H, the "style" aspect of dial 606 has been edited to be displayed in a "modern" style rather than a "classical" style. Thus, selection indicator 622a2 indicates that the currently selected "style" option of watch user interface 606 is "modern", and location indicator 626a2 indicates that the position of the selected "style" option within the list of selectable options has been updated.
At FIG. 6H, computer system 600 detects swipe input 650c on editing user interface 620a2. At fig. 6I, in response to detecting swipe input 650c, computer system 600 displays editing user interface 620b1, which includes representation 618c1 of watch user interface 606. Editing user interface 620b1 also includes an aspect indicator 624b, which indicates that editing user interface 620b1 is for editing the location of system text 606b.
Editing user interface 620b1 also includes a selection indicator 622b1 that includes a visual and/or textual representation of the currently selected option for the editable aspect of watch user interface 606. At FIG. 6I, selection indicator 622b1 indicates that the currently selected "location" option of watch user interface 606 is "top". Accordingly, representation 618c1 includes system text 606b displayed toward the top of display 602.
Editing user interface 620b1 also includes a position indicator 626b1. Position indicator 626b1 includes a graphical indication of the number of selectable options for the editable aspect of watch user interface 606 currently being edited and of the position of the currently selected option in the list of selectable options. For example, position indicator 626b1 indicates that the currently selected option, "top", of the "position" aspect of system text 606b is at the top of a list of at least two selectable options for the "position" aspect of system text 606b.
At FIG. 6I, computer system 600 detects rotational input 638b via rotatable and depressible input mechanism 604. At FIG. 6J, in response to detecting rotational input 638b, computer system 600 displays editing user interface 620b2, which includes representation 618c2. Representation 618c2 substantially matches representation 618c1, except that the location of system text 606b has been altered such that system text 606b is now displayed closer to the bottom of representation 618c2, and thus closer to the bottom of display 602. Editing user interface 620b2 also includes aspect indicator 624b, which indicates that editing user interface 620b2 is a user interface for editing the location of system text 606b.
Editing user interface 620b2 also includes a selection indicator 622b2 that includes a visual and/or textual representation of the currently selected option for the editable aspect of watch user interface 606. At FIG. 6J, selection indicator 622b2 indicates that the currently selected "location" option of system text 606b is "bottom".
Editing user interface 620b2 also includes a position indicator 626b2. Position indicator 626b2 includes a graphical indication of the number of selectable options for the editable aspect of watch user interface 606 currently being edited and of the position of the currently selected option in the list of selectable options. For example, position indicator 626b2 indicates that the currently selected option, "bottom", of the "position" aspect of system text 606b is lower in the list than the "top" option indicated by position indicator 626b1 at FIG. 6I.
At FIG. 6I, computer system 600 detects swipe input 650d on editing user interface 620b1. At FIG. 6K, in response to detecting swipe input 650d, computer system 600 displays editing user interface 620c1, which includes representation 618d1. Representation 618d1 substantially matches representation 618c1. Editing user interface 620c1 also includes an aspect indicator 624c that indicates that editing user interface 620c1 is a user interface for editing the color of aspects of watch user interface 606.
Editing user interface 620c1 also includes a selection indicator 622c1 that includes a visual and/or textual representation of the currently selected color option for aspects of watch user interface 606. In some embodiments, the currently selected color option is applied to an element of system text 606b. In some embodiments, the currently selected color option is applied to some elements of system text 606b (e.g., current time 606b3) but not to other elements (e.g., date 606b2 and/or lock icon 606b1). At FIG. 6K, selection indicator 622c1 indicates that the currently selected "color" option of watch user interface 606 is "orange".
Editing user interface 620c1 also includes color option indicators 628, which include various selectable color options. Selected color indicator 628a surrounds the currently selected color, providing a visual and/or graphical indication of the selected color and of its location within color option indicators 628.
At FIG. 6K, computer system 600 detects swipe input 650e on editing user interface 620c1. At FIG. 6L, in response to detecting swipe input 650e, computer system 600 displays editing user interface 620d1, which includes representation 618e1. Representation 618e1 is substantially the same as representation 618d1, except that representation 618e1 is displayed at a larger size and a blurring and/or darkening effect is applied to the elements of representation 618e1 that are not currently being edited (e.g., the elements of watch user interface 606 other than the complex function block). Editing user interface 620d1 also includes an aspect indicator 624d that indicates that editing user interface 620d1 is a user interface for editing the complex function blocks displayed with watch user interface 606.
At FIG. 6L, computer system 600 detects tap input 650f on complex function block 606d1. At FIG. 6M, in response to detecting tap input 650f, computer system 600 displays editing user interface 620d2, which includes a plurality of selectable complex function block options to be displayed with watch user interface 606.
FIG. 6M includes complex function block options that may be selected for display with watch user interface 606. In some embodiments, the selectable complex function blocks are categorized into a plurality of categories based on features and/or applications associated with the selectable complex function blocks. Editing user interface 620d2 includes category 632a, which includes a visual and/or textual indication that the complex function blocks under category 632a are related to "heart rate". Editing user interface 620d2 also includes category 632b, which includes a visual and/or textual indication that the complex function blocks under category 632b are related to "weather". In some embodiments, a category may include multiple complex function blocks, in which case the multiple complex function blocks associated with a given category are displayed below the textual and/or visual indication associated with the category. In some implementations, editing user interface 620d2 is initially displayed centered on, and/or with focus selection on, the complex function block selected in the previous user interface. In some implementations, the computer system navigates from one complex function block option to another (e.g., moving the focus selection) by scrolling in response to swipe inputs on editing user interface 620d2 and/or rotational inputs via rotatable and depressible input mechanism 604.
Editing user interface 620d2 also includes a cancel user interactive graphical user interface object 630 that, when selected, causes computer system 600 to cease displaying editing user interface 620d2 and display editing user interface 620d1. Editing user interface 620d2 also includes a close user interactive graphical user interface object 634 that, when selected, edits watch user interface 606 to be displayed without complex function blocks (e.g., without 606d1 or 606d2).
Editing user interface 620d2 also includes a location indicator 626c. Location indicator 626c includes a graphical indication of the number of selectable options for the complex function block displayed with watch user interface 606 and of the location, within the list of selectable complex function block options, of the complex function block that currently has focus selection. For example, at FIG. 6M, location indicator 626c indicates the relative location of complex function block 606d2, to be displayed with watch user interface 606, within the list of selectable complex function block options.
At FIG. 6M, computer system 600 detects tap input 650g on complex function block 606d2, and detects press input 636a via rotatable and depressible input mechanism 604 while complex function block 606d2 has focus selection. At FIG. 6N, in response to detecting tap input 650g or press input 636a, computer system 600 displays editing user interface 620d3, which includes representation 618e2. Representation 618e2 is substantially the same as representation 618e1, except that the complex function block option has been edited such that representation 618e2 includes complex function block 606d2, a heart rate complex function block, instead of complex function block 606d1, a weather complex function block.
At FIG. 6M, computer system 600 detects press input 636b via rotatable and depressible input mechanism 604. At FIG. 6O, in response to detecting press input 636b, computer system 600 displays selection user interface 610b, which is substantially identical to selection user interface 610a except that selection user interface 610b includes representation 618f, which reflects the edits made to watch user interface 606 at FIGS. 6G-6N. In particular, representation 618f differs from representation 618a in that current time 606b3 is displayed in a different font and representation 618f includes complex function block 606d2 instead of complex function block 606d1.
At FIG. 6O, computer system 600 detects tap input 650h on representation 618f and press input 636c via rotatable and depressible input mechanism 604. At FIG. 6P, in response to detecting tap input 650h or press input 636c, computer system 600 displays watch user interface 638, which includes background element 606a, system text 606b displayed in a different font than that used for watch user interface 606 at FIG. 6A, foreground element 606c, and complex function block 606d2.
At FIG. 6Q, computer system 600 displays watch user interface 640. In some implementations, computer system 600 transitions from displaying watch user interface 638 to displaying watch user interface 640 in response to an input (e.g., a tap input on watch user interface 638). In some implementations, computer system 600 transitions from displaying watch user interface 638 to displaying watch user interface 640 based on the passage of time (e.g., system text 606b indicates that the current time at FIG. 6P is 10:09, and system text 640b indicates that the current time at FIG. 6Q is 3:08). In some implementations, computer system 600 transitions from displaying watch user interface 638 to displaying watch user interface 640 in response to a wrist-lift gesture.
Watch user interface 640 includes a background element 640a, system text 640b, a foreground element 640c, and a complex function block 640d. Like watch user interface 606, the elements of watch user interface 640 are arranged and displayed in a virtual stack. The elements of watch user interface 640 are arranged such that background element 640a is below system text 640b, which is below complex function block 640d, which is below foreground element 640c. Notably, the virtual arrangement of the foreground element (e.g., 640c) in front of (e.g., overlaying) a complex function block (e.g., 640d) differs from watch user interface 606.
In some implementations, when the computer system generates and/or creates a watch user interface, such as watch user interface 640, based on an image with depth data, computer system 600 virtually arranges the layers in accordance with a determination that the layers can be displayed in a particular order such that a layer arranged on top obscures the layers arranged below it by no more than a threshold amount. For example, in some embodiments, in accordance with a determination that a foreground element (e.g., 640c) will not obscure a complex function block (e.g., 640d) by more than a threshold amount (e.g., 1/5 of the complex function block, 1/6 of the complex function block, etc.), the foreground element is disposed in front of (e.g., overlays) the complex function block.
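To make this threshold determination concrete, below is a minimal Swift sketch of such a layer-ordering check. The names (LayerOrder, obscuredFraction, order), the rectangle-based occlusion model, and the default 1/5 threshold are illustrative assumptions rather than details of any actual implementation; the same check generalizes to layering over system text, as discussed next.

    import CoreGraphics

    /// Hypothetical layer-ordering helper; names, the rectangle-based
    /// occlusion model, and the default threshold are illustrative only.
    enum LayerOrder {
        case foregroundInFront   // foreground element overlays the lower layer
        case foregroundBehind    // lower layer (e.g., a complication) stays on top
    }

    /// Fraction of `element` (e.g., a complication's frame or the system
    /// text's frame) covered by the foreground element's frame.
    func obscuredFraction(of element: CGRect, by foreground: CGRect) -> CGFloat {
        let overlap = element.intersection(foreground)
        guard !overlap.isNull, element.width > 0, element.height > 0 else { return 0 }
        return (overlap.width * overlap.height) / (element.width * element.height)
    }

    /// Places the foreground in front only when it would obscure no more
    /// than `threshold` of the element, mirroring the determination above.
    func order(foreground: CGRect, element: CGRect, threshold: CGFloat = 0.2) -> LayerOrder {
        obscuredFraction(of: element, by: foreground) <= threshold
            ? .foregroundInFront
            : .foregroundBehind
    }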
The above procedure for layering foreground elements over complex function blocks may also be applied to layering foreground elements over system text. For example, as described below with respect to FIG. 6R, computer system 600 may generate a watch user interface based on a media item having depth data, where, in accordance with a determination that there is insufficient space in the media item to generate and/or display a watch user interface in which the system text can be disposed behind the foreground element without being obscured by more than a threshold amount, the foreground element is instead disposed below the system text.
At FIG. 6R, computer system 600 displays watch user interface 642. In some implementations, computer system 600 transitions from displaying watch user interface 640 to displaying watch user interface 642 in response to an input (e.g., a tap input on watch user interface 640). In some implementations, computer system 600 transitions from displaying watch user interface 640 to displaying watch user interface 642 based on the passage of time (e.g., system text 640b indicates that the current time at FIG. 6Q is 3:08, and system text 642b indicates that the current time at FIG. 6R is 9:01). In some implementations, computer system 600 transitions from displaying watch user interface 640 to displaying watch user interface 642 in response to a wrist-lift gesture.
Watch user interface 642 includes a background element 642a, system text 642b, a foreground element 642c, and a complex function block 642d. Like watch user interface 606 and watch user interface 640, the elements of watch user interface 642 are virtually arranged as layers. In watch user interface 642, the elements are arranged in a virtual stack such that background element 642a is below foreground element 642c, which is below system text 642b and complex function block 642d. Notably, this arrangement, in which the foreground element (e.g., 642c) is virtually disposed under system text 642b, differs from watch user interface 606.
In some implementations, when a computer system generates a watch user interface, such as watch user interface 642, based on media with depth data, computer system 600 arranges the elements of the watch user interface in a virtual stack in accordance with a determination that the layers can be displayed in a particular order such that a layer arranged on top obscures the layers arranged below it by no more than a threshold amount. For example, in some embodiments, in accordance with a determination that the foreground element (e.g., 642c) will not obscure more than a threshold amount of the system text (e.g., 1/5 of the system text, 1/6 of the system text, etc.), the foreground element is disposed in front of (e.g., overlays) the system text.
At FIG. 6R, computer system 600 generates and displays watch user interface 642 in accordance with a determination that there is insufficient space in the media item to generate a watch user interface in which system text 642b is disposed behind the foreground element without being obscured by foreground element 642c by more than a threshold amount. Accordingly, computer system 600 generates watch user interface 642 with its elements arranged in a virtual stack such that foreground element 642c is below system text 642b.
FIGS. 6S-6U illustrate user interfaces for enabling and displaying user interfaces that use media items having depth data via a computer system 660, where computer system 660 is in wireless communication with computer system 600. In some embodiments, computer system 600 and computer system 660 are logged into the same user account. In some embodiments, computer system 600 and computer system 660 are paired. In some embodiments, computer system 660 optionally includes one or more features of device 100, device 300, or device 500. In some embodiments, computer system 660 is a tablet, phone, laptop, desktop, camera, or the like.
At FIG. 6S, computer system 660 displays, via display 662, my watch user interface 675a, which includes options for editing a watch user interface that may be displayed via computer system 600. My watch user interface 675a includes a back user interactive graphical user interface object 644 that, when selected, causes computer system 660 to display a user interface for selecting which computer system (e.g., watch) is being configured via computer system 660. My watch user interface 675a also includes a watch name 646 indicating that the watch currently selected for configuration via computer system 660 is Jane's watch. At FIG. 6S, computer system 600 corresponds to Jane's watch. My watch user interface 675a also includes a search bar 664 that, when selected, can be used to search among a plurality of selectable watch user interfaces available on computer system 600 for configuration via computer system 660.
My watch user interface 675a further includes a header 647 that includes a visual and/or textual indication that the representations of dials displayed below header 647 correspond to dials that are available on computer system 600 (e.g., Jane's watch) (e.g., stored in the local memory of the computer system). My watch user interface 675a includes representations of the multiple dials available on computer system 600, including a representation 648 of the watch user interface titled "meridian", a representation 652 of the watch user interface titled "portrait", which corresponds to watch user interface 642 being displayed via computer system 600, and a representation 654 of the watch user interface titled "sports".
My watch user interface 675a also includes an options area 666. Options area 666 includes a plurality of selectable options for configuring various features of computer system 600. The options area 666 includes a notification user interactive graphical user interface object 666a that, when selected, causes the computer system 660 to display a user interface for editing notification settings of the computer system 600. The options area 666 also includes a display user interactive graphical user interface object 666b which, when selected, causes the computer system 660 to display a user interface including options for editing the display and brightness settings of the computer system 600.
My watch user interface 675a also includes selectable options for displaying, via computer system 660, user interfaces other than my watch user interface 675a that relate to configuring features of computer system 600. For example, my watch user interface 675a includes a dial gallery user-interactive graphical user interface object 656 that, when selected, causes computer system 660 to display a user interface for viewing additional watch user interfaces available on computer system 600. My watch user interface 675a also includes a discovery user interactive graphical user interface object 658 that, when selected, causes computer system 660 to display a user interface for obtaining (e.g., downloading) additional watch user interfaces that have not yet been downloaded onto computer system 600. My watch user interface 675a also includes a my watch user-interactive graphical user interface object 654 that corresponds to my watch user interface 675a and, when selected, causes computer system 660 to display my watch user interface 675a.
At FIG. 6S, computer system 600 displays watch user interface 642, which maintains the features of watch user interface 642 as described and illustrated at FIG. 6R above. At FIG. 6S, computer system 660 detects tap input 650i on representation 652, which corresponds to watch user interface 642 currently displayed on computer system 600.
At FIG. 6T, in response to detecting tap input 650i, computer system 660 displays my watch user interface 675b, which includes additional options for configuring the manner in which watch user interface 642 is displayed via computer system 600. My watch user interface 675b includes a back user interactive graphical user interface object 671 that, when selected, causes computer system 660 to display my watch user interface 675a. My watch user interface 675b also includes a dial name 676 indicating that the name of the watch user interface currently selected for configuration via computer system 660 is "portrait". My watch user interface 675b also includes a share user-interactive graphical user interface object 669 that, when selected, causes computer system 660 to display a user interface related to transmitting and/or sharing information about watch user interface 642 to another device (e.g., another computer system).
My watch user interface 675b also includes representation 652a, which is a representation of the watch user interface (e.g., watch user interface 642) that is currently being displayed on computer system 600. In some implementations, representation 652a is a live preview of the configuration currently selected for display via computer system 600. Thus, in some embodiments, representation 652a is updated in response to inputs received via computer system 660, such that selecting an option on my watch user interface 675b causes both representation 652a as displayed by computer system 660 and watch user interface 642 as displayed by computer system 600 to be updated. My watch user interface 675b also includes a description 674 that includes a textual description of features of the watch user interface currently selected for editing (e.g., the "portrait" watch user interface, corresponding to watch user interface 642).
My watch user interface 675b also includes a color area 668 for selecting a color in which aspects of watch user interface 642 are displayed via computer system 600. Color area 668 includes selected color 668a, which indicates the currently selected color in which aspects of watch user interface 642 are being displayed. In some implementations, the aspects affected by the color selection include system text 642b. In this manner, my watch user interface 675b may be used to edit the color of aspects of watch user interface 642 in a manner similar to the color editing process described above with respect to editing user interface 620c1.
My watch user interface 675b also includes an options area 670 that includes selectable options for editing aspects of watch user interface 642. Options area 670 includes a content header 670a indicating that the options included in options area 670 below header 670a are for editing the content of the currently selected watch user interface (e.g., 642). Options area 670 also includes an album user interactive graphical user interface object 670b that, when selected, configures the "portrait" watch user interface to be displayed using media items with depth data from a selected album of media items. Selection indicator 672 is displayed as a check mark on album user interactive graphical user interface object 670b to indicate that watch user interface 642 is currently configured to be displayed using an album of media items. Album name user interactive graphical user interface object 670c1 includes the title of the album from which the media items including depth data are selected for computer system 600 to generate the watch user interface. At FIG. 6T, album name user interactive graphical user interface object 670c1 indicates that the media items with depth data being used to generate watch user interface 642 are currently selected from the album titled "spring". Options area 670 further includes a photo user interactive graphical user interface object 670d that, when selected, configures the "portrait" watch user interface (e.g., 642) to be displayed using media items with depth data from photos on computer system 600 and/or photo albums accessible to computer system 600. Options area 670 further includes a dynamic user-interactive graphical user interface object 670e that, when selected, configures the "portrait" watch user interface (e.g., 642) to be displayed using media items with depth data from new and/or updated media items and/or media items with depth data that newly become available via computer system 600 and/or computer system 660.
At FIG. 6U, computer system 660 displays my watch user interface 675c. In my watch user interface 675c, album name user interactive graphical user interface object 670c1 has been replaced with album name user interactive graphical user interface object 670c2, which indicates that the album from which the media items with depth data of the "portrait" watch user interface are being generated has been updated from "spring" to "summer". Accordingly, representation 652a has been replaced with representation 652b, which corresponds to watch user interface 680 displayed via computer system 600.
Watch user interface 680 is generated and displayed by computer system 600 based on a media item with depth data selected from the album titled "summer" instead of the previously selected album titled "spring". In some embodiments, computer system 660 transitions from my watch user interface 675b to my watch user interface 675c, and computer system 600 transitions from displaying watch user interface 642 to displaying watch user interface 680, in response to a sequence of user inputs received at computer system 660 that includes a tap input on album name user interactive graphical user interface object 670c1. Thus, at FIG. 6U, in response to a sequence of one or more user inputs received via computer system 660 while displaying my watch user interface 675b, including a tap input on album name user interactive graphical user interface object 670c1, computer system 600 displays watch user interface 680, which includes background element 680a, system text 680b, foreground element 680c, and complex function block 680d.
Thus, FIGS. 6S-6U illustrate that a watch user interface displayed via computer system 600 may be updated and/or configured via inputs received at a computer system in wireless communication with (e.g., paired with) computer system 600. In addition, FIGS. 6S-6U demonstrate that the source of the media items with depth data used to generate a watch user interface for display via computer system 600 may be manually edited and/or configured via computer system 660 (e.g., a computer system in wireless communication with computer system 600).
FIG. 7 is a flow chart illustrating a method for managing dials based on depth data of previously captured media items using a computer system, in accordance with some embodiments. Method 700 is performed at a computer system (e.g., 100, 300, 500, 600) (e.g., a smart watch, wearable electronic device, smart phone, desktop computer, laptop computer, or tablet computer) that is in communication with a display generation component and one or more input devices (e.g., a display controller, a touch-sensitive display system, a rotatable input mechanism, and/or a touch-sensitive surface). Some operations in method 700 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, method 700 provides an intuitive way to manage dials based on depth data of previously captured media items. The method reduces the cognitive burden on a user in managing dials based on depth data of previously captured media items, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to manage dials based on depth data of previously captured media items more quickly and efficiently conserves power and increases the time between battery charges.
In some embodiments, the dial described in method 700 may be displayed and/or edited in a manner described below with respect to method 1500 (e.g., fig. 15) and/or as described below with respect to fig. 14A-14R.
The computer system (e.g., 600) receives (702), via one or more input devices, input (e.g., a lift-up to wake gesture, a tap gesture, a digital crown rotation gesture, etc.) corresponding to a request to display a media item-based user interface.
In response to receiving the input, the computer system displays (704) a user interface (e.g., 606) (e.g., a watch user interface, wake screen, dial, or lock screen) via the display generation component. Displaying the user interface includes simultaneously displaying: a media item (706) (e.g., a photograph, video, GIF, and/or animation) comprising a background element (e.g., 606a as shown in FIG. 6B) and a foreground element (e.g., 606c as shown in FIG. 6B) segmented from the background element based on depth information; and system text (708) (e.g., 606b as shown in FIG. 6B) (e.g., a first time and/or a current date), wherein the system text is displayed in front of the background element (e.g., visually overlaying the background element, or at a location corresponding to a portion of the background element) and behind the foreground element (e.g., at least partially visually overlaid by the foreground element) and has content dynamically selected based on the context of the computer system. In some implementations, the media item includes depth data (e.g., data that can be used to segment the foreground element from one or more background elements, such as data indicating that the foreground element was less than a threshold distance from the one or more cameras when the media was captured while the background element was more than the threshold distance from the one or more cameras, a data set related to the distance between two objects in the media, a data set of relative distances between the camera sensor and at least a first object and a second object in the field of view of the camera sensor when the media was captured, and/or multiple layers). In some embodiments, the background element and the foreground element are selected (in some embodiments, automatically) based on the depth data (e.g., in accordance with a determination that the background element is positioned behind the foreground element). Automatically creating a user interface (e.g., 606 as shown in FIG. 6B) in which displaying the user interface includes simultaneously displaying a media item comprising a background element and a foreground element segmented from the background element based on depth information, and system text that is displayed in front of the background element and behind the foreground element and has content dynamically selected based on the context of the computer system, enables the user interface to be displayed without requiring the user to provide multiple inputs to configure the user interface (e.g., by manually segmenting the media item into elements, and/or by selecting which element of the media item should be the foreground element and which should be the background element). Performing operations when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping display the user interface upon determining that the media item includes a background element and a foreground element segmented from the background element based on depth information), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
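As an illustration of the kind of depth-based segmentation described above, the following Swift sketch thresholds a per-pixel depth map into a foreground mask. The flat array representation, the DepthSegmentation name, and the 2-meter cutoff are assumptions made for the example, not details from the disclosure; real depth data would come from the device's capture pipeline.

    import Foundation

    /// Illustrative depth-threshold segmentation. The depth map holds each
    /// pixel's distance from the camera in meters.
    struct DepthSegmentation {
        let foregroundMask: [Bool]   // true where a pixel belongs to the foreground

        init(depthMap: [Float], thresholdMeters: Float = 2.0) {
            // Pixels closer than the threshold are treated as foreground;
            // everything at or beyond it is treated as background.
            foregroundMask = depthMap.map { $0 < thresholdMeters }
        }
    }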
In some implementations, in accordance with a determination that the input is received in a first context (e.g., at a first time, on a first date, and/or in a first time zone), the computer system (e.g., 600) displays first content (e.g., 606b as shown in FIG. 6B) in the system text (e.g., a first time and/or a first date). In some implementations, in accordance with a determination that the input is received in a second context (e.g., at a second time, on a second date, and/or in a second time zone), the computer system displays second content (e.g., 640b as shown in FIG. 6Q) in the system text that is different from the first content (e.g., a second time and/or a second date). Displaying system text having different content according to different contexts provides visual feedback about the context of the computer system. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user quickly and easily view information about the context of the computer system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system (e.g., 600) detects a change in the context of the computer system (e.g., a time change, a date change, and/or a time zone change). In some embodiments, in response to detecting the change in the context of the computer system, the computer system updates the system text (e.g., 606b as shown in FIG. 6B) based at least in part on the change in the context. In some embodiments, updating the system text includes modifying the system text to display different content. Updating the system text based on changes in the context of the computer system provides improved visual feedback by enabling the computer system to display context-specific system text that quickly and easily informs the user about current context information. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user view context information quickly and easily), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
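A minimal Swift sketch of context-dependent system text follows: the displayed string is derived from the current context (here, just the date, time, and time zone), so a context change yields different content. The systemText name and the specific format are illustrative choices, not the actual formatting logic.

    import Foundation

    /// Derives the system text from the current context; a context change
    /// (e.g., a new time or time zone) produces different content.
    func systemText(for date: Date, in timeZone: TimeZone) -> String {
        let formatter = DateFormatter()
        formatter.timeZone = timeZone
        formatter.dateFormat = "EEE d MMM\nh:mm"   // e.g., "Tue 6 Feb" above "10:09"
        return formatter.string(from: date)
    }

    // A change in context (here, the time zone) yields different system text.
    let now = Date()
    let textAtHome = systemText(for: now, in: TimeZone(identifier: "America/Los_Angeles")!)
    let textAbroad = systemText(for: now, in: TimeZone(identifier: "Asia/Tokyo")!)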
In some implementations, the media-item-based user interface is a dial (e.g., 606 as shown in FIG. 6B) (e.g., a dial including a time indication and one or more watch complex function blocks). Displaying the user interface as a dial provides improved visual feedback by helping the user quickly and easily access the information provided by the user interface in the dial. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user access the information included in the user interface quickly and easily), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the user interface (e.g., 606 as shown in FIG. 6B) is an initial display screen (e.g., a wake screen or lock screen) of the computer system (e.g., 600) (e.g., a smart phone, tablet, computer, or TV) when transitioning from a low power state (e.g., as shown in FIG. 6A) (e.g., an off state, sleep state, low power mode, battery saving mode, or economy mode) to a higher power state (e.g., an active state, on state, or normal (e.g., non-low power) mode). Displaying the user interface as the initial display screen when the computer system transitions from a low power state to a higher power state provides improved visual feedback by helping the user quickly and easily access information when the computer system transitions from the low power state to the higher power state (e.g., upon waking). Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user access the information provided in the user interface), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the user interface is a lock screen (e.g., 606 as shown in FIG. 6A) (e.g., where authentication (e.g., biometric authentication or password authentication) is required to unlock the computer system). In some implementations, the lock screen includes a prompt (e.g., instructions) to provide information to unlock the device. Displaying the user interface as a lock screen improves visual feedback by helping the user quickly and easily access the information provided in the user interface while restricting access to other features of the device based on the locked state of the device. Providing improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user access the information included in the user interface when the device is in a locked state), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently. Furthermore, displaying the user interface as a lock screen user interface improves the security of the device while maintaining functionality by helping the user view the information included in the user interface while other features of the computer system are disabled due to the device being in the locked state.
In some embodiments, displaying the system text (e.g., 606b as shown in FIG. 6B) includes displaying the current time (e.g., 606b3 as shown in FIG. 6B) (e.g., the time of day and/or the time in the current time zone) and/or the current date in the system text. In some embodiments, the text is continually updated over time to reflect the time of day. In some embodiments, the text is coordinated with, and/or intended to reflect, coordinated universal time with an offset based on the currently selected time zone. Displaying a user interface (e.g., 606 as shown in FIG. 6B) in which displaying the user interface includes displaying system text including the current time and/or the current date allows the user interface to include information about a current activity state of the computer system, which provides improved visual feedback by enabling the user to quickly and efficiently view the current activity state information. Providing improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine the date and/or time quickly), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the system text (e.g., 606b as shown in FIG. 6B) is at least partially obscured by the foreground element (e.g., 606c as shown in FIG. 6B). Displaying the system text at least partially obscured by the foreground element of the media item allows elements displayed in the user interface (e.g., 606 as shown in FIG. 6B) (e.g., the system text and/or the foreground element) to be displayed at a larger size without degrading the functionality and/or readability of the system text, which provides improved visual feedback by allowing the user to easily and effectively view the content of the system text (e.g., in a larger font that improves readability) and/or view the foreground element of the media item at a larger size so as to see the foreground element more clearly. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user view larger-sized foreground elements and system text without impeding readability), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the media item includes a photograph and/or a video. Displaying a user interface that includes system text (e.g., 606b as shown in FIG. 6B) and a media item, where the media item is a photograph and/or a video, provides improved visual feedback by allowing the user to easily and effectively view the photograph and/or video while simultaneously viewing the system text. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating and/or interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the user interface (e.g., 606 as shown in FIG. 6B) includes displaying an animation. In some implementations, the animation includes a change in the appearance of one or more elements of the user interface over time based at least in part on the depth information. In some embodiments, the animation includes displaying the foreground element with a first set of characteristics and displaying the background element with a second set of characteristics different from the first set of characteristics. Displaying an animation that includes a change in the appearance of one or more elements of the user interface over time based at least in part on the depth information of the media item provides improved visual feedback as to which portion of the media item is the background element (e.g., 606a as shown in FIG. 6B) and which portion of the media item is the foreground element (e.g., 606c as shown in FIG. 6B). Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by visually identifying the different elements of the media item), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the animation includes a simulated zoom effect. In some embodiments, the zoom effect includes blurring the background element (e.g., 606a as shown in FIG. 6B). In some implementations, the zoom effect includes reducing the blurring of the foreground element (e.g., 606c as shown in FIG. 6B) (e.g., bringing the foreground element into focus). In some embodiments, the zoom effect includes blurring the background element while reducing the blurring of the foreground element. Displaying an animation of the media item that includes a simulated zoom effect provides improved visual feedback as to which portion of the media item is the background element and which portion is the foreground element. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by visually identifying the different elements of the media item), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the animation includes a simulated push-pull (dolly) zoom effect. In some implementations, the push-pull zoom effect includes displaying an animation in which a simulated camera moves toward or away from the foreground element (e.g., 606c as shown in FIG. 6D) while the zoom lens is adjusted in a manner that keeps the foreground element (e.g., 606c) the same size, creating a visual effect in which the background (e.g., 606a as shown in FIG. 6D) grows in size and detail or the foreground increases in size relative to the background. In some embodiments, the push-pull zoom effect includes updating a simulated zoom effect applied to the background element (e.g., 606a) while maintaining the foreground element (e.g., 606c as shown in FIG. 6D) at a constant zoom level. In some implementations, the push-pull zoom effect includes zooming out on the background element (e.g., 606a) while maintaining the simulated zoom level applied to the foreground element (e.g., 606c). In some implementations, the push-pull zoom effect includes zooming in on the background element (e.g., 606a) while maintaining the simulated zoom level applied to the foreground element (e.g., 606c). Displaying an animation of the media item that includes a simulated push-pull zoom effect provides improved visual feedback regarding which portion of the media item is the background element and which portion is the foreground element (e.g., 606c). Providing improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by visually identifying the different elements of the media item), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
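Below is a minimal Swift sketch of the push-pull (dolly) zoom idea: the background's scale is animated while the foreground's scale is held constant, so the background appears to grow or shrink behind a fixed-size subject. The DollyZoom name and all parameter values are illustrative assumptions, not details of any actual implementation.

    import CoreGraphics

    /// Sketch of a dolly-zoom scale schedule for a two-layer media item.
    struct DollyZoom {
        var backgroundStartScale: CGFloat = 1.0
        var backgroundEndScale: CGFloat = 1.3
        let foregroundScale: CGFloat = 1.0   // held constant for the whole animation

        /// Layer scales at a given animation progress in 0...1.
        func scales(at progress: CGFloat) -> (background: CGFloat, foreground: CGFloat) {
            let t = min(max(progress, 0), 1)
            // Only the background's scale changes over time.
            let background = backgroundStartScale
                + (backgroundEndScale - backgroundStartScale) * t
            return (background, foregroundScale)
        }
    }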
In some implementations, the animation includes a parallax effect (e.g., as shown in FIG. 6C). In some implementations, the parallax effect includes updating the position at which the foreground element (e.g., 606c as shown in FIG. 6C) is displayed relative to the background element (e.g., 606a as shown in FIG. 6C). In some implementations, the parallax effect includes translating the foreground element across the display at a first speed and translating the background element across the display at a second speed different from the first speed. Displaying an animation of the media item that includes a parallax effect provides improved visual feedback as to which portion of the media item is the background element and which portion is the foreground element. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by visually identifying the different elements of the media item), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the computer system detects movement (e.g., movement of the computer system, such as movement caused by a user of the computer system (e.g., a wrist-tilt gesture)) while the computer system (e.g., 600) is in a higher power state (e.g., an active state, on state, or normal (e.g., non-low power) mode) (e.g., as shown in FIG. 6C). In some embodiments, in response to detecting the movement, the computer system displays, via the display generation component, the user interface with a simulated parallax effect having a direction and/or magnitude determined based on the direction and/or magnitude of the movement. In some implementations, the parallax effect is based at least in part on the degree and/or direction of the movement. In some implementations, displaying the user interface with the simulated parallax effect (e.g., 606 as shown in FIG. 6C) includes displaying the media item with a simulated panning effect in which the foreground element appears to move faster than the background element as the field of view pans. In some implementations, the user interface is not displayed with the parallax effect in response to detecting movement while the computer system is in a low power state (e.g., an off state, sleep state, low power mode, battery saving mode, or economy mode). Displaying an animation of the media item that includes a parallax effect in response to movement provides improved visual feedback regarding which portion of the media item is the background element and which portion is the foreground element. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by visually identifying the different elements of the media item), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
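A hedged Swift sketch of the motion-driven parallax described above follows: a single tilt vector derived from the detected movement drives both layers, with the foreground translated by a larger factor than the background so the two layers separate visually. The function name and the factor values are assumptions for the example.

    import CoreGraphics

    /// Computes per-layer offsets for a parallax effect. Direction and
    /// magnitude follow the detected movement; the foreground moves farther
    /// per unit of tilt than the background.
    func parallaxOffsets(for tilt: CGVector,
                         foregroundFactor: CGFloat = 12,
                         backgroundFactor: CGFloat = 4) -> (foreground: CGVector, background: CGVector) {
        let foreground = CGVector(dx: tilt.dx * foregroundFactor,
                                  dy: tilt.dy * foregroundFactor)
        let background = CGVector(dx: tilt.dx * backgroundFactor,
                                  dy: tilt.dy * backgroundFactor)
        return (foreground, background)
    }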
In some embodiments, the computer system (e.g., 600) displays, via the display generation component, an editing user interface (e.g., 620a1) for editing a first complex function block (e.g., 606d1 as shown in FIG. 6B) of the user interface (e.g., 606 as shown in FIG. 6B). In some implementations, a complex function block refers to any clock face feature other than those used to indicate the hours and minutes of a time (e.g., clock hands or hour/minute indications). In some implementations, complex function blocks provide data obtained from applications. In some embodiments, a complex function block includes an affordance that, when selected, launches a corresponding application. In some implementations, complex function blocks are displayed at fixed, predefined locations on the display. In some implementations, complex function blocks occupy respective locations at particular regions of the dial (e.g., lower right, lower left, upper right, and/or upper left). In some implementations, while displaying the editing user interface, the computer system receives a first sequence of one or more user inputs (e.g., touch inputs, rotational inputs, and/or press inputs) via the one or more input devices. In some embodiments, in response to receiving the first sequence of one or more user inputs, the computer system edits the first complex function block (e.g., as shown in FIGS. 6L-6N). In some embodiments in which the complex function block includes information from a first application, editing the complex function block includes editing the complex function block to display different information from the first application. In some embodiments, editing the complex function block includes editing the complex function block to display information from a second application different from the first application. Editing the first complex function block in response to receiving a sequence of one or more user inputs while displaying the editing user interface enables the user to edit the first complex function block easily and in an intuitive manner. Providing improved control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating and/or interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the system text (e.g., 606b as shown in FIG. 6B) displayed in the user interface (e.g., 606 as shown in FIG. 6B) is displayed in a first font. In some embodiments, after displaying the user interface in which the system text is displayed in the first font, the computer system receives, via the one or more input devices, a request to edit the user interface (e.g., a touch input, rotational input, and/or press input) (e.g., as shown in FIG. 6F). In some embodiments, in response to receiving the request to edit the user interface, the computer system displays, via the display generation component, an editing user interface (e.g., 620a1) for editing the user interface. In some embodiments, while displaying the editing user interface, the computer system receives a second sequence of one or more user inputs (e.g., touch inputs, rotational inputs, and/or press inputs) via the one or more input devices (e.g., as shown in FIGS. 6G-6H). In some embodiments, in response to receiving the second sequence of one or more user inputs, the computer system selects a second font for the system text. In some embodiments, after selecting the second font for the system text, the computer system displays the user interface, wherein the system text displayed in the user interface is displayed in the second font, which is different from the first font (e.g., as shown in FIG. 6P). In some embodiments, updating the user interface to display the system text in the second font involves updating the user interface to cease displaying the system text in the first font. Editing the font in which the system text is displayed in response to receiving the second sequence of one or more user inputs while displaying the editing user interface enables the user to edit the font easily and intuitively. Providing improved control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating and/or interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the system text (e.g., 606b as shown in FIG. 6B) displayed in the user interface (e.g., 606) is displayed in a first color. In some embodiments, after displaying the user interface in which the system text is displayed in the first color, the computer system receives, via the one or more input devices, a second request to edit the user interface. In some embodiments, in response to receiving the second request to edit the user interface, the computer system displays, via the display generation component, an editing user interface (e.g., 620c1) for editing the user interface. In some implementations, while displaying the editing user interface, the computer system receives a third sequence of one or more user inputs (e.g., touch inputs, rotational inputs, and/or press inputs) via the one or more input devices. In some embodiments, in response to receiving the third sequence of one or more user inputs, the computer system selects a second color for the system text. In some embodiments, after selecting the second color for the system text, the computer system displays the user interface, wherein the system text displayed in the user interface is displayed in the second color, which is different from the first color. In some embodiments, updating the user interface to display the system text in the second color involves updating the user interface to cease displaying the system text in the first color. Editing the color in which the system text is displayed in response to receiving the third sequence of one or more user inputs while displaying the editing user interface enables the user to edit the color easily and intuitively. Providing improved control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating and/or interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system (e.g., 600) detects that a predetermined condition has been met (e.g., a predetermined amount of time has elapsed and/or a user input (e.g., a tap or a wrist lift) has been detected). In some embodiments, in response to detecting that the predetermined condition has been met, the computer system displays the user interface (e.g., 606 as shown in FIG. 6B). In some implementations, the user interface is based on a second media item rather than the media item (e.g., as shown in FIG. 6Q). In some implementations, displaying the user interface includes simultaneously displaying the second media item, which includes a second background element and a second foreground element segmented from the second background element based on depth information, and the system text (e.g., 640b as shown in FIG. 6Q). In some embodiments, the system text is displayed in front of the second background element (e.g., 640a) and behind the second foreground element (e.g., 640c) and has content dynamically selected based on the context of the computer system (e.g., as shown in FIG. 6Q). In some embodiments, the predetermined condition is met when the computer system detects an input (e.g., a tap input, rotational input, and/or movement) via the one or more input devices. In some embodiments, the predetermined condition is met when the computer system changes state (e.g., from a low power state to a higher power state, from an off state to an on state, or from a sleep state to an awake state). In some implementations, the second media item is automatically selected. In some implementations, the second media item includes depth data. In some implementations, the second media item includes the second background element and the second foreground element. Conditionally ceasing to display the media item, and instead displaying a user interface based on the second media item, based on whether the predetermined condition has been met causes the operation to be performed by the device without further user input. Performing operations when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping to display a user interface based on updated media items when certain conditions are met), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the computer system (e.g., 600, 660) displays, via the display generation component, a media selection user interface (e.g., 675b) that includes a set of media items (e.g., from a media library of the computer system) (e.g., as shown in FIG. 6S). In some implementations, the computer system receives, via the one or more input devices, a fourth sequence of one or more user inputs (e.g., touch inputs, rotational inputs, and/or press inputs) corresponding to a selection of a third media item. In some implementations, the computer system displays the user interface in response to receiving the fourth sequence of one or more user inputs corresponding to selection of a subset of the set of media items that includes the third media item. In some implementations, the user interface is based on the third media item. In some implementations, the computer system generates a qualified set of media items based at least in part on characteristics of the media items (e.g., the availability of depth information, the shape of the depth information, the presence of a particular type of point of interest (e.g., a face, a pet, and/or a favorite person), and/or the location of the point of interest (e.g., a face, a pet, and/or an important foreground element) in the media item). In some implementations, the set of media items is a subset of a larger set of media items (e.g., a photo album) accessible from (e.g., stored on) the computer system. Displaying the user interface based on the third media item in response to receiving the fourth sequence of one or more user inputs corresponding to selection of the third media item enables the user to easily and intuitively cause the user interface to be displayed based on a selected media item. Providing improved control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user select the media item upon which the user interface is based), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, in accordance with a determination that the plurality of media items includes at least one media item that satisfies a first set of predetermined criteria (e.g., availability of depth information, shape of depth information, presence of a particular type of point of interest (e.g., face, pet, favorite person), and/or location of the point of interest (e.g., face, pet, important foreground element) in the media item), one or more media items that satisfy the first set of predetermined criteria are added to a subset of media items selected for use with the user interface (e.g., 606). In some implementations, in accordance with a determination that the plurality of media items does not contain at least one media item that meets the first set of predetermined criteria, the computer system forgoes adding media items to the subset of media items selected for use with the user interface. In some implementations, determining that the plurality of media items includes at least one media item that meets the first set of criteria includes evaluating the plurality of media items available (e.g., accessible) to the computer system to determine whether a media item of the plurality of media items meets the first set of predetermined criteria. In some implementations, the user interface is displayed after adding the one or more media items that meet the first set of predetermined criteria to the subset of media items. In some implementations, as part of displaying the user interface, the computer system automatically selects (e.g., without user input) a fourth media item from the subset of media items selected for use with the user interface, and after selecting the fourth media item from the subset of media items selected for use with the user interface, the computer system displays the fourth media item. Displaying a user interface that includes a media item that is automatically selected based on a determination about a set of characteristics of the media items provides the user with the media item-based user interface without requiring the user to select a media item in order to view it. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
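A qualification pass of this kind can be pictured as a simple filter over candidate media items. In the sketch below, the MediaItem type, its fields, and the head-room rule are hypothetical stand-ins for whatever metadata and criteria the system actually applies:

    import CoreGraphics

    // Hypothetical per-item metadata; real criteria could also consider the
    // shape of the depth information, pets, favorite people, and so on.
    struct MediaItem {
        let hasDepthInformation: Bool
        let faceRect: CGRect?   // normalized (0-1, top-left origin) face bounds
    }

    // Returns the subset of items eligible for use with the user interface:
    // depth data must be available, and any detected face must leave room
    // above it (here, the top quarter of the frame) for system text.
    func qualifiedSubset(of items: [MediaItem]) -> [MediaItem] {
        items.filter { item in
            guard item.hasDepthInformation else { return false }
            guard let face = item.faceRect else { return true }
            return face.minY > 0.25
        }
    }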
In some implementations, the determination about the set of characteristics of the media item includes determining that displaying the system text (e.g., 606b) behind the foreground element does not result in more than a threshold amount of the system text being obscured. In some implementations, the determining includes determining that the media item includes a portion above the foreground element (e.g., at the top of the media item) that is large enough for the system text to be displayed without being obscured beyond a threshold amount. Displaying the media item-based user interface, wherein the media item is selected based on whether displaying the system text behind a foreground element of the media item would result in the system text being obscured beyond a threshold amount, provides the user with the media item-based user interface without requiring the user to select the media item in order to view the media item-based user interface (e.g., 640), wherein the system text is not overly obscured (e.g., to maximize legibility and/or readability). Performing operations when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by providing a user interface in which readable system text is behind the foreground elements of the media item), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
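One way to express the threshold test is as an overlap ratio between the system text's bounds and the foreground element's bounds. The function below is a sketch under the assumption that both rectangles live in the same normalized coordinate space and that a bounding-box overlap is an acceptable proxy for the segmented foreground mask; the 30% default threshold is illustrative:

    import CoreGraphics

    // True when the fraction of the text's area covered by the foreground
    // element stays at or below the allowed threshold.
    func textSufficientlyVisible(textBounds: CGRect,
                                 foregroundBounds: CGRect,
                                 maxObscuredFraction: CGFloat = 0.3) -> Bool {
        let overlap = textBounds.intersection(foregroundBounds)
        guard !overlap.isNull else { return true }   // no occlusion at all
        let obscured = (overlap.width * overlap.height)
                     / (textBounds.width * textBounds.height)
        return obscured <= maxObscuredFraction
    }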
In some implementations, in accordance with a determination that the fifth media item (e.g., photograph, video, GIF, animation) meets the first set of predetermined criteria, the computer system (e.g., 600) displays a second user interface (e.g., 640 as shown in fig. 6Q) based on the fifth media item (e.g., watch user interface, wake screen, dial, lock screen) via the display generation component. As part of displaying the second user interface, the computer system simultaneously displays a fifth media item including a third background element and a third foreground element segmented from the third background element based on the depth information, and system text (e.g., first time, current date). In some embodiments, the system text is displayed in front of (e.g., visually overlaying or at a location corresponding to a portion of) the third background element and behind (e.g., at least partially visually overlaid by) the third foreground element, and has content dynamically selected based on the third context of the computer system (e.g., 640 as shown in fig. 6Q). In some implementations, in accordance with a determination that the fifth media item does not meet the first set of predetermined criteria, the computer system displays the second user interface via the display generation component. In some implementations, as part of displaying the second user interface, the computer system simultaneously displays a fifth media item including a third background element and a third foreground element segmented from the third background element based on depth information, and system text. In some embodiments, the system text is displayed in front of (e.g., visually overlaying or at a location corresponding to a portion of) the third background element and in front of the third foreground element, and has content dynamically selected based on the third context of the computer system (e.g., as shown in 642 of fig. 6R). Determining whether to display the system text in front of or behind a foreground element of the media item based on predetermined criteria provides a user interface based on the media item, wherein the location of the system text is selected based on the predetermined criteria without requiring the user to select that location. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, in accordance with a determination that the fifth media item meets the first set of predetermined criteria, the computer system (e.g., 600) displays the system text (e.g., 640b as shown in fig. 6Q) in an upper portion (e.g., top) of the second user interface. In some implementations, in accordance with a determination that the fifth media item does not meet the first set of predetermined criteria, the computer system displays the system text (e.g., 642b) in a lower portion (e.g., bottom) of the second user interface (e.g., 642 as shown in fig. 6S). Displaying the system text in an upper or lower portion of the second user interface based on whether the fifth media item meets the first set of predetermined criteria provides the user with a user interface in which the portion where the system text is displayed is automatically determined, without requiring the user to select a location for the system text. Performing operations when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by selecting a preferred portion of the media item in which to display system text), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
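Taken together, the two preceding paragraphs describe a single conditional placement decision. The sketch below captures it as a pure function; the enum and function names are assumptions, and meetsCriteria stands for whatever combination of depth availability and occlusion checks the criteria comprise:

    enum TextDepth { case behindForeground, inFrontOfForeground }
    enum TextRegion { case upper, lower }

    // When the media item qualifies, system text goes in the upper portion,
    // behind the foreground element; otherwise, lower portion and in front.
    func systemTextPlacement(meetsCriteria: Bool) -> (TextDepth, TextRegion) {
        meetsCriteria ? (.behindForeground, .upper)
                      : (.inFrontOfForeground, .lower)
    }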
In some embodiments, the computer system concurrently displays a second complex function block (e.g., 606d2 as shown in fig. 6P) as part of displaying the user interface. In some implementations, the second complex functional block is displayed in front of (e.g., visually overlaying or at a location corresponding to a portion of) the foreground element (e.g., 606 c). In some implementations, complex functional blocks refer to any clock face feature other than hours and minutes for indicating time (e.g., clock hands or hour/minute indications). In some implementations, complex functional blocks provide data obtained from applications. In some embodiments, the complex function block includes an affordance that, when selected, launches the corresponding application. In some implementations, the complex function blocks are displayed at fixed predefined locations on the display. In some implementations, the complex function blocks occupy respective positions (e.g., lower right, lower left, upper right, and/or upper left) at particular areas of the dial. Displaying the second complex function block in front of the foreground element provides improved visual feedback by allowing the user to view the second complex function block without the second complex function block being visually obscured by the foreground element of the media item (which provides visual feedback that the second complex function block may still be selected when the foreground element is displayed). Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
In some implementations, the computer system (e.g., 600) concurrently displays the third complex functional block (e.g., 640d) as part of displaying the user interface (e.g., 640). In some embodiments, the third complex functional block is displayed behind (e.g., at least partially visually overlaid by) the foreground element (e.g., 640c as shown in fig. 6S). In some implementations, complex functional blocks refer to any clock face feature other than hours and minutes for indicating time (e.g., clock hands or hour/minute indications). In some implementations, complex functional blocks provide data obtained from applications. In some embodiments, the complex function block includes an affordance that, when selected, launches the corresponding application. In some implementations, the complex function blocks are displayed at fixed predefined locations on the display. In some implementations, the complex function blocks occupy respective positions (e.g., lower right, lower left, upper right, and/or upper left) at particular areas of the dial. Displaying the third complex function block behind the foreground element provides improved visual feedback by visually emphasizing the foreground element of the media item while maintaining the display of the third complex function block. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
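In a layered composition like the ones sketched earlier, the difference between this paragraph and the preceding one about the second complex function block reduces to insertion order. The following SwiftUI sketch makes that explicit; the type and property names are illustrative, not an actual watch face API:

    import SwiftUI

    // Placing the complication before or after the foreground layer in the
    // ZStack controls whether the foreground subject can overlap it.
    struct FaceWithComplication: View {
        let background: Image
        let foreground: Image
        let complication: AnyView
        let complicationBehindForeground: Bool

        var body: some View {
            ZStack {
                background.resizable().scaledToFill()
                if complicationBehindForeground {
                    complication    // may be partially covered by the subject
                    foreground.resizable().scaledToFill()
                } else {
                    foreground.resizable().scaledToFill()
                    complication    // always fully visible
                }
            }
        }
    }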
It is noted that the details of the process described above with respect to method 700 (e.g., fig. 7) also apply in a similar manner to the methods described below with respect to methods 900, 1100, and 1300. For example, method 700 optionally includes one or more of the features of the various methods described below with reference to method 900. For example, the device may use, as a watch user interface, either a user interface including a time indication based on geographic data as described with reference to fig. 8A-8M or a watch user interface as described with reference to fig. 6A-6U. As another example, a watch user interface as described with reference to fig. 6A-6U may include an hour number updated based on a current time as described with reference to method 1100 and fig. 10A-10W. As another example, method 1300 optionally includes one or more of the features of the various methods described above with reference to method 700. For example, the watch user interfaces of fig. 6A-6U may be created or edited via the process for updating and selecting watch user interfaces as described with reference to fig. 12A-12W. As another example, method 1500 optionally includes one or more of the features of the various methods described above with reference to method 700. For example, the watch user interfaces of fig. 6A-6U may first be edited via a second computer system, as described with reference to fig. 14A-14R. For the sake of brevity, these details are not repeated below.
Fig. 8A-8M illustrate an exemplary user interface for managing a clock face based on geographic data. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 9.
Fig. 8A shows computer system 800 displaying watch user interface 816a via display 802. The computer system 800 includes a rotatable and depressible input mechanism 804. In some embodiments, computer system 800 optionally includes one or more features of device 100, device 300, or device 500. In some embodiments, computer system 800 is a tablet, phone, laptop, desktop, camera, or the like. In some implementations, the inputs described below may optionally be replaced with alternative inputs, such as pressing inputs and/or rotating inputs received via the rotatable and depressible input mechanism 804.
Fig. 8A includes a location indicator 814a that indicates that computer system 800 is located in San Francisco, in the Pacific Time zone. In some embodiments, features of watch user interface 816a correspond to and/or are based on a determination that computer system 800 is located in a particular location and/or a particular time zone, as discussed further below.
The watch user interface 816a includes a plurality of sections, including a portion 820a that includes a circular dial around which location names (e.g., city, country, island, region, etc.) are displayed. The names of the various locations include name 820a1 for Los Angeles, name 820a2 for Dubai, name 820a3 for Beijing, and name 820a4 for Mexico City. The locations within portion 820a at which the location names are displayed, and the orientations in which they are displayed, correspond to geographic data indicating the current location and/or time zone in which computer system 800 is located. At fig. 8A, Los Angeles is displayed at the bottom center of portion 820a based on a determination that computer system 800 is located in San Francisco, in the Pacific Time zone (as indicated by location indicator 814a). At fig. 8A, portion 820a includes location names corresponding to locations representing different time zones. In some embodiments, computer system 800 displays the location name corresponding to the time zone in which computer system 800 is located in the bottom center position of portion 820a (e.g., where name 820a1 (Los Angeles) is located in fig. 8A). In some embodiments, the location name corresponding to the time zone in which computer system 800 is located is different from the actual city in which computer system 800 is located (e.g., Los Angeles represents San Francisco, etc.). In some embodiments, in accordance with a determination that the location of computer system 800 has changed and/or that computer system 800 has moved from a first time zone to a different time zone, computer system 800 updates the locations and/or orientations of the location names displayed within portion 820a.
Watch user interface 816a also includes an indicator 815 that includes a graphical indicator of the location name corresponding to the location of computer system 800. In some embodiments, indicator 815 includes a graphical indicator of the hour number included in portion 820b that corresponds to the current hour in the time zone in which computer system 800 is located. In watch user interface 816a, indicator 815 includes an arrow in the bottom center of portion 820a indicating that the hour number included in portion 820b corresponding to the current hour in Los Angeles is 10 (e.g., the time in Los Angeles is about 10:00 a.m.).
The watch user interface 816a also includes a portion 820b that includes a circular dial containing a plurality of hour numbers corresponding to the hours of the day. In watch user interface 816a, portion 820b includes a plurality of hour numbers ranging from 1 to 24, where each number corresponds to a different one of the 24 hours of a day. In some embodiments, portion 820b includes 12 hour numbers instead of 24 hour numbers. In some embodiments, portion 820b contains duplicates of some hour numbers (e.g., two eights) and/or omits some hour numbers (e.g., no seven) to account for the observance of daylight saving time in different time zones and/or cities or countries. In some embodiments, computer system 800 updates the hour numbers included in portion 820b to omit at least one hour number and/or repeat at least one hour number in accordance with a determination that the date corresponds to a period during which daylight saving time is in effect.
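The geometry of such a dial can be sketched directly: each of the 24 hour numbers gets an angular position, rotated so that the current hour in the selected time zone lands at the bottom center, where indicator 815 points. The convention below (degrees clockwise from the top of the dial) and the function name are assumptions for illustration:

    import Foundation

    // Returns the dial angle, in degrees clockwise from the top, at which a
    // given hour number (1-24) is drawn, with `currentHour` pinned to the
    // bottom center (180 degrees) of the dial.
    func angleForHourNumber(_ hour: Int, currentHour: Int) -> Double {
        precondition((1...24).contains(hour) && (1...24).contains(currentHour))
        let degreesPerHour = 360.0 / 24.0
        let steps = (hour - currentHour + 24) % 24
        return (180.0 + Double(steps) * degreesPerHour)
            .truncatingRemainder(dividingBy: 360.0)
    }

With this convention, the hour twelve steps away from the current hour lands at 0 degrees, the top of the dial, mirroring the layout described for fig. 8A.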
In some embodiments, the relative locations of the location names included in portion 820a and the hour numbers included in portion 820b roughly indicate the time in the location corresponding to the location name displayed adjacent to the hour number. For example, at fig. 8A, the name 820a2 (Dubai) is shown centered on the hour number "22", indicating that the current time in Dubai corresponds to the "22" hour number (e.g., about 10:00 p.m.). The name 820a4 (Mexico City) is shown centered on the hour number "5", indicating that the current time in Mexico City corresponds to the "5" hour number (e.g., approximately 5:00 a.m.).
Watch user interface 816a also includes a portion 820c that includes a circular area of watch user interface 816a containing a time indication 826 that includes analog clock hands, wherein the positions of the analog clock hands represent a current time (e.g., hours, minutes, and/or seconds). The portion 820c also includes a map 824a that includes at least a partial view of an animated map and/or globe. In some implementations, the map 824a includes a view of an animated map and/or globe that includes a representation of the location (e.g., city, country, island, region, etc.) in which computer system 800 is located (e.g., San Francisco, a region corresponding to the Pacific Time zone, etc.). Portion 820c also includes a light-dark boundary 822 that includes a visual and/or graphical animation representing the separation between day and night. In some embodiments, the light-dark boundary 822 is displayed on the map 824a such that it indicates the portion of the animated map and/or globe that is currently at nighttime and/or the portion of the animated map and/or globe that is currently at daytime. In some embodiments, the light-dark boundary 822 updates over time to reflect the passage of time and/or the movement of the earth.
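Where the light-dark boundary falls can be approximated from the time alone. The sketch below estimates the longitude directly beneath the sun from UTC (the sun crosses longitude 0 near 12:00 UTC and moves 15 degrees of longitude per hour); it deliberately ignores solar declination and the equation of time, so it is only a first-order placement of the terminator, and the function name is an assumption:

    import Foundation

    // Approximate subsolar longitude in degrees (-180...180, east positive)
    // from the current UTC time expressed in hours (0.0..<24.0).
    func approximateSubsolarLongitude(utcHours: Double) -> Double {
        var lon = (12.0 - utcHours) * 15.0   // 15 degrees per hour
        if lon > 180 { lon -= 360 }
        if lon < -180 { lon += 360 }
        return lon
    }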
Computer system 800 displays the location names in orientations that make them more readable. For example, in watch user interface 816a, name 820a1 (Los Angeles) is oriented such that the tops of the letters of name 820a1 are displayed closer to time indication 826 than the bottoms of the letters of name 820a1. Similarly, the name 820a4 (Mexico City) is oriented such that the tops of the letters of name 820a4 are displayed closer to time indication 826 than the bottoms of the letters of name 820a4. However, the names 820a2 (Dubai) and 820a3 (Beijing) are displayed such that the bottoms of the letters are displayed closer to time indication 826 than the tops of the letters. In some implementations, the orientation in which a location name is displayed is updated based on geographic data related to the location of computer system 800.
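The readability rule just described amounts to keeping every label upright for the viewer: names on the lower half of the dial are drawn with the tops of their letters toward the center, and names on the upper half with the bottoms of their letters toward the center. A sketch of that decision, under the assumed convention of degrees measured clockwise from the top of the dial:

    import Foundation

    // True when a label at `angle` should have the tops of its letters facing
    // the dial center (lower half of the dial); false when the bottoms should
    // face the center (upper half), so no label ever renders upside down.
    func topOfLettersFacesCenter(atDialAngle angle: Double) -> Bool {
        let a = (angle.truncatingRemainder(dividingBy: 360) + 360)
            .truncatingRemainder(dividingBy: 360)
        return a > 90 && a < 270   // lower half spans 90-270 degrees
    }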
Watch user interface 816a also includes a lock icon 818 that indicates that computer system 800 is currently in a locked state. In some embodiments, features of computer system 800 are limited when computer system 800 is in a locked state. The watch user interface 816a also includes a plurality of complex function blocks, including complex function block 806, complex function block 808, complex function block 810, and complex function block 812. In some embodiments, the complex function blocks (806, 808, 810, 812) include information from applications available on (e.g., installed on) computer system 800. In some implementations, the complex function blocks (806, 808, 810, 812) are updated in accordance with the passage of time to display updated information (e.g., from the applications associated with the complex function blocks). In some implementations, the complex function blocks (806, 808, 810, 812) can be selected to cause computer system 800 to launch an application corresponding to the selected complex function block.
Fig. 8B illustrates computer system 800 at a different time and a different location. Fig. 8B includes a location indicator 814b that indicates that computer system 800 is located in Abu Dhabi, in the Gulf Standard Time zone. At fig. 8B, computer system 800 displays watch user interface 816b. Watch user interface 816b includes portion 820a, in which the location names displayed within portion 820a have been updated. For example, while name 820a1 (Los Angeles) is displayed in watch user interface 816a in the bottom center of portion 820a, name 820a1 is displayed in watch user interface 816b in the top center of portion 820a. While the name 820a2 (Dubai) is displayed in watch user interface 816a in the top center of portion 820a, name 820a2 is displayed in watch user interface 816b in the bottom center of portion 820a. The locations at which name 820a4 (Mexico City) and name 820a3 (Beijing) are displayed are also updated in watch user interface 816b.
The orientations in which some location names are displayed are also updated in watch user interface 816b. In watch user interface 816a, name 820a1 (Los Angeles) is displayed in an orientation such that the tops of the letters of name 820a1 are closer to time indication 826 than the bottoms of the letters of name 820a1. In watch user interface 816b, name 820a1 (Los Angeles) is displayed in an orientation such that the bottoms of the letters of name 820a1 are closer to time indication 826 than the tops of the letters of name 820a1. Similarly, in watch user interface 816a, name 820a2 (Dubai) is displayed in an orientation such that the bottoms of the letters of name 820a2 are closer to time indication 826 than the tops of the letters of name 820a2. In watch user interface 816b, name 820a2 (Dubai) is displayed in an orientation such that the tops of the letters of name 820a2 are closer to time indication 826 than the bottoms of the letters of name 820a2. The orientations in which names 820a3 (Beijing) and 820a4 (Mexico City) are displayed have also been updated.
Watch user interface 816b also includes an indicator 815 that includes a graphical indicator of the location name corresponding to geographic data related to the location of the computer system. Indicator 815 also includes a graphical indicator of the hour number included in portion 820b that corresponds to the current hour in the time zone in which computer system 800 is located. In watch user interface 816b, indicator 815 includes an arrow at the bottom center of portion 820a indicating that the hour number included in portion 820b corresponding to the current hour in Dubai is 12.
Watch user interface 816b also includes a portion 820b that includes a circular dial containing a plurality of hour numbers corresponding to the hours of the day. In watch user interface 816b, portion 820b includes a plurality of hour numbers ranging from 1 to 24, where each number corresponds to a different one of the 24 hours of a day.
Watch user interface 816b also includes a portion 820c that includes a circular area of watch user interface 816b containing a time indication 826 that includes analog clock hands, wherein the positions of the analog clock hands represent a current time (e.g., hours, minutes, and/or seconds). The portion 820c also includes a map 824a that includes at least a partial view of an animated map and/or globe. In some embodiments, map 824a includes a view of an animated map and/or globe that includes a representation of the location (e.g., city, country, island, region, etc.) in which computer system 800 is located (e.g., Dubai, a region corresponding to the Gulf Standard Time zone, etc.). Portion 820c also includes a light-dark boundary 822 that includes a visual and/or graphical animation representing the separation between day and night. In some embodiments, the light-dark boundary 822 is displayed on the map 824a such that it indicates the portion of the animated map and/or globe that is currently at nighttime and/or the portion of the animated map and/or globe that is currently at daytime.
Watch user interface 816b also includes a lock icon 818 that indicates that computer system 800 is currently in a locked state. In some embodiments, features of computer system 800 are limited when computer system 800 is in a locked state. The watch user interface 816b also includes a plurality of complex function blocks, including complex function block 806, complex function block 808, complex function block 810, and complex function block 812. In some embodiments, the complex function blocks (806, 808, 810, 812) include information from applications available on (e.g., installed on) computer system 800. In some embodiments, the complex function blocks (806, 808, 810, 812) are updated in accordance with the passage of time to display updated information. In some implementations, the complex function blocks (806, 808, 810, 812) can be selected to cause computer system 800 to launch an application corresponding to the selected complex function block.
Fig. 8C shows computer system 800 at a different time and a different location. Fig. 8C includes a location indicator 814c that indicates that computer system 800 is located in Ireland, in the Irish Standard Time zone. At fig. 8C, computer system 800 displays watch user interface 816c. Watch user interface 816c includes portion 820a, in which the location names displayed within portion 820a have been updated. For example, while name 820a1 (Los Angeles) is displayed in watch user interface 816a in the bottom center of portion 820a and in watch user interface 816b in the top center of portion 820a, name 820a1 is displayed in watch user interface 816c on the left side of portion 820a. While the name 820a2 (Dubai) is displayed in watch user interface 816a in the top center of portion 820a and in watch user interface 816b in the bottom center of portion 820a, name 820a2 is displayed in watch user interface 816c on the right side of portion 820a. The locations of names 820a4 (Mexico City) and 820a3 (Beijing) have also been updated in watch user interface 816c.
The orientations in which some location names are displayed are also updated in watch user interface 816c. In watch user interface 816a, name 820a1 (Los Angeles) is displayed in an orientation such that the tops of the letters of name 820a1 are closer to time indication 826 than the bottoms of the letters of name 820a1, and in watch user interface 816b, name 820a1 (Los Angeles) is displayed in an orientation such that the bottoms of the letters of name 820a1 are closer to time indication 826 than the tops of the letters of name 820a1. In watch user interface 816a, name 820a2 (Dubai) is displayed in an orientation such that the bottoms of the letters of name 820a2 are closer to time indication 826 than the tops of the letters of name 820a2, and in watch user interface 816b, name 820a2 (Dubai) is displayed in an orientation such that the tops of the letters of name 820a2 are closer to time indication 826 than the bottoms of the letters of name 820a2. In watch user interface 816c, both name 820a1 (Los Angeles) and name 820a2 (Dubai) are displayed in the same orientation, such that the bottoms of the letters are closer to time indication 826 than the tops of the letters. The orientation of some location names is maintained between time zone transitions. For example, in both watch user interfaces 816b and 816c, the name 820a4 (Mexico City) is displayed in an orientation such that the bottoms of the letters are closer to time indication 826. In some implementations, computer system 800 flips the orientation in which a given location name is displayed in accordance with a determination that the location within portion 820a at which the given location name is displayed has changed by more than a threshold amount. In some implementations, computer system 800 flips the orientation in which a given name is displayed in accordance with a determination that the angle at which the given city name will be displayed based on the updated time zone has changed by more than a threshold amount.
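Under the threshold rule just described, a small shift in where a name lands on the dial leaves its orientation alone, and only a large shift flips it. A sketch of that comparison, where angles are in degrees, the shortest angular distance is used, and the 90-degree default threshold is an assumption:

    import Foundation

    // True when the change in a label's dial angle between the old and new
    // time zones exceeds the threshold, so its orientation should be flipped.
    func shouldFlipOrientation(oldAngle: Double, newAngle: Double,
                               threshold: Double = 90) -> Bool {
        var delta = abs(newAngle - oldAngle).truncatingRemainder(dividingBy: 360)
        if delta > 180 { delta = 360 - delta }   // shortest angular distance
        return delta > threshold
    }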
Watch user interface 816c also includes an indicator 815 that includes a graphical indicator of the location name corresponding to the location of computer system 800. Indicator 815 also includes a graphical indicator of the hour number included in portion 820b that corresponds to the current hour in the time zone in which computer system 800 is located. In watch user interface 816c, indicator 815 includes an arrow at the bottom center of portion 820a indicating that the hour number included in portion 820b corresponding to the current hour in London is 2 (e.g., the time in London is about 2:00 a.m.).
The watch user interface 816c also includes a portion 820b that includes a circular dial containing a plurality of hour numbers corresponding to the hours of the day. In watch user interface 816c, portion 820b includes a plurality of hour numbers ranging from 1 to 24, where each number corresponds to a different one of the 24 hours of a day.
Watch user interface 816c also includes a portion 820c that includes a circular area of watch user interface 816c containing a time indication 826 that includes analog clock hands, wherein the positions of the analog clock hands represent a current time (e.g., hours, minutes, and/or seconds). The portion 820c also includes a map 824a that includes at least a partial view of an animated map and/or globe. In some implementations, map 824a includes a view of an animated map and/or globe that includes a representation of the location (e.g., city, country, island, region, etc.) in which computer system 800 is located (e.g., Ireland, a region corresponding to the Irish Standard Time zone, etc.). Portion 820c also includes a light-dark boundary 822 that includes a visual and/or graphical animation representing the separation between day and night. In some embodiments, the light-dark boundary 822 is displayed on the map 824a such that it indicates the portion of the animated map and/or globe that is currently at nighttime and/or the portion of the animated map and/or globe that is currently at daytime.
Watch user interface 816c also includes a lock icon 818 that indicates that computer system 800 is currently in a locked state. In some embodiments, features of computer system 800 are limited when computer system 800 is in a locked state. The watch user interface 816c also includes a plurality of complex function blocks, including complex function block 806, complex function block 808, complex function block 810, and complex function block 812. In some embodiments, the complex function blocks (806, 808, 810, 812) include information from applications available on (e.g., installed on) computer system 800. In some embodiments, the complex function blocks (806, 808, 810, 812) are updated in accordance with the passage of time to display updated information. In some implementations, the complex function blocks (806, 808, 810, 812) can be selected to cause computer system 800 to launch an application corresponding to the selected complex function block.
Fig. 8D illustrates that, in some embodiments, the computer system displays portion 820a rotating through different time zones in response to rotational input received via the rotatable and depressible input mechanism 804. In some embodiments, the rotational input can be used to change the currently selected time zone, such that the watch user interface is displayed according to a first time zone rather than a second time zone. Fig. 8D illustrates computer system 800 displaying watch user interface 816d, which is displayed according to the selection of the Pacific Standard Time zone as indicated by location indication 814d. While displaying watch user interface 816d, computer system 800 receives rotational input 860a via rotatable and depressible input mechanism 804, and in response to receiving rotational input 860a, computer system 800 displays watch user interface 816e, which is an updated version of watch user interface 816d displayed according to the selection of the Gulf Standard Time zone as indicated by location indication 814e. While displaying watch user interface 816e, computer system 800 receives rotational input 860b via rotatable and depressible input mechanism 804, and in response to receiving rotational input 860b, computer system 800 displays watch user interface 816f, which is an updated version of watch user interface 816e displayed according to the selection of the Irish Standard Time zone as indicated by location indication 814f. In some embodiments, while displaying watch user interface 816f, computer system 800 receives rotational input 860c via rotatable and depressible input mechanism 804, and in response to receiving rotational input 860c, computer system 800 displays watch user interface 816d, which is an updated version of watch user interface 816f displayed according to the selection of the Pacific Standard Time zone as indicated by location indication 814d. In some implementations, the selected time zone continues to be updated in response to rotational inputs received via the rotatable and depressible input mechanism 804. In some embodiments, computer system 800 cycles through a limited number of time zone options in a set order.
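Cycling through the available zones in a set order in response to crown rotation can be sketched as a wrapping index into an ordered list. The selector type and the time-zone identifiers below (matching the three zones shown in fig. 8D) are illustrative assumptions:

    import Foundation

    struct TimeZoneSelector {
        // Ordered options cycled by rotational input; wraps at both ends.
        private let options = ["America/Los_Angeles", "Asia/Dubai", "Europe/Dublin"]
        private var index = 0

        var selected: String { options[index] }

        // `steps` is positive for one crown direction, negative for the other.
        mutating func rotate(by steps: Int) {
            let count = options.count
            index = ((index + steps) % count + count) % count
        }
    }

With this wrapping arithmetic, three steps in the same direction return to the starting zone, mirroring the loop shown across watch user interfaces 816d, 816e, and 816f.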
At fig. 8D, computer system 800 displays the watch user interfaces (e.g., 816d, 816e, 816f) without lock icon 818, indicating that computer system 800 is not in a locked state. In some embodiments, computer system 800 transitions from the locked state to the unlocked state in response to a sequence of user inputs received via one or more input mechanisms in communication with computer system 800. In some implementations, computer system 800 transitions from the locked state to the unlocked state in response to a plurality of tap inputs received at computer system 800 corresponding to entry of a passcode. In some implementations, computer system 800 transitions from the locked state to the unlocked state in response to a press input received on the rotatable and depressible input mechanism 804. In some embodiments, computer system 800 transitions from the locked state to the unlocked state in response to a sequence of one or more user inputs received via a computer system other than computer system 800, such as a paired phone, in communication with computer system 800. In some implementations, computer system 800 transitions from the locked state to the unlocked state in response to a wrist-lift gesture. In some implementations, in response to receiving a rotational input via the rotatable and depressible input mechanism 804 while computer system 800 is in a locked state, computer system 800 does not update the watch user interface displayed via display 802 to correspond to a different time zone (in contrast to the behavior shown in fig. 8D).
Fig. 8E shows computer system 800 displaying watch user interface 816g, which matches watch user interface 816a. Watch user interface 816g includes a portion 820c that includes a circular area of watch user interface 816g containing a time indication 826 that includes analog clock hands, wherein the positions of the analog clock hands represent a current time (e.g., hours, minutes, and/or seconds). The portion 820c also includes a map 824a that includes at least a partial view of an animated map and/or globe. In some implementations, the map 824a includes a view of an animated map and/or globe that includes a representation of the location (e.g., city, country, island, region, etc.) in which computer system 800 is located (e.g., San Francisco, a region corresponding to the Pacific Standard Time zone, etc.). Portion 820c also includes a light-dark boundary 822 that includes a visual and/or graphical animation representing the separation between day and night. In some embodiments, the light-dark boundary 822 is displayed on the map 824a such that it indicates the portion of the animated map and/or globe that is currently at nighttime and/or the portion of the animated map and/or globe that is currently at daytime. At fig. 8E, computer system 800 detects an input 850a (e.g., a tap input) on map 824a.
At fig. 8F, in response to receiving input 850a, computer system 800 displays watch user interface 816h, which is an updated version of watch user interface 816g in which map 824a has been replaced with map 824b. In some implementations, map 824b is a more zoomed-in version of map 824a. In some implementations, map 824b includes a city-level view of the location corresponding to the location name that indicator 815 is currently indicating. At fig. 8F, indicator 815 indicates location name 820a1 (Los Angeles). Thus, map 824b includes a city view of at least a portion of a map of Los Angeles. In some embodiments, transitioning from displaying map 824a as shown in watch user interface 816g to displaying map 824b as shown in watch user interface 816h includes displaying an animation, wherein the animation depicts the globe rotating and/or zooming in as it transitions from map 824a to map 824b.
Fig. 8G illustrates computer system 800 displaying watch user interface 816i, which matches watch user interface 816a. Specifically, like watch user interface 816a, watch user interface 816i includes a portion 820c that includes a circular area of watch user interface 816i containing a time indication 826 that includes analog clock hands, wherein the positions of the analog clock hands represent a current time (e.g., hours, minutes, and/or seconds). The portion 820c also includes a map 824a that includes at least a partial view of an animated map and/or globe. In some embodiments, map 824a includes a view of an animated map and/or globe that includes a representation of the location (e.g., city, country, island, region, etc.) in which computer system 800 is located. Portion 820c also includes a light-dark boundary 822 that includes a visual and/or graphical animation representing the separation between day and night. In some embodiments, the light-dark boundary 822 is displayed on the map 824a such that it indicates the portion of the animated map and/or globe that is currently at nighttime and/or the portion of the animated map and/or globe that is currently at daytime. At fig. 8G, computer system 800 detects an input 850b (e.g., a long press input) on watch user interface 816i.
At fig. 8H, in response to receiving input 850b, computer system 800 displays selection user interface 842a. Selection user interface 842a is a user interface for selecting a watch user interface to be displayed by computer system 800. Selection user interface 842a includes representation 844b1, which is a representation of watch user interface 816i and includes various features of watch user interface 816i. In some embodiments, representation 844b1 is a static representation of watch user interface 816i and includes an indication of a time other than the current time and/or complex function blocks containing information other than real-time updated data.
Selection user interface 842a also includes partial views of representation 844a and representation 844b corresponding to a watch user interface other than watch user interface 816 i. Selection user interface 842a also includes a shared user-interactive graphical user interface object 825 that, when selected, causes computer system 800 to display a user interface related to transmitting and/or sharing information about watch user interface 816i to another device (e.g., another computer system). Selection user interface 842a also includes an editing user-interactive graphical user interface object 828 that, when selected, causes computer system 800 to display an editing user interface for editing aspects of watch user interface 816 i. The selection user interface 842a also includes a dial indicator 846 that includes a visual and/or textual indication of the name of the watch user interface currently centered in the selection user interface 842a. At fig. 8H, dial indicator 846 indicates that currently indicated watch user interface 816i, which is represented by representation 844b1 in selection user interface 842a, is titled "world clock". At fig. 8H, the computer system detects an input 850c (e.g., tap input) on the edit user interactive graphical user interface object 828.
At FIG. 8I, in response to detecting input 850c, computer system 800 displays editing user interface 848a1. The editing user interface 848a1 includes an aspect indicator 854 that includes a visual and/or textual representation of an aspect of the watch user interface 816i that is currently selected for editing. At fig. 8I, an aspect indicator 854 indicates that the aspect of the watch user interface 816I currently selected for editing is a "style".
The editing user interface 848a1 also includes a selection indicator 852a that includes a visual and/or textual representation of the currently selected option of the editable aspect of the watch user interface 816 i. At fig. 8I, selection indicator 852a indicates that the currently selected "style" option of watch user interface 816I is "analog".
The editing user interface 848a1 also includes a position indicator 856a. The position indicator 856a includes a graphical indication of the number of selectable options for the editable aspect of the watch user interface 816i currently being edited, and of the position of the currently selected option in the list of selectable options. For example, the position indicator 856a indicates that the currently selected option "analog" for the "style" aspect of the watch user interface 816i is located at the top of a list of at least two possible options for the "style" aspect of the watch user interface 816i.
Editing user interface 848a1 also includes representation 844d, which indicates that the watch user interface currently being edited is the watch user interface corresponding to representation 844d, namely watch user interface 816i. The representation 844d corresponds to watch user interface 816i and includes features of watch user interface 816i, including a portion 820c that includes a circular region of watch user interface 816i containing a time indication 826 that includes analog clock hands, wherein the positions of the analog clock hands represent a time (e.g., hours, minutes, and/or seconds). In some implementations, in representation 844d, the time indicated by time indication 826 (e.g., by the positions of the analog clock hands) is a fixed time and/or is different from the current time. At fig. 8I, computer system 800 detects rotational input 860d via rotatable and depressible input mechanism 804.
At fig. 8J, in response to receiving the rotation input 860d, the computer system 800 displays an editing user interface 848a2 that is an edited version of the editing user interface 848a1, wherein the representation 844d no longer includes the time indication 826 but includes the time indication 858, which includes a digital indication of time without an analog clock hand.
The editing user interface 848a2 also includes a selection indicator 852b that includes a visual and/or textual representation of the currently selected option of the editable aspect of the watch user interface 816i. At fig. 8J, selection indicator 852b indicates that the currently selected "style" option of watch user interface 816i is "digital".
The editing user interface 848a2 also includes a position indicator 856b. The position indicator 856b is an updated version of the position indicator 856a, wherein the change in the position indicator 856b relative to the position indicator 856a indicates that the currently selected "style" option has changed (e.g., from "analog" to "digital"). At fig. 8J, computer system 800 receives a press input 870a via the rotatable and depressible input mechanism 804.
At fig. 8K, in response to receiving press input 870a, computer system 800 displays selection user interface 842b. Fig. 8K shows an edited representation of watch user interface 816i in a selection user interface. Selection user interface 842b matches selection user interface 842a, except that representation 844b1 has been replaced with representation 844b2. The representation 844b2 includes a time indication 858 (e.g., a digital indication of time) instead of time indication 826, which includes analog clock hands. At fig. 8K, computer system 800 detects a press input 870b via the rotatable and depressible input mechanism 804.
At fig. 8L, in response to detecting press input 870b, computer system 800 displays watch user interface 816j. Watch user interface 816j matches watch user interface 816i except that watch user interface 816j has a time indication 858 including a digital indication of time in portion 820c instead of time indication 826 including an analog clock hand.
At fig. 8M, computer system 800 displays a watch user interface 816k that includes a time indication 826 in which the analog clock hands are displayed extending beyond the edge of portion 820c. In fig. 8M, the analog clock hands of time indication 826 are shown extending to the edge of portion 820a. In some embodiments, the analog clock hands are shown extending farther or less far than shown in fig. 8M. In some implementations, the analog clock hands at least partially obscure at least one hour number contained within portion 820b. In some implementations, the analog clock hands at least partially obscure at least one location name within portion 820a. In some implementations, the length of the clock hands included in time indication 826 is an editable aspect of the watch user interface.
Fig. 9 is a flow chart illustrating a method for managing a clock face based on geographic data using a computer system, in accordance with some embodiments. Method 900 is performed at a computer system (e.g., 800) (e.g., a smart watch, wearable electronic device, smart phone, desktop computer, laptop computer, or tablet computer) in communication with a display generation component (e.g., 802) and one or more input devices (e.g., a display controller, a touch-sensitive display system). In some implementations, the computer system is in communication with one or more input devices (e.g., a rotatable input mechanism, a touch-sensitive surface). Some operations in method 900 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 900 provides an intuitive way for managing a clock face based on geographic data. The method reduces the cognitive burden on a user to manage the clock face based on geographic data, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling users to manage the clock face based on geographic data faster and more efficiently saves power and increases the time between battery charges.
The computer system receives (902), via one or more input devices, a request (e.g., tap input, swipe, wrist lift, press input) to display a clock face (e.g., 816 a).
In response to receiving a request to display a clock face, a computer system (e.g., 800) displays (904), via a display generation component (e.g., 802), a clock face (e.g., 816 a) that includes names of one or more different cities (e.g., 820a1, 820a2, 820a3, and 820a4 as shown in fig. 8A). Displaying the clock face includes simultaneously displaying a current time indication (e.g., 826 as shown in fig. 8A) of a current time zone associated with the computer system (906) and names of one or more different cities (e.g., around at least a portion of the current time indication of the current time zone) (908). In some embodiments, the current time indication is continuously or periodically updated over time to reflect the time of day (e.g., time in the current time zone). In some embodiments, the current time indication is coordinated with and/or is intended to reflect a coordinated universal time with an offset based on the currently selected time zone.
The one or more different cities include a first city (e.g., 820a5 as shown in fig. 8A), and displaying the names of the one or more cities includes displaying the first city name, wherein: in accordance with a determination (910) that the computer system is associated with a first time zone (e.g., 814a) (e.g., the current time zone is the first time zone), the first city name is displayed in text at a first location in the clock face (e.g., 820a5 as shown in fig. 8A), the text oriented such that the bottoms of the letters in the first city name are closer to the current time indication (e.g., 826 as shown in fig. 8A) than the tops of the letters in the first city name; and in accordance with a determination (912) that the computer system is associated with a second time zone (e.g., 814b) different from the first time zone (e.g., the current time zone is the second time zone), the first city name is displayed (e.g., 820a5 as shown in fig. 8B) in text at a second location in the clock face, the text oriented such that the tops of the letters in the first city name are closer to the current time indication than the bottoms of the letters in the first city name. In some embodiments, the clock face includes at least one complex function block (e.g., 812 as shown in fig. 8A). In some implementations, complex function blocks refer to any clock face feature other than the hours and minutes used to indicate time (e.g., clock hands or hour/minute indications). In some implementations, complex function blocks provide data obtained from applications. In some embodiments, a complex function block includes an affordance that, when selected, launches the corresponding application. In some implementations, complex function blocks are displayed at fixed predefined locations on the display. In some implementations, complex function blocks occupy respective positions (e.g., lower right, lower left, upper right, and/or upper left) at particular areas of the dial. In some implementations, complex function blocks can be edited (e.g., to display data corresponding to different applications available on the computer system). Conditionally displaying the city name in a first orientation at a first position (e.g., 820a5 as shown in fig. 8A) or in a second orientation at a second position (e.g., 820a2 as shown in fig. 8B), based on whether the computer system is associated with the first time zone (e.g., 814a) or the second time zone (e.g., 814b), provides the user with relevant information about the context of the computer system without requiring the user to provide further input, and improves the legibility of the city names by keeping them right side up rather than rotating them around the dial such that they are displayed upside down. Performing operations when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine whether the first city name represents the current time zone associated with the computer system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently. Further, selecting the orientation of the text for a city name based on determining that the computer system is associated with a particular time zone reduces the number of inputs required to display the city name in that orientation by eliminating the need for the user to manually select a text orientation for different city names.
Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the one or more cities include a second city (e.g., 820a6 as shown in fig. 8A), and displaying the names of the one or more cities includes simultaneously displaying the second city name (e.g., 820a6 as shown in fig. 8A) and the first city name (e.g., 820a5 as shown in fig. 8A), wherein, in accordance with a determination that the computer system (e.g., 800) is associated with the first time zone (e.g., 814a) (e.g., the current time zone is the first time zone), the second city name (e.g., 820a6 as shown in fig. 8A) is displayed in text at a third location in the clock face, the text oriented such that the tops of the letters in the second city name are closer to the current time indication (e.g., 826 as shown in fig. 8A) than the bottoms of the letters in the second city name; and, in accordance with a determination that the computer system is associated with the second time zone (e.g., 814b) different from the first time zone (e.g., the current time zone is the second time zone), the second city name (e.g., 820a6 as shown in fig. 8B) is displayed in text at a fourth location in the clock face, the text oriented such that the bottoms of the letters in the second city name are closer to the current time indication than the tops of the letters in the second city name. Conditionally displaying the second city name in the first orientation at the third position or in the second orientation at the fourth position, based on whether the computer system is associated with the first time zone or the second time zone, provides the user with relevant information about the context of the computer system without requiring the user to provide further input. Performing operations when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine whether the second city name represents the current time zone associated with the computer system), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
In some embodiments, the one or more cities include a third city (e.g., 820a3 as shown in fig. 8A), and as part of displaying the names of the one or more cities, the computer system (e.g., 800) simultaneously displays the third city name (e.g., 820a3 as shown in fig. 8A), the first city name (e.g., 820a5 as shown in fig. 8A), and the second city name (e.g., 820a6 as shown in fig. 8A). In some embodiments, in accordance with a determination that the computer system is associated with the first time zone (e.g., 814a) (e.g., the current time zone is the first time zone), the computer system displays the third city name in text at a fifth location in the clock face (e.g., 820a3 as shown in fig. 8A), the text oriented such that the bottoms of the letters in the third city name are closer to the current time indication (e.g., 826 as shown in fig. 8A) than the tops of the letters in the third city name. In some embodiments, in accordance with a determination that the computer system is associated with the second time zone (e.g., 814b) different from the first time zone (e.g., the current time zone is the second time zone), the computer system displays the third city name (e.g., 820a3 as shown in fig. 8B) in text at a sixth location in the clock face, the text oriented such that the tops of the letters in the third city name are closer to the current time indication than the bottoms of the letters in the third city name. Conditionally displaying the third city name in the first orientation at the fifth position or in the second orientation at the sixth position, based on whether the computer system is associated with the first time zone or the second time zone, provides the user with relevant information about the context of the computer system without requiring the user to provide further input. Performing operations when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine whether the third city name represents the current time zone associated with the computer system), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
In some embodiments, the one or more cities include a fourth city (e.g., 820a2 as shown in fig. 8A), and the computer system (e.g., 800) concurrently displays the fourth city name (e.g., 820a2 as shown in fig. 8A), the first city name (e.g., 820a5 as shown in fig. 8A), the second city name (e.g., 820a6 as shown in fig. 8A), and the third city name (e.g., 820a3 as shown in fig. 8A) as part of displaying the names of the one or more cities. In some embodiments, in accordance with a determination that the computer system (e.g., 800) is associated with the first time zone (e.g., 814a) (e.g., the current time zone is the first time zone), the computer system displays the fourth city name in text at a seventh location in the clock face (e.g., 820a2 as shown in fig. 8A), the text oriented such that the bottoms of the letters in the fourth city name are closer to the current time indication (e.g., 826 as shown in fig. 8A) than the tops of the letters in the fourth city name. In some embodiments, in accordance with a determination that the computer system is associated with a second time zone (e.g., 814b) different from the first time zone (e.g., the current time zone is the second time zone), the computer system displays the fourth city name (e.g., 820a2 as shown in fig. 8B) in text at an eighth location in the clock face, the text oriented such that the tops of the letters in the fourth city name are closer to the current time indication than the bottoms of the letters in the fourth city name. Conditionally displaying the fourth city name in the first orientation at the seventh location or in the second orientation at the eighth location based on whether the computer system is associated with the first time zone or the second time zone provides the user with relevant information about the context of the computer system without requiring the user to provide further input. Performing operations when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine whether the fourth city name represents the current time zone associated with the computer system), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
In some embodiments, in accordance with a determination that the computer system (e.g., 800) is associated with a third time zone (e.g., 814c) that is different from the first time zone and the second time zone (e.g., the current time zone is the third time zone), the computer system displays the first city name in text (e.g., 820a5 as shown in fig. 8C) oriented such that the tops of the letters in the first city name are closer to the current time indication (e.g., 826 as shown in fig. 8C) than the bottoms of the letters in the first city name. In some embodiments, in accordance with a determination that the computer system is associated with the third time zone (e.g., 814c) (e.g., the current time zone is the third time zone), the computer system displays the second city name in text (e.g., 820a6 as shown in fig. 8C) oriented such that the tops of the letters in the second city name are closer to the current time indication than the bottoms of the letters in the second city name. Conditionally displaying the first city name in the same orientation as the second city name provides the user with relevant information about the context of the computer system without requiring the user to provide further input. Performing operations when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by displaying information about which city corresponds to which time zone), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the one or more cities include a third city (e.g., 820a3 as shown in fig. 8A), and as part of displaying the names of the one or more cities, the computer system (e.g., 800) simultaneously displays the third city name (e.g., 820a3 as shown in fig. 8A), the first city name (e.g., 820a5 as shown in fig. 8A), and the second city name (e.g., 820a6 as shown in fig. 8A). In some embodiments, in accordance with a determination that the computer system is associated with the first time zone (e.g., 814a) (e.g., the current time zone is the first time zone), the computer system displays the third city name in text at a fifth location in the clock face (e.g., 820a3 as shown in fig. 8A), the text oriented such that the bottoms of the letters in the third city name are closer to the current time indication (e.g., 826 as shown in fig. 8A) than the tops of the letters in the third city name. In some embodiments, in accordance with a determination that the computer system is associated with a second time zone (e.g., 814b) different from the first time zone (e.g., the current time zone is the second time zone), the computer system displays the third city name (e.g., 820a3 as shown in fig. 8B) in text at a sixth location in the clock face, the text oriented such that the tops of the letters in the third city name are closer to the current time indication than the bottoms of the letters in the third city name. In some embodiments, in accordance with a determination that the computer system is associated with the third time zone (e.g., 814c) (e.g., the current time zone is the third time zone), the computer system displays the third city name in text (e.g., 820a3 as shown in fig. 8C) oriented such that the bottoms of the letters in the third city name are closer to the current time indication than the tops of the letters in the third city name.
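The orientation behavior described above can be summarized as a function of where a city name sits on the ring: as the ring rotates with a time zone change, a name that crosses from one half of the dial to the other flips so that it remains readable. The following is a minimal sketch in Swift, assuming an angle measured clockwise from the 12 o'clock position and assuming that names on the upper half are drawn with letter bottoms toward the current time indication; neither convention is specified in the text.

```swift
import Foundation

/// Orientation of a curved city-name label relative to the dial center.
enum LabelOrientation {
    case topTowardCenter     // letter tops closer to the current time indication
    case bottomTowardCenter  // letter bottoms closer to the current time indication
}

/// Picks a label orientation from the label's angular position on the ring.
/// `angle` is in degrees, measured clockwise from the 12 o'clock position.
func orientation(forLabelAt angle: Double) -> LabelOrientation {
    // Normalize into [0, 360).
    let a = (angle.truncatingRemainder(dividingBy: 360) + 360)
        .truncatingRemainder(dividingBy: 360)
    // Upper half of the dial: 270°..<360° and 0°..<90°.
    let isUpperHalf = a < 90 || a >= 270
    return isUpperHalf ? .bottomTowardCenter : .topTowardCenter
}
```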
In some embodiments, while the current time zone associated with the computer system is maintained, the orientation in which the names of the one or more different cities are displayed is maintained (e.g., the orientation of name 820a4 is maintained such that the bottoms of the letters are closer to the current time indication than the tops of the letters, as shown in figs. 8B-8C). In some embodiments, the computer system forgoes changing the orientation in which the names of the one or more different cities are displayed as long as the computer system remains in the same time zone. Maintaining the orientation in which the names of the one or more different cities are displayed while the current time zone associated with the computer system is maintained provides visual feedback to the user that the current time zone associated with the computer system has not changed. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the first location in the clock face at which the first city name (e.g., 820a5 as shown in fig. 8A) is displayed indicates the current time in the first city (e.g., the current time in Moscow) relative to the current time in the current time zone associated with the computer system (e.g., 814a) (e.g., relative to the current time in the time zone associated with the first city). In some embodiments, the current time in the time zone associated with the first city (e.g., 820a5 as shown in fig. 8A) is different from the current time in the time zone associated with the computer system (e.g., 826 as shown in fig. 8A). Displaying the first city name at a first location in the clock face that indicates the current time in the first city relative to the current time in the current time zone associated with the computer system provides visual feedback to the user regarding the relative time between the current time in the first city and the current time in the current time zone associated with the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the ninth location in the clock face (e.g., 816a) at which the fifth city name (e.g., 820a4 as shown in fig. 8A) is displayed indicates the current time in the fifth city (e.g., the current time in the time zone associated with that city (e.g., Mexico City)) relative to the current time in the first city (e.g., 820a5 as shown in fig. 8A) and relative to the current time in the current time zone associated with the computer system (e.g., 814a) (e.g., as indicated by 826 shown in fig. 8A). Displaying the fifth city name, wherein the ninth location in the clock face at which the fifth city name is displayed indicates the current time in the fifth city relative to the current time in the current time zone associated with the computer system, provides visual feedback to the user regarding the relative time between the current time in the fifth city and the current time in the current time zone associated with the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
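One way to realize "location indicates relative time" is to place each city name at an angle proportional to its offset from the wearer's time zone. The sketch below assumes 15° of arc per hour of offset and a zero angle for the wearer's own zone; both are illustrative choices, not details taken from the text.

```swift
import Foundation

/// Angle (in degrees) around a 24-hour ring for a city's name, proportional
/// to the city's offset from the wearer's current time zone.
func labelAngle(cityOffsetSeconds: Int, localOffsetSeconds: Int) -> Double {
    let deltaHours = Double(cityOffsetSeconds - localOffsetSeconds) / 3600
    return deltaHours * (360.0 / 24.0)  // 15° per hour of offset
}

// Example: a UTC+3 city seen from a UTC-8 zone sits 165° around the ring.
let moscowFromSanFrancisco = labelAngle(cityOffsetSeconds: 3 * 3600,
                                        localOffsetSeconds: -8 * 3600)
```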
In some embodiments, the clock face (e.g., 810a) includes an indicator of the sunrise time (e.g., 822 as shown in fig. 8A) in the current time zone associated with the computer system (e.g., a visual element or text element representing the sunrise time). In some embodiments, the clock face includes an indicator of the sunset time (e.g., 822 as shown in fig. 8A) in the current time zone associated with the computer system (e.g., a visual element or text element representing the sunset time). Simultaneously displaying indicators of the sunrise time and the sunset time along with the current time indication for the current time zone (e.g., 814a) associated with the computer system provides visual feedback regarding various related times corresponding to that time zone and enables the user to quickly and efficiently discern the sunrise time and the sunset time in addition to the current time. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the sunrise time and the sunset time vary over the course of a year. In some embodiments, the graphical indicator of the sunrise time (e.g., 822 as shown in fig. 8A) and the graphical indicator of the sunset time (e.g., 822 as shown in fig. 8A) are updated (e.g., automatically by the computer system) to indicate the sunrise/sunset times of the current day. In some embodiments, the graphical indicator of the sunrise time and the graphical indicator of the sunset time are updated based on data retrieved from a remote computer (e.g., a remote server or software update server). Displaying indicators of the sunrise time and the sunset time that vary over the course of the year, together with the current time indication for the current time zone associated with the computer system, enables the user to quickly and efficiently discern the sunrise time and the sunset time, in addition to the current time, throughout the year. Because the sunrise time and the sunset time change over the year, the indicators naturally shift over the year and provide a visual indication of the latest sunrise and sunset times. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the clock face comprises an analog dial (e.g., 820c as shown in fig. 8A) (e.g., a circular dial with hour marks evenly spaced around the circumference of the circle, representing 24 hours (e.g., instead of 12 hours)). In some embodiments, an indicator of the sunrise time (e.g., 822 in fig. 8A) and an indicator of the sunset time (e.g., 822 in fig. 8A) are displayed within the analog dial. Simultaneously displaying indicators of the sunrise time and the sunset time in the analog dial while displaying the current time indication provides visual feedback to the user regarding the sunrise time and the sunset time, enabling the user to quickly and efficiently discern the sunrise, the sunset, and the current time. Displaying the indicator of the sunrise time and the indicator of the sunset time within the analog dial provides visual feedback relating the indicators to features (e.g., time-of-day features) provided by and/or within the analog dial. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
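Placing sunrise and sunset indicators within a 24-hour analog dial reduces to mapping a time of day to an angle. A minimal sketch, assuming the dial orientation described later in this document (midnight at the bottom, noon at the top) and using illustrative sunrise/sunset times:

```swift
import Foundation

/// Maps a time of day to an angle on a 24-hour dial, measured in degrees
/// clockwise from the bottom of the dial (midnight), so noon lands at 180°.
func dialAngle(hour: Int, minute: Int) -> Double {
    let dayFraction = (Double(hour) + Double(minute) / 60) / 24
    return dayFraction * 360
}

// Hypothetical markers: sunrise at 6:42 and sunset at 19:05.
let sunriseAngle = dialAngle(hour: 6, minute: 42)   // ≈ 100.5°
let sunsetAngle  = dialAngle(hour: 19, minute: 5)   // ≈ 286.25°
```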
In some implementations, the clock face includes a map (e.g., 824a as shown in fig. 8A) (e.g., a visual representation of a globe). In some embodiments, the indicator of the sunrise time includes a first light-dark boundary displayed on the map (e.g., 822 as shown in fig. 8A), and the indicator of the sunset time includes a second light-dark boundary displayed on the map. In some implementations, a light-dark boundary includes a visual animation depicting a change in a lighting effect applied to the map. In some embodiments, a shadow effect is selectively displayed over the portion of the map that is on one side of the light-dark boundary. In some embodiments, the first and second light-dark boundaries are a single light-dark boundary (e.g., the first and second light-dark boundaries are portions of the same line). In some implementations, the single light-dark boundary curves across the map to indicate both the sunrise time and the sunset time. Displaying indicators of the sunrise time and the sunset time via light-dark boundaries enables the user to graphically view the sunrise time and the sunset time quickly and efficiently and provides visual feedback regarding the relationship between the sunset and/or sunrise times and the map included in the clock face (e.g., the width between the indicator lines may provide visual feedback regarding the length of the day). Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the current time zone (e.g., 814a as shown in fig. 8A) associated with the computer system (e.g., 800) is selected based on an automatically determined location of the computer system (e.g., based on GPS, GLONASS, Wi-Fi/Bluetooth triangulation, cell tower metadata, etc.). In some embodiments, the computer system automatically selects the current time zone associated with the computer system based on the automatically determined location of the computer system. Selecting the current time zone associated with the computer system based on global positioning data enables the computer system to display information related to the time zone in which the computer system is located without requiring additional user input to select the current time zone associated with the computer system. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the one or more different cities include a representative city selected based on an automatically determined location of the computer system (e.g., 820a1 as shown in fig. 8A). In some embodiments, the computer system displays a visual indicator (e.g., 815 as shown in fig. 8A) corresponding to the representative city (e.g., a label (e.g., a triangular label), a graphic element, an arrow, or a text element (e.g., a text element displayed in a bold font)). In some implementations, the representative city is a city selected based on a determination that it is located in the current time zone (e.g., 814a) associated with the computer system. In some embodiments, the visual indicator corresponding to the representative city indicates that the representative city represents the current time zone associated with the computer system. Displaying a visual indicator corresponding to a representative city selected based on the automatically determined location of the computer system provides visual feedback that the current time indication corresponds to the representative city (rather than to a different city displayed around the clock face). Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
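A representative city can be chosen by matching the system's current time zone against a table of candidate cities. The sketch below is one plausible reading: the city list and the offset-matching rule are assumptions, and a production implementation would use whatever city database the system ships with.

```swift
import Foundation

/// A candidate city that can represent a time zone on the dial.
struct City {
    let name: String
    let timeZoneIdentifier: String
}

// Illustrative table; not taken from the patent.
let cities = [
    City(name: "San Francisco", timeZoneIdentifier: "America/Los_Angeles"),
    City(name: "New York", timeZoneIdentifier: "America/New_York"),
    City(name: "London", timeZoneIdentifier: "Europe/London"),
    City(name: "Moscow", timeZoneIdentifier: "Europe/Moscow"),
]

/// Picks the first listed city whose zone currently has the same UTC offset
/// as the system's zone; it may differ from the city the wearer is in.
func representativeCity(for zone: TimeZone = .current,
                        at date: Date = Date()) -> City? {
    cities.first { city in
        guard let cityZone = TimeZone(identifier: city.timeZoneIdentifier) else {
            return false
        }
        return cityZone.secondsFromGMT(for: date) == zone.secondsFromGMT(for: date)
    }
}
```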
In some embodiments, the representative city (e.g., 820a1 as shown in fig. 8A) is different from the city in which the computer system is located (e.g., San Francisco, as shown at 814a in fig. 8A). Displaying a user interface having a representative city that is different from the city in which the computer system is located provides visual feedback that the time being displayed corresponds to the current time zone associated with the computer system, in which the representative city is located, and indicates that the time being displayed is not specific to the city in which the computer system is located and is accurate for cities other than the city in which the computer system is located. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, rotating the locations on the clock face at which the names of the one or more different cities are displayed corresponds to updating the current time zone associated with the computer system (e.g., 800) (e.g., as shown in figs. 8A-8C). In some implementations, updating the locations of the names of the one or more different cities includes rotating the locations of the names around the clock face (e.g., by an angle) about a rotation axis. Shifting the current time zone while rotating a dial containing the names of the one or more cities provides visual feedback on the clock face showing the locations of the cities in relation to the current time zone associated with the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some implementations, a computer system (e.g., 800) is in communication with a rotatable input mechanism (e.g., 804) (e.g., a rotatable input device). In some embodiments, the computer system detects, via the rotatable input mechanism, a rotation (e.g., 860a) of the rotatable input mechanism about a first axis of rotation (e.g., as shown in fig. 8D). In some embodiments, in response to detecting the rotation of the rotatable input mechanism about the first axis of rotation, the computer system rotates the locations on the clock face at which the names of the one or more different cities are displayed about a second axis of rotation that is different from (e.g., perpendicular to) the first axis of rotation, wherein rotating those locations on the clock face comprises: in accordance with a determination that the rotation of the rotatable input mechanism about the first axis of rotation is in a first direction (e.g., clockwise), rotating the one or more different cities about the second axis of rotation in a third direction (e.g., clockwise); and in accordance with a determination that the rotation of the rotatable input mechanism about the first axis of rotation is in a second direction (e.g., counter-clockwise) different from the first direction, rotating the one or more different cities about the second axis of rotation in a fourth direction (e.g., counter-clockwise) different from the third direction. Rotating the locations on the clock face at which the names of the one or more different cities are displayed in two different directions, based on a determination as to whether the detected rotational input is in the first direction or the second direction, provides visual feedback that the direction of the rotational input can be used to control the direction of rotation, reduces the number of inputs required to perform the operation, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the system. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user rotate the cities in the desired direction quickly and efficiently), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
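The direction mapping in the preceding paragraph amounts to carrying the sign of the crown rotation through to the ring rotation. A minimal sketch, assuming a signed crown delta (positive taken to mean clockwise) and an illustrative gain of 15° of ring rotation per unit of crown travel:

```swift
import Foundation

enum SpinDirection { case clockwise, counterClockwise }

/// Maps the sign of a crown rotation delta to the ring's spin direction,
/// pairing clockwise with clockwise as in the example directions above.
func ringDirection(forCrownDelta delta: Double) -> SpinDirection {
    delta >= 0 ? .clockwise : .counterClockwise
}

/// Accumulates crown deltas into a ring angle (degrees). The 15°-per-unit
/// gain is an illustrative assumption, not a value from the text.
func updatedRingAngle(current: Double, crownDelta: Double) -> Double {
    current + crownDelta * 15
}
```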
In some embodiments, the background of the clock face is a world map (e.g., 824a as shown in fig. 8A) (e.g., an animation representing a globe). Displaying the clock face, wherein the background of the clock face is a world map, provides visual feedback of the features of the clock face in relation to the world map. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some implementations, a computer system (e.g., 800) receives, via one or more input devices, a user input (e.g., 850a) (e.g., a tap input, swipe, press input, and/or mouse click) on the world map (e.g., 824a as shown in fig. 8E). In some embodiments, in response to receiving the user input, the computer system centers, on the world map, a city that represents the current time zone associated with the computer system (e.g., as shown in fig. 8F). In some embodiments, centering the city representing the current time zone associated with the computer system on the world map includes magnifying the city representing the current time zone associated with the computer system (e.g., as shown in map 824b in fig. 8F). Centering the city representing the current time zone associated with the computer system in response to a user input reduces the number of inputs required to center the relevant city (e.g., by helping the user center the relevant city without, for example, providing multiple pinch and/or swipe inputs), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, as part of centering the city representing the current time zone associated with the computer system (e.g., 800) on the world map (e.g., 824a), the computer system transitions from displaying that city off-center to displaying it (e.g., in map 824b) at a point in the center of the clock face (e.g., 810h, as shown in fig. 8F), around which point a plurality of clock hands rotate, including a first clock hand (e.g., an hour, minute, or second hand) and a second clock hand (e.g., 826) (e.g., another one of the hour, minute, or second hands). Displaying the city associated with the current time zone associated with the computer system in the center of the world map, wherein the city representing the current time zone associated with the computer system is displayed behind the plurality of clock hands, provides visual feedback that the time currently indicated by the clock hands corresponds to the city displayed behind them. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the clock face (e.g., 810k) includes a plurality of clock hands (e.g., 826 as shown in fig. 8A), including a first clock hand (e.g., an hour, minute, or second hand) and a second clock hand (e.g., 826) (e.g., another one of the hour, minute, or second hands). In some embodiments, a computer system (e.g., 800) updates the positions of the clock hands to indicate the current time in the current time zone associated with the computer system. In some embodiments, the computer system displays the names of the one or more different cities behind (e.g., partially covered by) the plurality of clock hands (e.g., 826 as shown in fig. 8M). Displaying the names of the one or more different cities behind the plurality of clock hands enables the names to be displayed without obscuring the clock hands, thereby providing improved visual feedback by allowing the names of the cities to be read while maintaining an unobstructed view of the clock hands that indicate the current time. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the clock face (e.g., 810a) comprises a second analog dial (e.g., 820b as shown in fig. 8A) (e.g., a circular dial with hour marks evenly angularly spaced around the circumference of the circle, representing 24 hours (e.g., instead of 12 hours)) that is updated (e.g., automatically) based on the current time in the current time zone associated with the computer system (e.g., 800). In some embodiments, the second analog dial rotates over time (e.g., automatically) to reflect the passage of time, wherein the change in the rotation angle of the dial corresponds to the change in time. Displaying the clock face including a second analog dial that is gradually updated based on the current time provides visual feedback about the current time and enables the user to quickly and efficiently determine the current time while looking at the clock face. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
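A dial whose rotation angle tracks the passage of time can be driven directly from the fraction of the day that has elapsed. A minimal sketch, assuming one full revolution per 24 hours:

```swift
import Foundation

/// Rotation (in degrees) for a 24-hour dial at a given moment: the change in
/// rotation angle corresponds to the change in time, one full turn per day.
func dialRotation(at date: Date = Date(),
                  calendar: Calendar = .current) -> Double {
    let startOfDay = calendar.startOfDay(for: date)
    let secondsIntoDay = date.timeIntervalSince(startOfDay)
    return (secondsIntoDay / 86_400) * 360
}
```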
In some implementations, an indication (e.g., 815 as shown in fig. 8A) of the current time in the current time zone associated with the computer system (e.g., 800) is displayed at the bottom of the clock face. Displaying the city representing the current time zone associated with the computer system in a fixed portion of the clock face (such as the bottom of the user interface) provides visual feedback that the city in that location corresponds to the current time zone associated with the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some implementations, the clock face includes an embedded time indication (e.g., 826 as shown in figs. 8A and 8G) at a first location on the clock face (e.g., an analog clock face having an hour hand and optionally a minute hand and/or a second hand that indicates the time). In some embodiments, the clock face includes a digital indication of time (e.g., 858 as shown in fig. 8L), wherein the digital indication of time includes tick marks representing seconds around the digital time indication. In some embodiments, the clock face is circular. In some implementations, the embedded time indication includes a representation of the current time in the current time zone associated with the computer system in a 12-hour format (e.g., instead of a 24-hour format). Displaying the embedded time indication including a representation of the current time in a 12-hour format provides visual feedback regarding the current time and enables the clock face to quickly and easily communicate to a viewer the current time in the current time zone associated with the computer system in a second manner in addition to the current time indication. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the computer system displays the embedded time indication (e.g., 826 as shown in fig. 8G) via a display generation component (e.g., 802). In some implementations, the embedded time indication (e.g., 826 as shown in fig. 8G) is presented according to a first format (e.g., the embedded time indication includes an analog time indication). In some embodiments, the computer system receives, via one or more input devices, a sequence of one or more user inputs (e.g., touch inputs, rotation inputs, and/or press inputs) corresponding to a request to edit the embedded time indication (e.g., as shown in figs. 8G-8L). In some embodiments, in response to receiving the sequence of one or more user inputs corresponding to the request to edit the embedded time indication, the computer system displays, via the display generation component, the embedded time indication (e.g., 858 as shown in fig. 8L) presented according to a second format (e.g., a digital time indication) that is different from the first format. Editing the embedded time indication to be displayed according to a second format different from the first format provides improved visual feedback by allowing improved readability and/or matching user preferences. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
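The editable embedded time indication can be modeled as a small piece of state that an edit request toggles between presentation formats. The following sketch is illustrative only; the type names and the two-format toggle are assumptions rather than API from the disclosure.

```swift
import Foundation

/// Presentation formats for the embedded time indication.
enum EmbeddedTimeFormat {
    case analog   // hour hand, optionally minute and/or second hands
    case digital  // numeric time, optionally ringed by second tick marks
}

struct EmbeddedTimeIndication {
    var format: EmbeddedTimeFormat = .analog

    /// Applying an edit request switches the presentation format while the
    /// underlying time being shown is unchanged.
    mutating func applyEditRequest() {
        format = (format == .analog) ? .digital : .analog
    }
}
```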
In some embodiments, the clock face includes a second analog dial (e.g., 820b) that includes a plurality of hour marks representing a twenty-four hour period (e.g., a circular dial with hour marks evenly angularly spaced around the circumference of the circle, representing 24 hours (instead of 12 hours)). In some embodiments, the second dial (e.g., 820b as shown in fig. 8A) is entirely contained within the perimeter of the first dial (e.g., 820a as shown in fig. 8A). In some embodiments, the second dial is oriented such that zero (e.g., midnight) is at the bottom of the dial and twelve (e.g., noon) is at the top of the dial. In some embodiments, the second dial is contained within a first region of the clock face that represents the first dial. In some embodiments, in accordance with a determination that the current date does not fall within a predetermined time range (e.g., a time range during which daylight saving time is observed in at least one of the time zones represented on the clock face), the plurality of hour marks representing the twenty-four hour period is a first plurality of hour marks (e.g., hour numbers ranging from 0 to 24). In some embodiments, in accordance with a determination that the current date falls within the predetermined time range (e.g., a time range during which daylight saving time is observed in at least one of the time zones represented on the clock face), the plurality of hour marks representing the twenty-four hour period is a second plurality of hour marks different from the first plurality of hour marks. In some embodiments, the first plurality of hour marks comprises hour marks ranging from 1 to 24 (e.g., 1, 2, 3, 4, …, 24) to represent the 24 hours of the day, while the second plurality of hour marks comprises at least one mark different from the first plurality of hour marks to account for the observance of daylight saving time in different time zones and/or cities or countries (e.g., 1, 2, 4, …). In some embodiments, the second analog dial includes the first plurality of hour marks when daylight saving time is not in effect, and includes the second plurality of hour marks when daylight saving time is in effect. Conditionally displaying a plurality of hour marks representing the twenty-four hour period based on whether the current date falls within the predetermined time range provides the user with the appropriate plurality of hour marks, either the first plurality or the second plurality, when the relevant condition is met without requiring further input from the user. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the second plurality of hour marks includes at least one repeated hour mark (e.g., 820b as shown in fig. 8M) (e.g., at least one hour mark is included more than once in the second plurality of hour marks). In some embodiments, displaying the hour marks representing the twenty-four hour period during daylight saving time includes displaying at least one of the hour marks in more than one location on the clock face. Including at least one repeated hour mark in the second plurality of hour marks provides, when the relevant condition is met and without requiring the user to provide further input, visual feedback that the current time in at least two of the time zones represented by the second plurality of hour marks is the same. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first plurality of hour marks includes at least a first hour mark that is not included in the second plurality of hour marks (e.g., as shown by 820b of fig. 8B compared to 820b of fig. 8M). In some embodiments, displaying the hour marks representing the twenty-four hour period during daylight saving time includes omitting at least one hour mark corresponding to a particular time of day (e.g., a particular hour) (e.g., as shown by the lack of a "6" hour number in 820b of fig. 8M). Including in the first plurality of hour marks at least a first hour mark that is not included in the second plurality of hour marks provides, when the relevant condition is met and without requiring the user to provide further input, visual feedback that the relative differences between the time zones represented by the hour marks are different when the second plurality of hour marks is displayed than when the first plurality of hour marks is displayed. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
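The effect of switching between the two pluralities of hour marks can be illustrated by relabeling a 24-mark ring when part of the represented world shifts by an hour: one label then appears twice and another disappears, the kind of repetition and omission described above. In the sketch below, the slot index at which the shift begins and the direction of the shift are illustrative assumptions.

```swift
import Foundation

/// Returns the 24 hour-mark labels around the dial. When `dstStartSlot` is
/// non-nil, slots from that index onward are shifted back by one hour, which
/// repeats one label and omits another.
func hourMarks(dstStartSlot: Int?) -> [Int] {
    let standard = Array(1...24)
    guard let start = dstStartSlot else { return standard }
    return standard.enumerated().map { index, hour in
        index >= start ? hour - 1 : hour
    }
}

let firstPlurality = hourMarks(dstStartSlot: nil)  // [1, 2, 3, ..., 24]
let secondPlurality = hourMarks(dstStartSlot: 6)   // [1...6, 6, 7, ..., 23]: "6" repeats, "24" is omitted
```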
It is noted that the details of the process described above with respect to method 900 (e.g., fig. 9) also apply in a similar manner to the other methods described herein. For example, method 900 optionally includes one or more of the features of the various methods described herein with reference to method 700, method 1100, method 1300, and method 1500. For example, method 900 optionally includes one or more of the features of the various methods described above with reference to method 700. For example, a device may use, as a watch user interface, either a user interface including a time indication based on geographic data as described with reference to figs. 8A-8M or a watch user interface as described with reference to figs. 6A-6U. As another example, a watch user interface as described with reference to figs. 8A-8M may include hour numbers updated based on the current time as described with reference to figs. 10A-10W and method 1100. As another example, method 900 optionally includes one or more of the features of the various methods described below with reference to method 1300. For example, the watch user interfaces of figs. 8A-8M may be created or edited via the process for updating and selecting watch user interfaces described with reference to figs. 12A-12W. As another example, method 900 optionally includes one or more of the features of the various methods described below with reference to method 1500. For example, a layout editing user interface including a preview user interface corresponding to the watch user interface of figs. 8A-8M may be displayed on a computer system in communication with computer system 800. For brevity, these details are not repeated below.
Figs. 10A-10W illustrate exemplary user interfaces for enabling and displaying a user interface including hour numbers displayed in strokes of different widths. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 11.
Fig. 10A shows a computer system 1000 displaying a watch user interface 1020a via a display 1002. Computer system 1000 includes a rotatable and depressible input mechanism 1004. In some embodiments, computer system 1000 optionally includes one or more features of device 100, device 300, or device 500. In some embodiments, computer system 1000 is a tablet, phone, laptop, desktop, camera, or the like. In some implementations, the inputs described below may optionally be replaced with alternative inputs, such as press inputs and/or rotational inputs received via the rotatable and depressible input mechanism 1004.
In fig. 10A, watch user interface 1020a includes a plurality of hour numbers displayed in strokes of different widths. At fig. 10A, watch user interface 1020a includes an hour number 1006a corresponding to a "1" hour number, which corresponds to the hour portion of a current time of "1" (e.g., 1:00 am, 1:00 pm, etc.). Watch user interface 1020a also includes an hour number 1006b corresponding to a "2" hour number, an hour number 1006c corresponding to a "3" hour number, an hour number 1006d corresponding to a "4" hour number, an hour number 1006e corresponding to a "5" hour number, an hour number 1006f corresponding to a "6" hour number, an hour number 1006g corresponding to a "7" hour number, an hour number 1006h corresponding to an "8" hour number, an hour number 1006i corresponding to a "9" hour number, an hour number 1006j corresponding to a "10" hour number, an hour number 1006k corresponding to an "11" hour number, and an hour number 1006l corresponding to a "12" hour number. In some implementations, watch user interface 1020a includes 24 digits corresponding to the 24 hours of a day, rather than 12 hour numbers (e.g., in a military time format). At watch user interface 1020a, the hour numbers (e.g., hour numbers 1006a-1006l) are displayed at different stroke widths. In some embodiments, displaying a number at a given stroke width includes displaying an hour number wherein the lines and/or text making up the hour number are drawn and/or displayed at a particular thickness. Watch user interface 1020a includes hour numbers 1006a-1006l displayed in a generally square shape in a numerical order that may be traversed in a clockwise or counter-clockwise direction. In some embodiments, the hour numbers 1006a-1006l may be displayed in a rounded shape (e.g., circular, oval, etc.).
Watch user interface 1020a displays the hour numbers (e.g., hour numbers 1006a-1006l) in width strokes based on the current time. In watch user interface 1020a, the current time is 10:09, as indicated by the positions of the analog clock hands included in time indication 1008. In some implementations, the width at which each hour number included in watch user interface 1020a is displayed is based on the hour portion of the current time. In watch user interface 1020a, the hour number corresponding to the hour portion of the current time is displayed in the maximum width stroke. In watch user interface 1020a, the hour number displayed adjacent, in the counter-clockwise direction, to the hour number corresponding to the hour portion of the current time is displayed in the second largest width stroke. This pattern continues around the dial in the counter-clockwise direction until the hour number displayed adjacent, in the clockwise direction, to the hour number corresponding to the hour portion of the current time is displayed in the minimum width stroke. For example, at watch user interface 1020a, at 10:09, hour number 1006j ("10") is displayed with the largest width stroke, hour number 1006i ("9") is displayed with the second largest width stroke, and hour number 1006k ("11") is displayed with the smallest width stroke. In some implementations, displaying an hour number in a given width stroke corresponds to displaying the hour number at a certain pixel width. For example, in watch user interface 1020a, the lines making up hour number 1006j ("10") are 20 pixels wide, the lines making up hour number 1006i ("9") are 18 pixels wide, and the lines making up hour number 1006k ("11") are 2 pixels wide. Thus, the width strokes in which the other hour numbers (e.g., hour numbers 1006a-1006h and hour number 1006l) are displayed gradually decrease as one traverses around watch user interface 1020a in a counter-clockwise direction, beginning with hour number 1006j ("10"), corresponding to the hour portion of the current time, and ending with hour number 1006k ("11"), which numerically follows the hour number corresponding to the hour portion of the current time.
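The tapering rule just described (widest at the current hour, thinnest one step clockwise from it) can be written as a pure function of the counter-clockwise distance from the current hour. A minimal sketch, with default bounds of 20 and 2 loosely echoing the 20-pixel/2-pixel example above; the linear taper between them is an assumption:

```swift
import Foundation

/// Stroke width for an hour number, widest at the hour matching the hour
/// portion of the current time and tapering counter-clockwise around the
/// dial, so the number one step clockwise from the current hour is thinnest.
func strokeWidth(forHourNumber hour: Int, currentHour: Int,
                 maxWidth: Double = 20, minWidth: Double = 2) -> Double {
    // Counter-clockwise distance, in hour positions, from the current hour.
    let steps = ((currentHour - hour) % 12 + 12) % 12   // 0...11
    let taperPerStep = (maxWidth - minWidth) / 11
    return maxWidth - Double(steps) * taperPerStep
}

// At 10:09: "10" is widest (20), "9" is next (≈18.4), "11" is thinnest (2).
let widths = (1...12).map { strokeWidth(forHourNumber: $0, currentHour: 10) }
```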
The watch user interface 1020a also includes a time indication 1008 that includes analog clock hands, where the positions of the analog clock hands represent the current time (e.g., hours, minutes, and/or seconds). At watch user interface 1020a, time indication 1008 indicates that the current time is 10:09 (e.g., morning or afternoon).
Watch user interface 1020a also includes a date indication 1010 including a visual and/or textual indication of the current date (e.g., the current day of the week, the current day of the month, the current month, and/or the current year). Watch user interface 1020a also includes a complex function block 1012a that includes information from an application available on (e.g., installed on) computer system 1000. In some implementations, the complex function block 1012a is updated as time passes to display updated information (e.g., from the application associated with complex function block 1012a). In some implementations, complex function block 1012a can be selected to cause computer system 1000 to launch an application corresponding to complex function block 1012a.
At fig. 10B, computer system 1000 displays a watch user interface 1020b, which shows an updated version of watch user interface 1020a at a different time. The watch user interface 1020b includes time indication 1008, wherein the positions of the analog clock hands in watch user interface 1020b indicate that the current time in watch user interface 1020b is 11:09.
In watch user interface 1020b, based on the updated time, computer system 1000 updates the width strokes in which hour numbers 1006a-1006l are displayed such that hour number 1006k ("11"), corresponding to the hour portion of the current time, is displayed with the maximum width stroke. In watch user interface 1020b, the hour number displayed adjacent, in the counter-clockwise direction, to the hour number corresponding to the hour portion of the current time is displayed in the second largest width stroke. This pattern continues around the dial in the counter-clockwise direction until the hour number displayed adjacent, in the clockwise direction, to the hour number corresponding to the hour portion of the current time is displayed in the minimum width stroke. For example, at watch user interface 1020b, at 11:09, hour number 1006k ("11") is displayed with the largest width stroke, hour number 1006j ("10") is displayed with the second largest width stroke, and hour number 1006l ("12") is displayed with the smallest width stroke. Thus, the width strokes in which the hour numbers are displayed gradually decrease as one traverses around watch user interface 1020b in a counter-clockwise direction, beginning with hour number 1006k ("11"), corresponding to the hour portion of the current time, and ending with hour number 1006l ("12"), which numerically follows the hour number corresponding to the hour portion of the current time.
Fig. 10C illustrates computer system 1000 receiving an input 1050a (e.g., a tap input) on an hour number. In fig. 10C, the computer system is displaying a watch user interface 1020c that substantially matches watch user interface 1020b, and computer system 1000 receives input 1050a on hour number 1006f ("6").
At fig. 10D, in response to receiving input 1050a, computer system 1000 displays watch user interface 1020d, which is similar to watch user interface 1020c but includes hour numbers displayed in strokes of different widths in response to input 1050a. Figs. 10D-10F illustrate computer system 1000 displaying a watch user interface including hour numbers 1006a-1006l, wherein the width strokes of one or more hour numbers are updated in response to input 1050a. In some implementations, computer system 1000 displays the hour numbers in a ripple animation in response to an input received on an hour number (e.g., 1006f). In some embodiments, displaying the ripple animation includes temporarily displaying the hour number on which the input was received in an increased width stroke, and then decreasing the width stroke in which the selected hour number is displayed while displaying one or more hour numbers adjacent to the selected hour number in increased width strokes. In some implementations, the ripple animation continues through all of the hour numbers included in a given watch user interface. In some implementations, the animation ends after the hour number displayed on display 1002 opposite the selected hour number (e.g., across the dial from it) has been temporarily displayed in an increased width stroke. For example, in some implementations, an input received on hour number 1006f ("6") causes computer system 1000 to display a ripple animation in which the hour numbers included in the watch user interface are temporarily displayed in increased width strokes in the following order: first, hour number 1006f ("6"), then hour numbers 1006g ("7") and 1006e ("5"), then hour numbers 1006h ("8") and 1006d ("4"), then hour numbers 1006i ("9") and 1006c ("3"), then hour numbers 1006j ("10") and 1006b ("2"), then hour numbers 1006k ("11") and 1006a ("1"), and finally hour number 1006l ("12"). As described above, fig. 10D illustrates computer system 1000 displaying watch user interface 1020d, in which hour number 1006f ("6") is temporarily displayed in an increased width stroke in response to input 1050a.
At fig. 10E, after displaying hour number 1006f ("6") in an increased width stroke in response to input 1050a, computer system 1000 displays a watch user interface 1020e that includes hour numbers 1006g ("7") and 1006e ("5") displayed in increased width strokes. In watch user interface 1020e, hour number 1006f ("6") is displayed in the width stroke in which it was displayed before computer system 1000 received input 1050a (e.g., in watch user interface 1020b). In some embodiments, computer system 1000 displays an animation in which the affected hour numbers gradually grow and/or shrink. For example, in some implementations, the transition from displaying watch user interface 1020d to displaying watch user interface 1020e includes displaying an animation of hour number 1006f ("6") shrinking while displaying an animation of hour numbers 1006g ("7") and 1006e ("5") growing.
At fig. 10F, after displaying watch user interface 1020e and in response to receiving input 1050a, computer system 1000 displays watch user interface 1020f, which is similar to watch user interface 1020d but includes hour numbers displayed in different width strokes in response to input 1050a. At fig. 10F, computer system 1000 displays watch user interface 1020f, which includes hour numbers 1006h ("8") and 1006d ("4") displayed in increased width strokes. In some embodiments, hour numbers 1006f ("6"), 1006g ("7"), and 1006e ("5") are displayed in the width strokes in which they were displayed before computer system 1000 received input 1050a. In watch user interface 1020f, the width stroke in which hour number 1006f ("6") is displayed returns to the width stroke in which it was displayed in watch user interface 1020c, hour numbers 1006g ("7") and 1006e ("5") are displayed with slightly increased width strokes relative to the width strokes in which they were displayed in watch user interface 1020c, and hour numbers 1006h ("8") and 1006d ("4") are displayed with width strokes increased by a greater magnitude.
In some embodiments, computer system 1000 displays an animation showing the affected hour numbers growing and/or shrinking. For example, in some implementations, the transition from displaying watch user interface 1020e to displaying watch user interface 1020f includes displaying animations of hour numbers 1006g ("7") and 1006e ("5") shrinking while displaying animations of hour numbers 1006h ("8") and 1006d ("4") growing.
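The ripple ordering walked through in figs. 10D-10F expands outward from the tapped number in symmetric pairs and ends at the number opposite it. A minimal sketch of that ordering, using 0-based positions around a 12-number dial:

```swift
import Foundation

/// One array of dial positions per animation step: the tapped number first,
/// then its clockwise/counter-clockwise neighbor pairs, ending opposite it.
func rippleSteps(fromTapped tapped: Int, count: Int = 12) -> [[Int]] {
    let half = count / 2
    return (0...half).map { k -> [Int] in
        let cw = (tapped + k) % count
        let ccw = ((tapped - k) % count + count) % count
        return cw == ccw ? [cw] : [cw, ccw]
    }
}

// Tapping "6" (position 5): [5], [6, 4], [7, 3], [8, 2], [9, 1], [10, 0], [11],
// i.e., 6, then 7 and 5, and so on, ending with 12, matching the order above.
let steps = rippleSteps(fromTapped: 5)
```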
Figs. 10G-10J illustrate computer system 1000 displaying a watch user interface including hour numbers 1006a-1006l, wherein the width strokes in which hour numbers 1006a-1006l are displayed are updated based on a rotational input such that the width strokes reflect a snake animation. At fig. 10G, computer system 1000 displays a watch user interface 1020g that includes hour numbers 1006a-1006l. Watch user interface 1020g includes time indication 1008, which indicates that the current time at watch user interface 1020g is 12:09. Based on the current time, computer system 1000 displays watch user interface 1020g with hour number 1006l ("12") displayed in the maximum width stroke, hour number 1006a ("1") displayed in the minimum width stroke, hour number 1006k ("11") displayed in the second largest width stroke, and so on, as described above with respect to figs. 10A-10B. At fig. 10G, computer system 1000 receives rotational input 1060a via rotatable and depressible input mechanism 1004.
At fig. 10H, in response to receiving rotational input 1060a, computer system 1000 displays a watch user interface 1020h that updates the width strokes in which hour numbers 1006a-1006l are displayed according to the snake animation. Displaying watch user interface 1020h with the snake animation includes displaying hour numbers 1006a-1006l with updated width strokes in response to rotation of the rotatable and depressible input mechanism 1004 about its axis of rotation. For example, watch user interface 1020h includes hour numbers 1006a-1006l wherein, although the current time is 12:09 as indicated by time indication 1008, hour number 1006a ("1") is displayed with the largest width stroke, hour number 1006l ("12") is displayed with the second largest width stroke, and so on, traversing the circle of hour numbers in a counter-clockwise direction, until hour number 1006b ("2") is displayed with the smallest width stroke.
At fig. 10I, after displaying watch user interface 1020h, computer system 1000 displays watch user interface 1020i. At watch user interface 1020i, some or all of hour numbers 1006a-1006l are displayed with stroke widths shifted by one hour number relative to watch user interface 1020h. For example, watch user interface 1020i includes hour numbers 1006a-1006l wherein, although the current time is 12:09 as indicated by time indication 1008, hour number 1006b ("2") is displayed with the largest width stroke, hour number 1006a ("1") is displayed with the second largest width stroke, and so on, traversing the circle of hour numbers in a counter-clockwise direction, until hour number 1006c ("3") is displayed with the smallest width stroke.
At FIG. 10J, after displaying watch user interface 1020i, computer system 1000 displays watch user interface 1020j. In watch user interface 1020j, hour numbers 1006a-1006l are displayed with stroke widths that are all shifted by one hour number relative to watch user interface 1020i. For example, watch user interface 1020j includes hour numbers 1006a-1006l, where, although the current time is 12:09 as indicated by time indication 1008, hour number 1006c ("3") is displayed with the largest width stroke, hour number 1006b ("2") is displayed with the second-largest width stroke, and so on traversing the circle of hour numbers in a counter-clockwise direction, until hour number 1006d ("4") is displayed with the smallest width stroke.
In some implementations, the duration of the snake animation is based at least in part on the magnitude of rotational input 1060a (e.g., the number of degrees through which the rotatable and depressible input mechanism 1004 is rotated). In some embodiments, the snake animation continues in the manner described above until each hour number (e.g., hour numbers 1006a-1006l) has been displayed with the maximum width stroke.
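One hypothetical way to implement the snake animation of FIGS. 10G-10J in Swift is sketched below: the position of the largest width stroke advances one hour number per step, widths taper counter-clockwise from that position, and the number of steps scales with the crown rotation. All names and the 30-degrees-per-step value are illustrative assumptions.

```swift
import Foundation
import CoreGraphics

// A minimal sketch of the snake animation state.
struct SnakeAnimationModel {
    let strokeWidths: [CGFloat]   // 12 widths, largest first, tapering down

    /// Width for the hour number at `index` (0 = "1" ... 11 = "12") when the
    /// largest stroke currently sits at `headIndex`.
    func width(at index: Int, headIndex: Int) -> CGFloat {
        let offset = (headIndex - index + 12) % 12   // counter-clockwise offset
        return strokeWidths[offset]
    }

    /// Number of steps for a crown rotation of `degrees`, so that a larger
    /// rotation runs longer, and the snake always completes at least one lap
    /// so every number has held the maximum width stroke.
    func stepCount(forRotationDegrees degrees: Double) -> Int {
        let steps = Int((abs(degrees) / 30).rounded(.up))
        return max(steps, 12)
    }
}
```

With headIndex advancing from 0 ("1") to 1 ("2") to 2 ("3"), the model reproduces the progression of watch user interfaces 1020h, 1020i, and 1020j.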
FIG. 10K shows computer system 1000 displaying a watch user interface 1020k when computer system 1000 is in a lower power state. In some embodiments, in accordance with a determination that computer system 1000 has not received input within a threshold duration, computer system 1000 enters the lower power state. In some implementations, in accordance with a determination that computer system 1000 received a gesture that includes covering (e.g., with the palm of a user's hand) at least a threshold portion of display 1002, computer system 1000 enters the lower power state. In some implementations, in response to receiving an input (e.g., a tap input, a rotational input, a press input, etc.) while computer system 1000 is in the lower power state, the computer system exits the lower power state and returns to the higher power state. In some implementations, entering the lower power state includes displaying the watch user interface at a lower brightness. In some implementations, entering the lower power state includes limiting and/or changing animations and/or visual effects displayed in the watch user interface. In some implementations, entering the lower power state includes displaying the watch user interface without displaying elements of the watch user interface that would be displayed in the higher power state (e.g., updating time indication 1008 to be displayed without a seconds hand). In some embodiments, entering the lower power state includes forgoing updating elements included in the watch user interface, such as forgoing updating complex function block 1012a when computer system 1000 is in the lower power state, or updating complex function block 1012a less frequently when computer system 1000 is in the lower power state.
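The lower power state behavior described above can be summarized as a small set of rendering settings that change with the power state. The Swift sketch below is illustrative only (PowerState, WatchFaceRenderSettings, and the specific values are assumptions), but it captures the dimming, outline rendering, seconds hand removal, and forgone or less frequent complex function block updates.

```swift
import Foundation

enum PowerState { case higher, lower }

// A minimal sketch of state-dependent rendering settings.
struct WatchFaceRenderSettings {
    var brightness: Double                        // 0.0 ... 1.0
    var drawsDigitsAsOutlines: Bool               // outlines instead of solid fills
    var showsSecondsHand: Bool
    var complicationUpdateInterval: TimeInterval? // nil = forgo updates

    static func settings(for state: PowerState) -> WatchFaceRenderSettings {
        switch state {
        case .higher:
            return .init(brightness: 1.0,
                         drawsDigitsAsOutlines: false,
                         showsSecondsHand: true,
                         complicationUpdateInterval: 1)    // frequent updates
        case .lower:
            return .init(brightness: 0.4,
                         drawsDigitsAsOutlines: true,
                         showsSecondsHand: false,
                         complicationUpdateInterval: nil)  // forgo updates
        }
    }
}
```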
At FIG. 10K, computer system 1000 displays watch user interface 1020k, which is an updated version of watch user interface 1020g displayed after computer system 1000 has entered the lower power state. Upon entering the lower power state, computer system 1000 displays watch user interface 1020k, which includes displaying hour numbers 1006a-1006l at updated locations within display 1002. In watch user interface 1020k, the distance between hour numbers 1006a-1006l and the edge of display 1002 is greater than the corresponding distance in watch user interface 1020g. In some implementations, replacing watch user interface 1020g with watch user interface 1020k includes displaying an animation of hour numbers 1006a-1006l collapsing toward the center of display 1002. In some implementations, transitioning from the higher power state to the lower power state includes reducing the brightness of display 1002.
In watch user interface 1020k, hour numbers 1006a-1006l are displayed as outlines rather than solid lines. In some implementations, displaying the hour numbers as outlines instead of solid lines allows the background color of the watch user interface to show through portions of the hour numbers. In some examples, transitioning from displaying watch user interface 1020g to displaying watch user interface 1020k includes displaying an animation of hour numbers 1006a-1006l being updated to be displayed as outlines. In some implementations, computer system 1000 updates the watch user interface including hour numbers 1006a-1006l based on computer system 1000 entering the lower power state without displaying hour numbers 1006a-1006l as outlines.
FIG. 10L illustrates an embodiment in which, when the computer system is in the lower power state, computer system 1000 displays a watch user interface 1020l that includes hour numbers 1006a-1006l displayed in width strokes of the same width. In some embodiments, computer system 1000 transitions from displaying watch user interface 1020g to displaying watch user interface 1020l in accordance with computer system 1000 entering the lower power state. In some embodiments, transitioning from displaying watch user interface 1020g to displaying watch user interface 1020l includes displaying an animation of hour numbers 1006a-1006l being updated to be displayed in the same width stroke. In watch user interface 1020l, the width stroke in which hour numbers 1006a-1006l are displayed is the same as the minimum width stroke in which one of hour numbers 1006a-1006l is displayed when computer system 1000 is in the higher power state. In some implementations, transitioning from watch user interface 1020g to displaying watch user interface 1020l includes updating hour numbers 1006a-1006l to be displayed with the same width stroke with which hour number 1006a ("1") was displayed in watch user interface 1020g.
In watch user interface 1020l, hour numbers 1006a-1006l are displayed as outlines rather than solid lines. In some examples, transitioning from displaying watch user interface 1020g to displaying watch user interface 1020l includes displaying an animation of hour numbers 1006a-1006l being updated to be displayed as outlines. In some implementations, computer system 1000 displays watch user interface 1020l without displaying hour numbers 1006a-1006l as outlines.
FIG. 10M illustrates computer system 1000 receiving input 1050b (e.g., a long press input) on watch user interface 1020m. At FIG. 10M, computer system 1000 is displaying watch user interface 1020m, which substantially matches watch user interface 1020a.
At FIG. 10N, in response to receiving input 1050b, computer system 1000 displays a selection user interface 1014a. Selection user interface 1014a is a user interface for selecting a watch user interface to be displayed by computer system 1000. Selection user interface 1014a includes representation 1016b1, which is a representation of watch user interface 1020m and includes various features of watch user interface 1020m. In some embodiments, representation 1016b1 is a static representation of watch user interface 1020m and includes an indication of a time other than the current time and/or complex function blocks containing information other than real-time updated data.
Selection user interface 1014a also includes partial views of representation 1016a and representation 1016c, which correspond to watch user interfaces other than watch user interface 1020m. Selection user interface 1014a also includes a share user-interactive graphical user interface object 1018 that, when selected, causes computer system 1000 to display a user interface related to transmitting and/or sharing information about watch user interface 1020m to another device (e.g., another computer system). Selection user interface 1014a also includes an edit user-interactive graphical user interface object 1022 that, when selected, causes computer system 1000 to display an editing user interface for editing aspects of watch user interface 1020m. Selection user interface 1014a also includes a dial indicator 1024 that includes a visual and/or textual indication of the name of the watch user interface currently centered in selection user interface 1014a. At FIG. 10N, dial indicator 1024 indicates that the currently indicated watch user interface 1020m, represented in selection user interface 1014a by representation 1016b1, is titled "dial." At FIG. 10N, computer system 1000 detects an input 1050c (e.g., a tap input) on edit user-interactive graphical user interface object 1022.
At FIG. 10O, in response to detecting input 1050c, computer system 1000 displays editing user interface 1026a. Editing user interface 1026a includes an aspect indicator 1028a that includes a visual and/or textual representation of the aspect of watch user interface 1020m currently selected for editing. At FIG. 10O, aspect indicator 1028a indicates that the aspect of watch user interface 1020m currently selected for editing is "style."
Editing user interface 1026a also includes a selection indicator 1034a that includes a visual and/or textual representation of the currently selected option for the editable aspect of watch user interface 1020m. At FIG. 10O, selection indicator 1034a indicates that the currently selected "style" option of watch user interface 1020m is "rounded."
Editing user interface 1026a also includes a position indicator 1032a. Position indicator 1032a includes a graphical indication of the number of selectable options for the editable aspect of watch user interface 1020m currently being edited, and of the position of the currently selected option in the list of selectable options. For example, position indicator 1032a indicates that the currently selected option "rounded" of the "style" aspect of watch user interface 1020m is located at the top of a list of at least two possible options for the "style" aspect of watch user interface 1020m.
Editing user interface 1026a also includes a representation 1016c1, which indicates that the watch user interface currently being edited (i.e., the watch user interface corresponding to representation 1016c1) is watch user interface 1020m. Representation 1016c1 corresponds to watch user interface 1020m and includes features of watch user interface 1020m, including a representation of complex function block 1012a. At FIG. 10O, computer system 1000 detects an input 1050d (e.g., a swipe input) on editing user interface 1026a.
At FIG. 10P, in response to receiving swipe input 1050d, computer system 1000 displays editing user interface 1026b. Editing user interface 1026b includes an aspect indicator 1028b that includes a visual and/or textual representation of the aspect of watch user interface 1020m currently selected for editing. At FIG. 10P, aspect indicator 1028b indicates that the aspect of watch user interface 1020m currently selected for editing is "dial color."
Editing user interface 1026b also includes a selection indicator 1034b that includes a visual and/or textual representation of the currently selected option for the editable aspect of watch user interface 1020m. At FIG. 10P, selection indicator 1034b indicates that the currently selected "dial color" option of watch user interface 1020m is "on."
Editing user interface 1026b also includes a position indicator 1032b. Position indicator 1032b includes a graphical indication of the number of selectable options for the editable aspect of watch user interface 1020m currently being edited, and of the position of the currently selected option in the list of selectable options. For example, position indicator 1032b indicates that the currently selected option "on" of the "dial color" aspect of watch user interface 1020m is located at the bottom of a list of at least two possible options for the "dial color" aspect of watch user interface 1020m.
Editing user interface 1026b also includes a representation 1016c2, which indicates that the watch user interface currently being edited (i.e., the watch user interface corresponding to representation 1016c2) is watch user interface 1020m. Representation 1016c2 corresponds to watch user interface 1020m and includes features of watch user interface 1020m, including a representation of complex function block 1012a. At FIG. 10P, computer system 1000 detects an input 1050e (e.g., a swipe input) on editing user interface 1026b.
At FIG. 10Q, in response to receiving swipe input 1050e, computer system 1000 displays editing user interface 1026c1. Editing user interface 1026c1 includes an aspect indicator 1028c that includes a visual and/or textual representation of the aspect of watch user interface 1020m currently selected for editing. At FIG. 10Q, aspect indicator 1028c indicates that the aspect of watch user interface 1020m currently selected for editing is "color."
Editing user interface 1026c1 also includes a selection indicator 1034c1 that includes a visual and/or textual representation of the currently selected option for the editable aspect of watch user interface 1020m. At FIG. 10Q, selection indicator 1034c1 indicates that the currently selected "color" option of watch user interface 1020m is "black."
Editing user interface 1026c1 also includes a color option indicator 1036 that includes various selectable color options. Color option indicator 1036 includes a selected color 1036a, which includes a visual indication surrounding the currently selected color that provides a visual and/or graphical indication of the selected color and of its location within color option indicator 1036.
Editing user interface 1026c1 also includes representation 1016c3, which indicates that the watch user interface currently being edited (i.e., the watch user interface corresponding to representation 1016c3) is watch user interface 1020m. Representation 1016c3 corresponds to watch user interface 1020m and includes features of watch user interface 1020m, including a representation of complex function block 1012a. At FIG. 10Q, computer system 1000 detects rotational input 1060c via rotatable and depressible input mechanism 1004.
At FIG. 10R, in response to receiving rotational input 1060c, computer system 1000 displays editing user interface 1026c2. Editing user interface 1026c2 includes aspect indicator 1028c, which includes a visual and/or textual representation of the aspect of watch user interface 1020m currently selected for editing. At FIG. 10R, aspect indicator 1028c indicates that the aspect of watch user interface 1020m currently selected for editing is "color."
Editing user interface 1026c2 also includes a selection indicator 1034c2 that includes a visual and/or textual representation of the currently selected option for the editable aspect of watch user interface 1020m. At FIG. 10R, selection indicator 1034c2 indicates that the currently selected "color" option of watch user interface 1020m is "green."
Editing user interface 1026c2 also includes color option indicator 1036, which includes various selectable color options. Color option indicator 1036 includes a selected color 1036b, which includes a visual indication surrounding the currently selected color that provides a visual and/or graphical indication of the selected color and of its location within color option indicator 1036. In some embodiments, transitioning from displaying editing user interface 1026c1 to displaying editing user interface 1026c2 includes displaying an animation illustrating the colors included in color option indicator 1036 moving such that the newly selected color (e.g., "green" instead of "black") is displayed within the visual indication included in selected color 1036b.
Editing user interface 1026c2 also includes representation 1016c4, which indicates that the watch user interface currently being edited (i.e., the watch user interface corresponding to representation 1016c4) is watch user interface 1020m. In editing user interface 1026c2, representation 1016c4 has been updated such that the background of representation 1016c4, which corresponds to the background of watch user interface 1020m, is displayed in green. Thus, FIG. 10R shows that the background color selected for watch user interface 1020m has been edited. Representation 1016c4 corresponds to watch user interface 1020m and includes features of watch user interface 1020m, including a representation of complex function block 1012a. At FIG. 10R, computer system 1000 detects an input 1050f (e.g., a swipe input) on editing user interface 1026c2.
At FIG. 10S, in response to receiving input 1050f, computer system 1000 displays editing user interface 1026d1. Editing user interface 1026d1 includes an aspect indicator 1028d that includes a visual and/or textual representation of the aspect of watch user interface 1020m currently selected for editing. At FIG. 10S, aspect indicator 1028d indicates that the aspect of watch user interface 1020m currently selected for editing is "complex function block."
Editing user interface 1026d1 also includes representation 1016c5, which indicates that the watch user interface currently being edited (i.e., the watch user interface corresponding to representation 1016c5) is watch user interface 1020m. At FIG. 10S, computer system 1000 detects input 1050g (e.g., a tap input) on a portion of representation 1016c5 that corresponds to complex function block 1012a of watch user interface 1020m.
At FIG. 10T, in response to detecting tap input 1050g, computer system 1000 displays an editing user interface 1026d2 that includes a plurality of selectable complex function block options to be displayed with watch user interface 1020m. In some embodiments, the selectable complex function blocks are sorted into a plurality of categories based on associated features and/or applications associated with the selectable complex function blocks. Editing user interface 1026d2 includes a category 1038a that includes a visual and/or textual indication that the complex function blocks under category 1038a are related to "weather." Editing user interface 1026d2 also includes a category 1038b that includes a visual and/or textual indication that the complex function blocks under category 1038b are related to "noise." In some embodiments, a category includes multiple complex function blocks, in which case the multiple complex function blocks associated with a given category are displayed below the textual and/or visual indication associated with that category. In some implementations, editing user interface 1026d2 is initially displayed centered on, and/or with a selection focus on, the complex function block that was selected in the previous user interface (e.g., editing user interface 1026d1). In some implementations, computer system 1000 navigates from one complex function block option to another (e.g., moves the selection focus) by scrolling via swipe inputs on editing user interface 1026d2 and/or rotational inputs via rotatable and depressible input mechanism 1004. Editing user interface 1026d2 also includes a cancel user-interactive graphical user interface object 1042 that, when selected, causes computer system 1000 to cease displaying editing user interface 1026d2 and display editing user interface 1026d1. Editing user interface 1026d2 also includes complex function blocks 1012a and 1012b, where selecting a complex function block corresponds to selecting the corresponding complex function block for display within watch user interface 1020m. At FIG. 10T, computer system 1000 receives input 1050h (e.g., a tap input) on complex function block 1012b.
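The categorized picker described above can be modeled as a flat list of options grouped by category, with a selection focus that moves in response to swipe or crown input. The Swift sketch below is a hypothetical illustration; the types and sample categories are assumptions, not taken from the patent.

```swift
import Foundation

struct ComplicationOption: Identifiable {
    let id: String
    let category: String   // e.g., "Weather", "Noise"
}

// A minimal sketch of the picker state in editing user interface 1026d2.
struct ComplicationPicker {
    let options: [ComplicationOption]   // flattened list, grouped for display
    var focusedIndex: Int               // current selection focus

    /// Options grouped under their category headers.
    var sections: [(category: String, items: [ComplicationOption])] {
        Dictionary(grouping: options, by: \.category)
            .sorted { $0.key < $1.key }
            .map { (category: $0.key, items: $0.value) }
    }

    /// Move the selection focus by `delta` steps (e.g., one step per swipe
    /// or per detent of crown rotation), clamped to the valid range.
    mutating func moveFocus(by delta: Int) {
        focusedIndex = min(max(focusedIndex + delta, 0), options.count - 1)
    }
}
```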
At FIG. 10U, in response to receiving input 1050h, computer system 1000 displays an editing user interface 1026d3 that includes representation 1016c6. Representation 1016c6 is a modified version of representation 1016c5 that includes complex function block 1012b instead of complex function block 1012a. Thus, editing user interface 1026d3 indicates that watch user interface 1020m has been edited in response to input 1050h and that, in response to receiving input 1050h, computer system 1000 has edited representation 1016c6 to include complex function block 1012b. At FIG. 10U, computer system 1000 receives a press input 1070 on rotatable and depressible input mechanism 1004.
At FIG. 10V, in response to receiving press input 1070, computer system 1000 displays a selection user interface 1014b. Selection user interface 1014b substantially matches selection user interface 1014a, but includes representation 1016b2, an updated version of representation 1016b1 that reflects the updates to watch user interface 1020m shown in FIGS. 10O-10U (e.g., the updated background color and the new complex function block 1012b). At FIG. 10V, computer system 1000 detects input 1050i (e.g., a tap input) on representation 1016b2.
At FIG. 10W, in response to receiving input 1050i, computer system 1000 displays watch user interface 1080. Watch user interface 1080 is an edited version of watch user interface 1020m in which complex function block 1012a, corresponding to an air quality index complex function block, has been replaced with complex function block 1012b, corresponding to a noise complex function block. Watch user interface 1080 is also displayed with a background color different from the background color of watch user interface 1020m (e.g., green instead of black, as discussed above with reference to FIGS. 10Q-10R). In some embodiments, other aspects of the watch user interfaces described above can be edited in a similar manner to the process described above.
FIG. 11 is a flow diagram illustrating a method for managing a clock face based on state information of a computer system (e.g., 1000), in accordance with some embodiments. Method 1100 is performed at a computer system (e.g., a smart watch, wearable electronic device, smartphone, desktop computer, laptop computer, or tablet computer) that is in communication with a display generation component (e.g., 1002) (e.g., a display controller or touch-sensitive display system). In some implementations, the computer system is in communication with one or more input devices (e.g., a rotatable input mechanism, a touch-sensitive surface). Some operations in method 1100 are optionally combined, the orders of some operations are optionally changed, and some operations are optionally omitted.
As described below, method 1100 provides an intuitive way to manage a clock face based on state information of a computer system (e.g., 1000). The method reduces the cognitive burden on a user in managing the clock face based on state information of the computer system, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to manage the clock face based on state information of the computer system faster and more efficiently conserves power and increases the time between battery charges.
When the computer system is in a first state, the computer system displays (1102), via the display generation component, a first user interface (e.g., 1020a) (e.g., a clock face, watch user interface, wake screen, dial, or lock screen) that includes an analog dial (e.g., a 12-hour dial or 24-hour dial). Displaying the analog dial while the computer system is in the first state includes concurrently displaying: a time indicator (e.g., 1008 as shown in FIG. 10A) on the analog dial that indicates the current time (e.g., the time of day; the time in the current time zone) (1104) (e.g., an hour hand, or hour and minute hands); and hour indicators (e.g., 1006a-1006l as shown in FIG. 10A) (1106) (e.g., a plurality of numbers corresponding to the hours of a day) displayed around the analog dial, wherein the hour indicators include an indicator of a first hour (e.g., 1006j as shown in FIG. 10A) displayed at a first size (e.g., a width stroke and/or font size corresponding to the first size) and an indicator of a second hour (e.g., 1006i as shown in FIG. 10A) displayed at a second size (e.g., a width stroke and/or font size corresponding to the second size) different from the first size. In some embodiments, the time indicator is updated continuously or periodically over time to reflect the time of day. In some embodiments, the time indicator is coordinated with, and/or intended to reflect, coordinated universal time with an offset based on the currently selected time zone. In some implementations, displaying the hour indicators along an outer edge of the user interface includes displaying the hour indicators along an edge of the touch-sensitive display.
After displaying the analog dial with the indicator of the first hour (e.g., 1006j as shown in FIG. 10A) displayed at the first size and the indicator of the second hour (e.g., 1006i as shown in FIG. 10A) displayed at the second size, the computer system detects (1108) (e.g., determines) a request to display the analog dial while the computer system is in a second state different from the first state (e.g., in response to detecting a change in state of the computer system from the first state to the second state) (e.g., a change in the current time (e.g., an hour change of the current time, a minute change of the current time, a second change of the current time), or a change in state of the computer system due to a detected user input, where the computer system displays/provides a response to the user input and/or performs an operation due to the user input).
In response to detecting the change in state (e.g., from the first state to the second state) of the computer system (e.g., 1000), the computer system displays (1110) the first user interface updated to reflect the second state (e.g., as shown in FIG. 10B), including displaying the analog dial. Displaying the analog dial while the computer system is in the second state includes concurrently displaying: a time indicator (e.g., 1008 as shown in FIG. 10B) on the analog dial that indicates the current time (1112) (e.g., an hour hand, or hour and minute hands); and hour indicators (e.g., 1006a-1006l as shown in FIG. 10B) (1114) (e.g., a plurality of numbers corresponding to the hours of a day) displayed around the analog dial, wherein the hour indicators include the indicator of the first hour (e.g., 1006j as shown in FIG. 10B) displayed at a third size (e.g., a width stroke and/or font size corresponding to the third size) different from the first size and the indicator of the second hour (e.g., 1006i as shown in FIG. 10B) displayed at a fourth size (e.g., a width stroke and/or font size corresponding to the fourth size) different from the second size. In some embodiments, the third size is different from the second size and the fourth size. In some embodiments, the third size is the same as the second size or the fourth size. In some embodiments, the fourth size is the same as the first size. In some embodiments, the fourth size is different from the first size and the third size. Displaying the first user interface including the indicator of the first hour displayed at the first size and the indicator of the second hour displayed at the second size while the computer system is in the first state (e.g., as shown in FIG. 10A), and displaying the first user interface including the indicator of the first hour displayed at the third size and the indicator of the second hour displayed at the fourth size while the computer system is in the second state (e.g., as shown in FIG. 10B), provides visual feedback to a user about the current time based on the sizes of the displayed hour indicators and improves the visibility of the current hour on a small user interface (e.g., where the current hour is displayed at a larger size than a different hour). Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, while displaying the first user interface, the computer system (e.g., 1000) updates the indicator of the first hour (e.g., 1006j as shown in FIG. 10A) from being displayed at the first size (e.g., a width stroke and/or font size corresponding to the first size) to being displayed at the third size (e.g., as shown in FIG. 10B) (e.g., a width stroke and/or font size corresponding to the third size). In some embodiments, while displaying the first user interface, the computer system updates the indicator of the second hour (e.g., 1006i as shown in FIG. 10A) from being displayed at the second size (e.g., a width stroke and/or font size corresponding to the second size) to being displayed at the fourth size (e.g., as shown in FIG. 10B) (e.g., a width stroke and/or font size corresponding to the fourth size). In some embodiments, the transition from displaying the indicator of the first hour at the first size to displaying it at the third size, and from displaying the indicator of the second hour at the second size to displaying it at the fourth size, occurs in response to a user input (e.g., a rotational user input) (e.g., as shown in FIGS. 10G-10H). In some embodiments, this transition occurs in accordance with a determination that the computer system is in an active state (e.g., a higher power state) (e.g., as shown in FIGS. 10A-10B). Updating the sizes of the displayed hour indicators while they are displayed provides visual feedback that the time has changed (e.g., from the first hour to the second hour). Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the request to display the analog dial while the device is in the second state includes a user input (e.g., as shown in FIGS. 10A-10B) (e.g., a touch input, a tap (e.g., 1050a), or a wrist-raise gesture). Updating the display in response to the user input provides visual feedback that the user input was received and that the computer system is awake and/or ready for further user input. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the change in state of the computer system (e.g., 1000) includes a change in the current time (e.g., as shown in FIGS. 10A-10B) (e.g., an hour change of the current time, a minute change of the current time, or a second change of the current time). Conditionally changing the size of an hour indicator displayed on the first user interface based on a change in the current time provides visual feedback to the user that the current time has changed without requiring the user to provide further input. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the computer system detects a second user input (e.g., as shown in FIG. 10C) (e.g., 1050a) (e.g., a touch input on a touch-sensitive display, a tap, or a wrist-raise gesture). In some embodiments, in response to detecting the second user input, the computer system updates the indicator of the first hour (e.g., 1006f as shown in FIG. 10D) to be displayed at a fifth size different from the first size. In some embodiments, tapping on a particular hour indicator temporarily causes that hour indicator to be displayed at an increased size. In some embodiments, displaying the indicator of the first hour at the fifth size includes displaying an animation of the indicator of the first hour growing to the fifth size. In some embodiments, the change in the size of the indicator of the first hour is temporary. In some embodiments, displaying the indicator of the first hour at the fifth size corresponds to displaying an animation of the indicator of the first hour growing to the fifth size and then shrinking back to the first size (e.g., as shown in FIGS. 10C-10F). Updating the indicator of the first hour to be displayed at a fifth size different from the first size in response to detecting the second user input provides visual feedback that the user input was received. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the computer system sequentially animates the hour indicators displayed around the dial (e.g., 1006a-1006l as shown in FIGS. 10G-10J), wherein animating an hour indicator includes: displaying the hour indicator at an initial size (e.g., 1006j); updating the hour indicator to be displayed at an enlarged size different from the initial size; and, after updating the hour indicator to be displayed at the enlarged size, reducing the size of the hour indicator (e.g., updating the hour indicator to be displayed at the initial size). In some embodiments, sequentially animating the hour indicators displayed around the dial includes animating the hour indicators around the edge of the dial in numerical order. In some implementations, sequentially animating the hour indicators displayed around the dial includes displaying animations of at least two hour indicators that overlap (e.g., the animations are staggered) such that one hour indicator is growing while another hour indicator is shrinking. In some embodiments, sequentially animating the hour indicators displayed around the dial ends when each of the hour indicators displayed around the dial has been animated. Sequentially animating the hour indicators displayed around the dial in this manner provides visual feedback that the hour indicators are responsive, non-static graphical elements that are updated in response to changes in context and/or inputs. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the time indicator (e.g., 1008 as shown in FIG. 10A) includes a plurality of clock hands. In some embodiments, the computer system (e.g., 1000) updates the positions of the clock hands relative to the analog dial over time (e.g., automatically) to indicate the current time (e.g., the time of day; the time in the current time zone) (e.g., as shown in FIGS. 10A-10B). Displaying a plurality of clock hands whose positions are updated to indicate the current time provides visual feedback about the current time indicated by the first user interface. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some implementations, the computer system (e.g., 1000) displays the hour indicators at a plurality of different sizes. In some embodiments, the computer system displays the hour indicator corresponding to the current time at the largest of the plurality of different sizes (e.g., 1006j as shown in FIG. 10A). In some embodiments, the sizes at which the hour indicators are displayed gradually decrease around the analog dial (e.g., clockwise or counter-clockwise, in numerical order or in reverse numerical order). In some embodiments, the hour indicator corresponding to the hour after the current time (e.g., 1006k as shown in FIG. 10A) is displayed at the smallest size. In some embodiments, the hour indicator corresponding to the hour before the current time is displayed at the second-largest size (e.g., 1006i as shown in FIG. 10A). Displaying the hour indicators at a plurality of different sizes, with the hour indicator corresponding to the current time displayed at the largest size, provides visual feedback about the relevance of the current hour indicator relative to the other hour indicators based on their relatively smaller sizes. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
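The size layout described above lends itself to a simple function of the circular distance from the current hour. The following Swift sketch is one hypothetical way to express it; the function name and the linear taper are assumptions, not taken from the patent.

```swift
import CoreGraphics

// A minimal sketch: the current hour is largest, the previous hour is second
// largest, and sizes taper so that the hour after the current hour is smallest.
func indicatorSizes(currentHour: Int,        // 1...12
                    maxSize: CGFloat,
                    minSize: CGFloat) -> [Int: CGFloat] {
    var sizes: [Int: CGFloat] = [:]
    let step = (maxSize - minSize) / 11      // 12 positions, 11 steps
    for hour in 1...12 {
        // Steps back from the current hour: 0 -> current hour (largest),
        // 1 -> previous hour (second largest), 11 -> next hour (smallest).
        let stepsBack = (currentHour - hour + 12) % 12
        sizes[hour] = maxSize - CGFloat(stepsBack) * step
    }
    return sizes
}
```

For a current time of 12:09, indicatorSizes(currentHour: 12, maxSize: ..., minSize: ...) assigns the largest size to "12", the second-largest to "11", and the smallest to "1", matching the arrangement described for watch user interface 1020g in FIG. 10G.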
In some implementations, the computer system (e.g., 1000) is in communication with a rotatable input mechanism (e.g., 1004) (e.g., a rotatable input device). In some embodiments, the computer system detects a rotational input (e.g., 1060a) via the rotatable input mechanism (e.g., rotation of the rotatable and depressible input mechanism about an axis of rotation) (e.g., a clockwise rotational input or a counter-clockwise rotational input) (in some embodiments, a non-rotational input (e.g., a flick gesture, a swipe gesture, and/or a mouse click)). In some embodiments, in response to detecting the rotational input, the computer system temporarily increases the size at which at least one hour indicator is displayed (e.g., 1006a as shown in FIGS. 10G-10H). In some implementations, increasing the size at which an hour indicator is displayed corresponds to displaying an animation of the hour indicator growing to a larger size. In some embodiments, the change in the size of the hour indicator is temporary. In some embodiments, displaying the hour indicator at the enlarged size includes displaying an animation of the hour indicator at the enlarged size and then shrinking the hour indicator back to its previous size. In some embodiments, in response to detecting the rotational input, the computer system temporarily decreases the size at which at least one hour indicator is displayed (e.g., 1006e as shown in FIGS. 10G-10H). In some implementations, in response to detecting the rotational input, the computer system temporarily decreases the sizes at which multiple hour indicators are displayed. In some embodiments, in response to detecting the rotational input, the computer system temporarily increases the sizes at which multiple hour indicators are displayed. Temporarily increasing the size at which an hour indicator is displayed in response to detecting the rotational input provides visual feedback that the rotational input was received. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, in response to detecting the rotational input (e.g., 1060a), the computer system (e.g., 1000) temporarily increases the sizes at which multiple hour indicators are displayed (e.g., 1006a-1006l as shown in FIGS. 10G-10J). In some embodiments, temporarily increasing the sizes at which the multiple hour indicators are displayed includes sequentially animating the hour indicators displayed around the dial. In some embodiments, animating the hour indicators includes, for a plurality of different hour indicators: displaying the hour indicator at a respective initial size (e.g., 1006a as shown in FIG. 10G); updating the hour indicator to be displayed at a respective enlarged size different from the respective initial size (e.g., 1006a as shown in FIG. 10H); and, after updating the hour indicator to be displayed at the respective enlarged size, updating the hour indicator to be displayed at the respective initial size (e.g., 1006a as shown in FIG. 10G). In some embodiments, temporarily increasing the sizes at which the multiple hour indicators are displayed includes animating the hour indicators displayed around the analog dial once per hour indicator (e.g., as shown in FIGS. 10G-10J) (e.g., clockwise or counter-clockwise, in numerical order or in reverse numerical order) (e.g., starting with the hour indicator corresponding to the current time). In some embodiments, if the current time is 12:00 pm, temporarily increasing the sizes at which the multiple hour indicators are displayed includes: i) displaying the 1 number at an initial size (e.g., 1006a as shown in FIG. 10G), then displaying the 1 number at an enlarged size (e.g., 1006a as shown in FIG. 10H), then displaying the 1 number at the initial size (e.g., 1006a as shown in FIG. 10G); ii) displaying the 2 number at an initial size (e.g., 1006b as shown in FIG. 10G), then displaying the 2 number at an enlarged size (e.g., 1006b as shown in FIG. 10I), then displaying the 2 number at the initial size (e.g., 1006b as shown in FIG. 10G); iii) displaying the 3 number at an initial size (e.g., 1006c as shown in FIG. 10G), then displaying the 3 number at an enlarged size (e.g., 1006c as shown in FIG. 10J), then displaying the 3 number at the initial size (e.g., 1006c as shown in FIG. 10G); and so on (e.g., in numerical order) around the dial, until xii) the 12 number is displayed back at its initial size (e.g., 1006l as shown in FIG. 10G). In some embodiments, the next number (e.g., the number after the current number) is displayed growing (e.g., from the initial size to the enlarged size) while the current hour number is displayed shrinking (e.g., returning from the enlarged size to the initial size). Sequentially animating the hour indicators around the dial in response to detecting the rotational input, so that the sizes at which the hour indicators are displayed increase in sequence, provides visual feedback that the rotational input was received. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
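The staggered grow-and-shrink timing described above (one number shrinking while the next grows) can be sketched as a time-offset animation curve per hour indicator. The Swift code below is a hypothetical illustration with assumed names and a linear ramp, not the actual implementation.

```swift
import Foundation
import CoreGraphics

// A minimal sketch: each indicator grows and shrinks over
// `perIndicatorDuration`, and successive indicators start `stagger` seconds
// apart, so the next number grows while the current number shrinks.
struct StaggeredDialAnimation {
    let initialSize: CGFloat
    let enlargedSize: CGFloat
    let perIndicatorDuration: TimeInterval   // grow + shrink for one number
    let stagger: TimeInterval                // offset between successive numbers

    /// Size of the indicator at sequence position `index` (0 = the starting
    /// hour) at time `t` seconds since the animation began.
    func size(at index: Int, time t: TimeInterval) -> CGFloat {
        let local = t - Double(index) * stagger          // this number's clock
        guard local > 0, local < perIndicatorDuration else { return initialSize }
        let half = perIndicatorDuration / 2
        // Linear ramp up for the first half, back down for the second half.
        let progress = local < half ? local / half : (perIndicatorDuration - local) / half
        return initialSize + (enlargedSize - initialSize) * CGFloat(progress)
    }
}
```

Setting stagger equal to half of perIndicatorDuration makes each number begin growing exactly when its predecessor begins shrinking, which is the overlap described above.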
In some embodiments, as part of temporarily increasing the sizes at which the hour indicators are displayed, the computer system (e.g., 1000) sequentially increases the sizes at which the hour indicators are displayed, starting with an initial one of the hour indicators (e.g., 1006a as shown in FIG. 10G) and proceeding in a respective order around the analog dial. In some implementations, in accordance with a determination that the rotational input is in a first direction (e.g., clockwise), the respective order is a clockwise order around the analog dial. In some implementations, in accordance with a determination that the rotational input is in a second direction (e.g., counter-clockwise) different from the first direction, the respective order is a counter-clockwise order around the analog dial. In some implementations, the initial hour indicator is the hour indicator corresponding to the current time (e.g., as indicated by time indication 1008 in FIG. 10G) (e.g., the current hour). Temporarily increasing the sizes at which the hour indicators are displayed by traversing around the analog dial in a particular direction, in accordance with a determination that the rotational input is in a given direction, provides visual feedback that the rotational input was received and that it was in that particular direction. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, when the computer system is in a higher power state (e.g., an active state, an on state, or a normal (e.g., non-low-power) mode), the computer system (e.g., 1000) displays the hour indicators in different width strokes (e.g., 1006a-1006l as shown in FIG. 10A). In some embodiments, each hour indicator is displayed in a different width stroke. In some embodiments, the hour indicator corresponding to the current hour is displayed with the thickest width stroke, and the hour indicator corresponding to the upcoming hour (e.g., the next hour) is displayed with the thinnest width stroke. Displaying the hour indicators in strokes of different widths provides visual feedback about the relative importance of the different hour indicators. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the time indicator (e.g., 1008 as shown in FIG. 10A) includes a plurality of clock hands. In some embodiments, the clock hands are displayed in a first width stroke. In some embodiments, the hour indicator corresponding to the current time is displayed in the first width stroke. Displaying the clock hands in the same width stroke as the hour indicator corresponding to the current time provides visual feedback that the current time indicated by the plurality of clock hands corresponds to that hour indicator. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the computer system (e.g., 1000) transitions to a low power state (e.g., an off state, a sleep state, a low power mode, a battery saver mode, or an economy mode) (e.g., as shown in FIGS. 10K-10L). In some embodiments, the computer system displays the hour indicators in reduced-width strokes (e.g., 1006a-1006l as shown in FIG. 10K) while the computer system is in the low power state. In some embodiments, the width of the reduced-width stroke is the same as the finest-weight width stroke within the range of width strokes. Displaying the hour indicators in reduced-width strokes while the computer system is in the low power state provides visual feedback that the computer system is in the low power state. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some implementations, the computer system (e.g., 1000) transitions to a low power state (e.g., an off state, a sleep state, a low power mode, a battery saver mode, or an economy mode). In some embodiments, when the computer system is in the low power state (e.g., as shown in FIGS. 10K-10L), the computer system displays all of the hour indicators in the same width stroke. In some implementations, when the computer system is in the low power state, all of the hour indicators (e.g., 1006a-1006l as shown in FIG. 10L) are displayed in the width stroke (e.g., the finest width stroke) in which the upcoming hour (e.g., the next hour) is displayed when the computer system is in a higher power state. Displaying the hour indicators in the same width stroke when the computer system is in the low power state provides visual feedback that the computer system is in the low power state, enabling the user to quickly and effectively distinguish at a glance whether the computer system is in the low power state or the higher power state. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the time indicator (e.g., 1008 as shown in FIG. 10A) includes a plurality of clock hands. In some implementations, when the computer system (e.g., 1000) is in a higher power state (e.g., an active state, an on state, or a normal (e.g., non-low-power) mode), the computer system displays the clock hands in a second width stroke (e.g., 1008 as shown in FIG. 10A). In some implementations, the computer system transitions to a low power state (e.g., an off state, a sleep state, a low power mode, a battery saver mode, or an economy mode). In some embodiments, the computer system displays the clock hands in a third width stroke while the computer system is in the low power state (e.g., 1008 as shown in FIG. 10K). In some embodiments, the clock hands are displayed as outlines (e.g., 1008 as shown in FIG. 10L). In some embodiments, the third width stroke is thinner than the second width stroke. Displaying the clock hands as outlines when the computer system is in the low power state provides visual feedback that the computer system is in the low power state, enabling a user to quickly and efficiently distinguish at a glance whether the computer system is in the low power state or a higher power state. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, as part of displaying the hour indicators around the analog dial, the computer system displays the hour indicators as outlines in the third width stroke (e.g., 1006a-1006l as shown in FIG. 10L). In some implementations, the outlines corresponding to the clock hands have the same thickness as the outlines corresponding to the hour indicators. Displaying the clock hands and hour indicators as outlines when the computer system is in the low power state enables a user to quickly and efficiently distinguish at a glance whether the computer system is in the low power state or a higher power state. Further, displaying the hour indicators and the clock hands as outlines provides visual feedback that the clock hands and the hour indicators, both displayed as outlines, collectively indicate the time. Providing improved visual feedback to the user enhances the operability of the system and makes the computer system more efficient (e.g., by helping the user provide proper inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
In some implementations, the computer system (e.g., 1000) is in communication with one or more input devices (e.g., a display controller, a touch-sensitive display system). In some embodiments, the user interface includes at least a first complex function block (e.g., 1012a as shown in FIG. 10A). In some implementations, a complex function block refers to any clock face feature other than those used to indicate the hours and minutes of a time (e.g., clock hands or hour/minute indications). In some implementations, complex function blocks provide data obtained from applications. In some embodiments, a complex function block includes an affordance that, when selected, launches a corresponding application. In some implementations, complex function blocks are displayed at fixed, predefined locations on the display. In some implementations, complex function blocks occupy respective positions at particular areas of the dial (e.g., lower right, lower left, upper right, and/or upper left). In some embodiments, the computer system displays the first complex function block and the hour indicators in a first color. In some embodiments, the computer system displays, via the display generation component, an editing user interface (e.g., 1026b as shown in FIG. 10O) for editing the user interface. In some implementations, while displaying the editing user interface, the computer system receives a first sequence of one or more user inputs (e.g., touch inputs, rotational inputs, press inputs) via the one or more input devices. In some embodiments, in response to receiving the first sequence of one or more user inputs, the computer system changes the color of the user interface. In some embodiments, after changing the color of the user interface, the computer system displays, via the display generation component, the first complex function block and the hour indicators in a second color different from the first color. In some embodiments, the editing user interface includes an option for editing the first complex function block. In some embodiments, the editing user interface includes an option for replacing the first complex function block with a second complex function block different from the first complex function block. In some implementations, replacing the first complex function block (e.g., 1012a as shown in FIG. 10A) with the second complex function block (e.g., 1012b as shown in FIG. 10W) includes displaying the second complex function block at the location where the first complex function block was previously displayed. In some embodiments, the editing user interface includes options for displaying the hour indicators in different formats. In some implementations, the editing user interface includes options for displaying the hour indicators with different font characteristics (e.g., a rounded tail shape, a flat tail shape). Editing the color in which the complex function block and the hour numbers are displayed in response to receiving the first sequence of one or more user inputs while displaying the editing user interface reduces the number of inputs required to edit that color.
Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
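One way to realize the shared-color behavior described above is to drive both the complex function block tint and the hour indicator tint from a single theme value, so that one edit recolors both. The SwiftUI sketch below is a hypothetical illustration (FaceTheme and the placeholder complication text are assumptions, and dial layout is omitted for brevity), not the actual implementation.

```swift
import SwiftUI

struct FaceTheme {
    var faceColor: Color   // the single edited color
}

// A minimal sketch: hour numbers and the complication share the theme color.
struct HourDialView: View {
    var theme: FaceTheme
    var body: some View {
        ZStack {
            ForEach(1...12, id: \.self) { hour in
                // Positioning around the dial omitted for brevity.
                Text("\(hour)").foregroundStyle(theme.faceColor)
            }
            Text("AQI 42")   // placeholder complex function block
                .foregroundStyle(theme.faceColor)
        }
    }
}
```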
In some embodiments, a computer system (e.g., 1000) displays, via a display generation component (e.g., 1002), an editing user interface (e.g., 1014a) for editing the user interface. In some embodiments, upon displaying the editing user interface (e.g., 1026c1 as shown in fig. 10Q), the computer system receives a second sequence of one or more user inputs (e.g., 1060c as shown in fig. 10Q) (e.g., touch inputs, rotation inputs, press inputs) via one or more input devices. In some embodiments, in response to receiving the second sequence of one or more inputs and in accordance with a determination that the second sequence of one or more user inputs corresponds to a request to display a background with color fill, the computer system displays, via the display generation component, a user interface (e.g., 1026c2 as shown in fig. 10R) in which a background portion of the user interface is filled with color. In some embodiments, in response to receiving the second sequence of one or more inputs and in accordance with a determination that the second sequence of one or more user inputs corresponds to a request to display a background that is not color-filled, the computer system displays, via the display generation component, a user interface in which a background portion of the user interface is not filled with color. In some implementations, displaying the user interface with the background portion of the user interface not filled with color includes displaying the background in a default background color (e.g., black). Editing the color-fill selection based on a determination as to whether the second sequence of one or more user inputs corresponds to a request to display a background with color fill reduces the number of inputs required to edit the color-fill selection. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that the second sequence of one or more user inputs corresponds to a request to display a background with color filling, the computer system (e.g., 1000) displays, via a display generation component (e.g., 1002), a user interface (e.g., 1080 as shown in fig. 10W) in which the hour indicators (e.g., 1006a-1006l as shown in fig. 10W) are displayed at a first distance from an outer edge of a display area of the display generation component. In some embodiments, in accordance with a determination that the second sequence of one or more user inputs corresponds to a request to display a background without color filling (e.g., 1020 as shown in fig. 10A), the computer system displays the user interface via the display generation component, wherein the hour indicators (e.g., 1006a-1006l as shown in fig. 10A) are displayed at a second distance from the outer edge of the display area of the display generation component. In some embodiments, the second distance is different from the first distance. In some embodiments, the first distance is greater than the second distance. Conditionally displaying the user interface at different distances from the edge of the display, based on whether the sequence of one or more user inputs corresponds to a request to display a background with color filling, provides an improved visual display experience to the user: the color fill can be displayed at the edge of the display, and the hour numerals can be displayed in that same location (at the edge of the display) when no color fill is selected, without requiring the user to provide further input specifying the distance from the edge of the display at which the user interface should be displayed. Performing an operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
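The conditional layout described above can be sketched as a single function. In this Swift sketch, the function name and the concrete distances are assumptions chosen for illustration; the description only requires that the first distance (with color fill) be greater than the second (without).

    func hourIndicatorInset(backgroundIsColorFilled: Bool) -> Double {
        // First distance: inset from the display edge so the color fill
        // remains visible at the edge. Second distance: flush with the edge.
        return backgroundIsColorFilled ? 8.0 : 0.0  // values are illustrative only
    }

    // Usage: the inset is derived from the color-fill selection, with no
    // further input required from the user.
    let inset = hourIndicatorInset(backgroundIsColorFilled: true)  // 8.0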
It is noted that the details of the process described above with respect to method 1100 (e.g., fig. 11) also apply in a similar manner to the methods described herein. For example, method 1100 optionally includes one or more of the features of the various methods described herein with reference to method 700, method 900, and method 1300. For example, method 1100 optionally includes one or more of the features of the various methods described above with reference to method 700. For example, the device may use, as a watch user interface, a user interface including hour numerals displayed in strokes of different widths as described with reference to fig. 10A-10W, or a watch user interface based on a media item including depth data as described with reference to fig. 6A-6U. As another example, a watch user interface as described with reference to fig. 10A-10W may include a plurality of city names oriented based on a current location of the computer system, as described above with reference to fig. 8A-8M. As another example, method 1100 optionally includes one or more of the features of the various methods described below with reference to method 1300. For example, the watch user interfaces of fig. 10A-10W may be created or edited via the process for updating and selecting watch user interfaces described with reference to fig. 12A-12W. As another example, method 1100 optionally includes one or more of the features of the various methods described below with reference to method 1500. For example, the watch user interface described with reference to fig. 10A-10W may be configured and/or edited via computer system 1400 before being added to computer system 1000. For the sake of brevity, these details are not repeated below.
Fig. 12A-12W illustrate exemplary user interfaces for selecting and displaying user interfaces. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 13.
Fig. 12A shows computer system 1200 displaying a watch user interface 1206 via display 1202. The computer system 1200 includes a rotatable and depressible input mechanism 1204. In some embodiments, computer system 1200 optionally includes one or more features of device 100, device 300, or device 500. In some embodiments, computer system 1200 is a tablet, phone, laptop, desktop, camera, or the like. In some implementations, the inputs described below can optionally be replaced with alternative inputs, such as a press input and/or a rotation input received via the rotatable and depressible input mechanism 1204.
In some embodiments, computer system 1200 has access to a plurality of watch user interfaces that can be selected and/or displayed by computer system 1200 via display 1202. In fig. 12A, computer system 1200 displays a watch user interface 1206. The watch user interface 1206 includes a graphical element 1206a indicating information about the activity level and a time indication 1208 including a set of clock hands indicating the current time (e.g., current hours and minutes).
At fig. 12A, computer system 1200 detects an input 1250a (e.g., a long press) on watch user interface 1206. At fig. 12B, in response to detecting input 1250a, computer system 1200 displays a selection user interface 1210a. The selection user interface 1210a is a user interface for selecting a watch user interface to be displayed by the computer system 1200. Selection user interface 1210a includes a representation 1214b, which is a representation of watch user interface 1206 and includes various features of watch user interface 1206. In some embodiments, the representation 1214b is a static representation of the watch user interface 1206 and includes an indication of a time other than the current time and/or complex function blocks containing information other than real-time data.
Selection user interface 1210a also includes a partial view of representation 1214a corresponding to a watch user interface other than watch user interface 1206, and a partial view of representation 1214c corresponding to a watch user interface for generating and/or obtaining a new watch user interface for display via computer system 1200.
Selection user interface 1210a also includes a selection focus indicator 1212. Selection focus indicator 1212 is a graphical indicator of the element displayed within the current user interface that currently has focus for selection. In some implementations, a press gesture received via the rotatable and depressible input mechanism 1204 results in selection of the element of the currently displayed user interface that has the selection focus. In some implementations, the selection focus indicator 1212 provides a visual indication of which element of the currently displayed user interface has the selection focus, thereby providing improved visual feedback as to which element of the currently displayed user interface will be selected in response to a press input received via the rotatable and depressible input mechanism 1204 at a given point in time.
Selection user interface 1210a also includes a shared user-interactive graphical user interface object 1216 that, when selected, causes computer system 1200 to display a user interface related to transmitting and/or sharing information about watch user interface 1206 to another device (e.g., another computer system). The selection user interface 1210a also includes an editing user-interactive graphical user interface object 1218 that, when selected, causes the computer system 1200 to display an editing user interface for editing aspects of the watch user interface 1206. The selection user interface 1210a also includes a dial indicator 1220a that includes a visual and/or textual indication of the name of the watch user interface currently centered in the selection user interface 1210a. At fig. 12B, dial indicator 1220a indicates that the currently indicated watch user interface 1206, which is represented by representation 1214b in selection user interface 1210a, is titled "active simulation".
At fig. 12B, computer system 1200 detects swipe input 1250b1 on selection user interface 1210a in a first direction, and swipe input 1250b2 on selection user interface 1210a in a second direction different from the first direction. At fig. 12B, computer system 1200 also detects rotational input 1260a1 via rotatable and depressible input mechanism 1204 in a first direction (e.g., clockwise about an axis of rotation) and rotational input 1260a2 via rotatable and depressible input mechanism 1204 in a second direction (e.g., counterclockwise about the axis of rotation) different from the first direction.
At fig. 12C, computer system 1200 displays selection user interface 1210b in response to receiving rotation input 1260a2 or in response to receiving swipe input 1250b2. Selection user interface 1210b includes representation 1214a, which is a representation of watch user interface 1226 and includes various features of watch user interface 1226. In some embodiments, representation 1214a is a static representation of watch user interface 1226 and includes an indication of a time other than the current time and/or complex function blocks containing information other than real-time data. For example, representation 1214a includes complex function block 1224a and complex function block 1224b, but in some embodiments, the data displayed by complex function block 1224a and complex function block 1224b in representation 1214a is not current and/or accurate. Selection user interface 1210b also includes a partial view of representation 1214d corresponding to a watch user interface other than watch user interface 1226, and a partial view of representation 1214b corresponding to watch user interface 1206.
The selection user interface 1210b also includes a sharing user-interactive graphical user interface object 1216 that, when selected, causes the computer system 1200 to display a user interface related to transmitting and/or sharing information about the watch user interface 1226 to another device (e.g., another computer system). Selection user interface 1210b also includes an editing user-interactive graphical user interface object 1218 that, when selected, causes computer system 1200 to display an editing user interface for editing aspects of watch user interface 1226. The selection user interface 1210b also includes a dial indicator 1220b that includes a visual and/or textual indication of the name of the watch user interface currently centered in the selection user interface 1210b. At fig. 12C, dial indicator 1220b indicates that the currently indicated watch user interface 1226, which is represented by representation 1214a in selection user interface 1210b, is titled "information graph module".
At fig. 12C, computer system 1200 detects a tap input 1250c on representation 1214a. At fig. 12C, computer system 1200 also detects a press input 1270a on rotatable and pressable input mechanism 1204.
At fig. 12D, in response to receiving tap input 1250c or in response to receiving press input 1270a, computer system 1200 displays watch user interface 1226. Watch user interface 1226 includes a time indication 1222 indicating the current time (e.g., the current hour and/or minute). The watch user interface 1226 also includes a plurality of complex function blocks, including complex function block 1224a and complex function block 1224b. In some implementations, complex function blocks 1224a and 1224b include information from applications available on computer system 1200 (e.g., installed on the computer system). In some implementations, complex function blocks 1224a and 1224b are updated in accordance with the passage of time to display updated information. In some implementations, selecting complex function block 1224a (e.g., via a tap) causes computer system 1200 to launch an application corresponding to complex function block 1224a. In some implementations, selecting complex function block 1224b (e.g., via a tap) causes computer system 1200 to launch an application corresponding to complex function block 1224b.
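The complication behavior described above (showing application data, refreshing as time passes, and launching the owning application when tapped) can be modeled with a small Swift sketch; all names here are hypothetical.

    import Foundation

    struct Complication {
        let appIdentifier: String            // application the complication represents
        var displayedText: String            // information currently shown
        var refresh: (Date) -> String        // recomputes the information for a new time

        // Updated in accordance with the passage of time.
        mutating func timeDidChange(to now: Date) {
            displayedText = refresh(now)
        }

        // A tap on the complication launches the corresponding application.
        func handleTap(launchApp: (String) -> Void) {
            launchApp(appIdentifier)
        }
    }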
At fig. 12E, computer system 1200 displays selection user interface 1210c in response to receiving rotation input 1260a1 or in response to receiving swipe input 1250b1. Selection user interface 1210c is a user interface for generating and/or obtaining a new watch user interface for display via computer system 1200. Selection user interface 1210c includes a dial indicator 1220c, which includes a visual and/or textual indication that selection user interface 1210c is titled "add dial". Selection user interface 1210c also includes add user-interactive graphical user interface object 1228, which, when selected, causes computer system 1200 to display a user interface for generating and/or obtaining a new watch user interface for display via computer system 1200. The add user-interactive graphical user interface object 1228 includes an add ("+") sign corresponding to adding a watch user interface to computer system 1200.
At fig. 12E, computer system 1200 detects a tap input 1250d on add user-interactive graphical user interface object 1228. At fig. 12E, computer system 1200 also detects a press input 1270b on rotatable and pressable input mechanism 1204.
At fig. 12F, computer system 1200 displays generating user interface 1232a in response to receiving tap input 1250d or in response to receiving press input 1270b. Generating user interface 1232a includes options for adding various watch user interfaces for display via computer system 1200. The available watch user interfaces are categorized into disks that are displayed in generating user interface 1232a. Generating user interface 1232a includes disk 1230a1, which includes name 1234a1, titled "new dial", indicating disk 1230a1, and image 1236a1, which includes a graphical representation of a watch user interface corresponding to disk 1230a1.
Generating user interface 1232a also includes disk 1230a2, which includes name 1234a2, titled "California", indicating disk 1230a2, and image 1236a2, which includes a graphical representation of a watch user interface corresponding to disk 1230a2. Disk 1230a2 further comprises: a description 1238a1 comprising a textual description of the watch user interface corresponding to disk 1230a2; and add user-interactive graphical user interface object 1240a1, which, when selected, causes computer system 1200 to add a watch user interface corresponding to disk 1230a2 (e.g., download a watch user interface from a remote server) for display via computer system 1200. In fig. 12F, disk 1230a1 is displayed in a first background color, and disk 1230a2 is displayed in a second background color. In some embodiments, the background color of a disk indicates whether the disk corresponds to a single available watch user interface or a collection of available watch user interfaces (e.g., multiple watch user interfaces). For example, in fig. 12F, disk 1230a1 is displayed in a background color indicating that it corresponds to a set of available watch user interfaces, and disk 1230a2 is displayed in a background color indicating that it corresponds to a single available watch user interface.
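One way to picture this background-color convention is the sketch below (the enum and the returned color names are assumptions; the description only says that the background color distinguishes single faces from collections).

    enum DiskContents {
        case singleWatchFace   // e.g., disk 1230a2
        case collection        // e.g., disk 1230a1
    }

    func backgroundColorName(for contents: DiskContents) -> String {
        switch contents {
        case .collection:      return "collection background color"
        case .singleWatchFace: return "single-face background color"
        }
    }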
Generating user interface 1232a also includes disk 1230a3, which includes name 1234a3, titled "active simulation", indicating disk 1230a3, and image 1236a3, which includes a graphical representation of the watch user interface corresponding to disk 1230a3. Disk 1230a3 also includes a description 1238a2 that includes a textual description of the watch user interface corresponding to disk 1230a3. Generating user interface 1232a further includes add user-interactive graphical user interface object 1240a2, which, when selected, causes computer system 1200 to add a watch user interface corresponding to disk 1230a3 (e.g., download a watch user interface from a remote server) for display via computer system 1200.
Generating user interface 1232a further includes a cancel user-interactive graphical user interface object that, when selected, causes the computer system to display selection user interface 1210c. Generating user interface 1232a also includes a search bar 1234 that, when selected, causes computer system 1200 to display a user interface that includes options for searching among a plurality of available watch user interfaces (e.g., by entering letters corresponding to the name or title of a watch user interface using voice and/or touch input).
At fig. 12F, computer system 1200 detects a tap input 1250e on disk 1230a1. At fig. 12F, computer system 1200 also detects a press input 1270c on rotatable and pressable input mechanism 1204 while disk 1230a1 has selection focus. In some embodiments, generating user interface 1232a includes displaying, around disk 1230a1, selection focus indicator 1212, which indicates that disk 1230a1 currently has selection focus.
At fig. 12G, computer system 1200 displays generating user interface 1232b in response to receiving tap input 1250e or in response to receiving press input 1270c. Generating user interface 1232b includes options for adding various watch user interfaces for display via computer system 1200. The available watch user interfaces are categorized into disks that are displayed within generating user interface 1232b. Generating user interface 1232b includes disk 1230b1, which includes name 1234b1, titled "seeker", indicating disk 1230b1, and image 1236b1, which includes a graphical representation of the watch user interface corresponding to disk 1230b1. Disk 1230b1 further comprises: a description 1238b1 comprising a textual description of the watch user interface corresponding to disk 1230b1; and add user-interactive graphical user interface object 1240b1, which, when selected, causes computer system 1200 to add a watch user interface corresponding to disk 1230b1 (e.g., download a watch user interface from a remote server) for display via computer system 1200.
Generating user interface 1232b also includes disk 1230b2, which includes name 1234b2, titled "artist dial", indicating disk 1230b2, and image 1236b2, which includes a graphical representation of a watch user interface corresponding to disk 1230b2. Disk 1230b2 further includes: a description 1238b2 comprising a textual description of the watch user interface corresponding to disk 1230b2; and add user-interactive graphical user interface object 1240b2, which, when selected, causes computer system 1200 to add a watch user interface corresponding to disk 1230b2 (e.g., download a watch user interface from a remote server) for display via computer system 1200.
Generating user interface 1232b also includes disk 1230b3, which includes name 1234b3, titled "active simulation", indicating disk 1230b3, and image 1236b3, which includes a graphical representation of the watch user interface corresponding to disk 1230b3. Disk 1230b3 further includes: a description 1238b3 including a textual description of the watch user interface corresponding to disk 1230b3; and add user-interactive graphical user interface object 1240b3, which, when selected, causes computer system 1200 to add a watch user interface corresponding to disk 1230b3 (e.g., download a watch user interface from a remote server) for display via computer system 1200. Generating user interface 1232b also includes return user-interactive graphical user interface object 1237a, which, when selected, causes computer system 1200 to display generating user interface 1232a. In some embodiments, generating user interface 1232b includes displaying, around disk 1230b1, selection focus indicator 1212, which indicates that disk 1230b1 currently has selection focus.
At fig. 12G, computer system 1200 detects a tap input 1250f on disk 1230b1. At fig. 12G, computer system 1200 also detects a press input 1270d on rotatable and pressable input mechanism 1204 while disk 1230b1 has selection focus.
At fig. 12H, computer system 1200 displays generating user interface 1232c in response to receiving tap input 1250f or in response to receiving press input 1270d. Generating user interface 1232c includes information related to adding a "seeker" dial to computer system 1200. Generating user interface 1232c includes return user-interactive graphical user interface object 1237b, which, when selected, causes computer system 1200 to display generating user interface 1232b. Generating user interface 1232c also includes image 1236b1, which includes a graphical representation of a watch user interface that may be added to computer system 1200. Generating user interface 1232c also includes add dial user-interactive graphical user interface object 1242a, which, when selected, causes computer system 1200 to add the "seeker" watch user interface corresponding to image 1236b1 (e.g., download the watch user interface from a remote server) for display via computer system 1200. Generating user interface 1232c also includes description 1238b1, which includes a textual description of the "seeker" watch user interface currently selected for addition to computer system 1200. Generating user interface 1232c also includes more user-interactive graphical user interface object 1244, which, when selected, causes computer system 1200 to display an additional textual description of the "seeker" watch user interface currently selected for addition to computer system 1200. At fig. 12H, computer system 1200 detects tap input 1250g on more user-interactive graphical user interface object 1244.
At fig. 12I, in response to receiving tap input 1250g, computer system 1200 maintains display of generating user interface 1232c and displays description 1238c, which includes an additional textual description of the "seeker" watch user interface currently selected for addition to computer system 1200.
At fig. 12I, computer system 1200 detects a tap input 1250h on add dial user interactive graphical user interface object 1242 a. At fig. 12I, computer system 1200 also detects a press input 1270e on rotatable and pressable input mechanism 1204.
At fig. 12J, in response to receiving tap input 1250h or in response to receiving press input 1270e, computer system 1200 displays a selection user interface 1210d. At selection user interface 1210d, a "seeker" dial has been added to computer system 1200 and is represented by representation 1214e. The representation 1214e is prominently displayed at the center of selection user interface 1210d, and selection focus indicator 1212 is displayed around it to indicate that representation 1214e currently has selection focus. Representation 1214e is displayed between a partial view (left side) of representation 1214b and a partial view (right side) of representation 1214c.
Selection user interface 1210d also includes a shared user-interactive graphical user interface object 1216 that, when selected, causes computer system 1200 to display a user interface related to transmitting and/or sharing information about watch user interface 1246a to another device (e.g., another computer system). Selection user interface 1210d also includes an editing user-interactive graphical user interface object 1218 that, when selected, causes computer system 1200 to display an editing user interface for editing aspects of watch user interface 1246a. The selection user interface 1210d also includes a dial indicator 1220d that includes a visual and/or textual indication of the name of the watch user interface currently centered in the selection user interface 1210d. At fig. 12J, dial indicator 1220d indicates that the currently indicated watch user interface 1246a, which is represented by representation 1214e in selection user interface 1210d, is titled "seeker".
At fig. 12J, computer system 1200 detects tap input 1250i on representation 1214e. At fig. 12J, computer system 1200 also detects a press input 1270f on rotatable and pressable input mechanism 1204.
At fig. 12K, in response to receiving tap input 1250i or in response to receiving press input 1270f, computer system 1200 displays watch user interface 1246a. Watch user interface 1246a includes: dial 1246a1, which includes a circle of dots representing the hours of the day; a time indication 1246a2, which includes analog clock hands representing the current time (e.g., hours, minutes, and/or seconds); complex function block 1246a3, which represents an application available on computer system 1200 and displays information from the corresponding application; and complex function block 1246a4, which represents an application available on computer system 1200 and displays information from the corresponding application. At fig. 12K, computer system 1200 detects an input 1250j (e.g., a long press) on watch user interface 1246a.
At FIG. 12L, in response to detecting input 1250j, computer system 1200 displays a selection user interface 1210e. The selection user interface 1210e substantially matches the selection user interface 1210d. At fig. 12L, computer system 1200 receives tap input 1250k1 on shared user interactive graphical user interface object 1216 and detects tap input 1250k2 on edit user interactive graphical user interface object 1218.
At fig. 12M, in response to detecting tap input 1250k1, computer system 1200 displays shared user interface 1248a. Shared user interface 1248a includes: indication 1252, which includes a textual indication that shared user interface 1248a is for generating a new message; and an add contact user-interactive graphical user interface object 1254a, which, when selected, causes computer system 1200 to display a user interface for adding a recipient to receive the message. The sharing user interface 1248a also includes a dial user-interactive graphical user interface object 1254b that, when selected, causes the computer system to display a user interface containing information about the watch user interface currently selected for sharing. Shared user interface 1248a also includes a create message user-interactive graphical user interface object 1254c that, when selected, causes computer system 1200 to display a user interface for creating (e.g., drafting, typing) a message to be transmitted. The shared user interface 1248a also includes a send user-interactive graphical user interface object 1254d that, when selected, causes the computer system 1200 to transmit (e.g., send) information related to the watch user interface 1246a to the selected recipient(s). At fig. 12M, computer system 1200 detects a tap input 1250l on add contact user-interactive graphical user interface object 1254a.
At fig. 12N, in response to receiving tap input 1250l, computer system 1200 displays shared user interface 1248b. Shared user interface 1248b includes a cancel user-interactive graphical user interface object 1258 that, when selected, causes computer system 1200 to display selection user interface 1210e. In some implementations, selecting the cancel user-interactive graphical user interface object 1258 causes computer system 1200 to display shared user interface 1248a. The shared user interface 1248b also includes a voice user-interactive graphical user interface object 1256a that, when selected, causes the computer system 1200 to display an option for adding a recipient using a voice method (e.g., using a microphone). The shared user interface 1248b also includes an add-on contact user-interactive graphical user interface object 1258b that, when selected, causes the computer system 1200 to display an option to add additional recipients via a contact list accessible via the computer system 1200. The shared user interface 1248b also includes a dial-up contact user-interactive graphical user interface object 1258c that, when selected, causes the computer system 1200 to display options for adding a recipient using touch input (e.g., by typing the recipient's telephone number on a digital touch keypad). Shared user interface 1248b also includes options related to suggested contacts that may be added via contact user-interactive graphical user interface object 1262a, contact user-interactive graphical user interface object 1262b, or contact user-interactive graphical user interface object 1262c. Each of the contact user-interactive graphical user interface objects 1262a, 1262b, and 1262c includes an image and/or text representation of a potential recipient of the message (e.g., corresponding to the name and/or image of the potential recipient). In response to selection of a contact user-interactive graphical user interface object (e.g., 1262a, 1262b, or 1262c), the computer system 1200 selects the recipient corresponding to the selected contact user-interactive graphical user interface object to transmit the message to that recipient. At fig. 12N, computer system 1200 receives tap input 1250m on contact user-interactive graphical user interface object 1262c.
At fig. 12O, in response to receiving tap input 1250m, computer system 1200 displays a shared user interface 1248c that is substantially identical to shared user interface 1248a, except that add contact user-interactive graphical user interface object 1254a has been replaced with recipient 1264a, which corresponds to the selection of "Ann Smith" as the recipient of the message.
At fig. 12P, in response to detecting tap input 1250k2, computer system 1200 displays editing user interface 1266a. Editing user interface 1266a includes an aspect indicator 1268a that includes a visual and/or textual representation of an aspect of watch user interface 1246a that is currently selected for editing. At fig. 12P, the aspect indicator 1268a indicates that the currently selected aspect of the watch user interface 1246a for editing is a "style". Editing user interface 1266a also includes a partial view of an aspect indicator 1268b that corresponds to a different editable aspect (e.g., "stripe") of watch user interface 1246 a.
Editing user interface 1266a also includes selection indicator 1274a that includes a visual and/or textual representation of the currently selected option of the editable aspect of watch user interface 1246 a. At fig. 12P, selection indicator 1274a indicates that the currently selected "style" option of watch user interface 1246a is "full screen".
Editing user interface 1266a also includes a location indicator 1272a. The location indicator 1272a includes a graphical indication of the number of selectable options of the editable aspect of the watch user interface 1246a currently being edited and the location of the currently selected option in the list of selectable options. For example, location indicator 1272a indicates that the currently selected option "full screen" of the "style" aspect of watch user interface 1246a is toward the top of the list of at least two possible options of the "style" aspect of watch user interface 1246 a.
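The location indicator can be thought of as a simple mapping from the option list to a position, as in this Swift sketch (the names and the linear mapping are assumptions for illustration):

    struct LocationIndicator {
        let optionCount: Int     // number of selectable options for the aspect being edited
        let selectedIndex: Int   // position of the currently selected option

        // 0.0 corresponds to the top of the list, 1.0 to the bottom.
        var relativePosition: Double {
            optionCount > 1 ? Double(selectedIndex) / Double(optionCount - 1) : 0.0
        }
    }

    // "full screen" toward the top of a list of at least two "style" options.
    let style = LocationIndicator(optionCount: 2, selectedIndex: 0)
    print(style.relativePosition)  // 0.0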
Editing user interface 1266a also includes representation 1214e, which indicates that the watch user interface currently being edited is the watch user interface corresponding to representation 1214e, i.e., watch user interface 1246a. At fig. 12P, computer system 1200 detects swipe input 1250n.
At fig. 12Q, in response to a sequence of one or more user inputs including swipe input 1250n (e.g., two or more swipe inputs including swipe input 1250n), computer system 1200 displays editing user interface 1266b. Editing user interface 1266b includes an aspect indicator 1268c that includes a visual and/or textual representation of an aspect of watch user interface 1246a that is currently selected for editing. At fig. 12Q, aspect indicator 1268c indicates that the currently selected aspect of watch user interface 1246a for editing is a "complex function block".
Editing user interface 1266b also includes representation 1214e, which indicates that the watch user interface currently being edited is the watch user interface corresponding to representation 1214e, i.e., watch user interface 1246a. At fig. 12Q, computer system 1200 detects tap input 1250o on a portion of representation 1214e that corresponds to complex function block 1246a3 of watch user interface 1246a.
At fig. 12R, in response to detecting tap input 1250o, computer system 1200 displays an editing user interface 1266c that includes a plurality of selectable complex function block options to be displayed with watch user interface 1246a. In some embodiments, the selectable complex function blocks are categorized into a plurality of categories based on associated features and/or applications associated with the selectable complex function blocks. Editing user interface 1266c includes category 1278a, which includes a visual and/or textual indication that the complex function blocks under category 1278a are related to "weather". Editing user interface 1266c also includes category 1278b, which includes a visual and/or textual indication that the complex function blocks under category 1278b are related to "music". In some embodiments, a category includes multiple complex function blocks, in which case the multiple complex function blocks associated with a given category are displayed below the text and/or visual indications associated with that category. In some implementations, the editing user interface 1266c is initially displayed centered on the complex function block selected in the previous user interface (e.g., editing user interface 1266b) and/or with that complex function block having the selection focus. In some implementations, the computer system navigates from one complex function block option to another complex function block option (e.g., moves the selection focus) by scrolling via swipe inputs on editing user interface 1266c and/or rotational inputs via rotatable and pressable input mechanism 1204. Editing user interface 1266c also includes a cancel user-interactive graphical user interface object 1276 that, when selected, causes computer system 1200 to cease displaying editing user interface 1266c and display editing user interface 1266b.
Editing user interface 1266c also includes a location indicator 1272b. The location indicator 1272b includes a graphical indication of the number of selectable options for the complex function block displayed with the watch user interface 1246a and the location of the complex function block currently having the selection focus within the list of selectable complex function block options.
At fig. 12R, a position indicator 1272b indicates the relative position of the complex function block 1282a to be displayed with the watch user interface 1246a within the list of selectable complex function block options. The editing user interface 1266c also includes a selection focus indicator 1284 surrounding the complex function block 1282a that indicates that the complex function block 1282a currently has a selection focus in the editing user interface 1266 c.
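The categorized picker and its moving selection focus can be sketched as follows (a simplified Swift model with assumed names; the actual interface also renders the category headers, the position indicator, and the cancel object):

    struct ComplicationCategory {
        let name: String        // e.g., "weather", "music"
        let options: [String]   // complication options listed under the header
    }

    struct ComplicationPicker {
        let categories: [ComplicationCategory]
        var focusIndex: Int = 0   // which option currently has the selection focus

        private var flattened: [String] { categories.flatMap { $0.options } }

        // Swipe inputs or rotational inputs move the selection focus.
        mutating func moveFocus(by delta: Int) {
            focusIndex = min(max(focusIndex + delta, 0), flattened.count - 1)
        }

        // The option the selection focus indicator is drawn around.
        var focusedOption: String { flattened[focusIndex] }
    }

    var picker = ComplicationPicker(categories: [
        ComplicationCategory(name: "weather", options: ["air quality"]),
        ComplicationCategory(name: "music", options: ["music"])
    ])
    picker.moveFocus(by: 1)       // e.g., rotational input 1260b
    print(picker.focusedOption)   // "music"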
At fig. 12R, computer system 1200 detects rotational input 1260b via a rotatable and depressible input mechanism when complex function block 1282a has a selection focus. At fig. 12S, in response to detecting the rotational input 1260b, the computer system 1200 displays an editing user interface 1266d in which the selection focus has moved from complex function block 1282a to complex function block 1282b. Thus, the selection focus indicator 1284 is now displayed around the complex function block 1282b.
At fig. 12S, computer system 1200 detects a press input 1270g via rotatable and pressable input mechanism 1204. At fig. 12T, in response to press input 1270g, computer system 1200 displays an editing user interface 1266e that includes a modified version of representation 1214e that includes complex function block 1282b instead of complex function block 1282a. Thus, editing user interface 1266e indicates that representation 1214e has been edited in response to press input 1270g: in response to receiving press input 1270g, computer system 1200 edits representation 1214e to include the complex function block option (e.g., complex function block option 1282b) that had the selection focus when press input 1270g was received.
At fig. 12T, computer system 1200 detects a press input 1270h on rotatable and pressable input mechanism 1204. At fig. 12U, in response to receiving press input 1270h, computer system 1200 displays a watch user interface 1246b. Watch user interface 1246b is an edited version of watch user interface 1246a in which complex function block 1246a3, corresponding to an air quality complex function block, has been replaced with complex function block 1246b3, a music complex function block. Watch user interface 1246b includes: dial 1246b1, which includes a circle of dots representing the hours of the day; a time indication 1246b2, which includes analog clock hands representing the current time (e.g., hours, minutes, and/or seconds); complex function block 1246b3, which represents an application available on computer system 1200 and displays information from a music application; and complex function block 1246b4, which represents an application available on computer system 1200 and displays information from the corresponding application.
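Applying the edit amounts to swapping the complication occupying a slot, as in this sketch (the slot key and complication names are placeholders, not identifiers from the figures):

    struct EditableWatchFace {
        var complicationsBySlot: [String: String]   // slot -> complication identifier

        // Returns an edited copy with the complication in the given slot replaced.
        func replacingComplication(inSlot slot: String,
                                   with newComplication: String) -> EditableWatchFace {
            var edited = self
            edited.complicationsBySlot[slot] = newComplication
            return edited
        }
    }

    let original = EditableWatchFace(complicationsBySlot: ["slot3": "air quality"])
    let edited = original.replacingComplication(inSlot: "slot3", with: "music")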
At fig. 12V, computer system 1200 displays notification user interface 1286. The notification user interface includes a dial notification 1288 that includes visual and/or textual information related to the availability of the new watch user interface. At fig. 12V, dial notification 1288 includes text indicating that a new watch user interface is available. At fig. 12V, computer system 1200 detects tap input 1250w on notification 1288 and press input 1270i on rotatable and pressable input mechanism 1204.
At fig. 12W, in response to receiving tap input 1250w or press input 1270i, computer system 1200 displays an add user interface 1290. Add user interface 1290 includes return user-interactive graphical user interface object 1292, which, when selected, causes computer system 1200 to display notification user interface 1286. The add user interface 1290 also includes an image 1294 corresponding to a watch user interface currently shown as being available for adding to (e.g., downloading to) computer system 1200. Add user interface 1290 also includes add dial user-interactive graphical user interface object 1296, which, when selected, causes computer system 1200 to add a watch user interface corresponding to image 1294 (e.g., download the watch user interface from a remote server) for display via computer system 1200. The add user interface 1290 also includes a description 1298 that includes a textual description of the watch user interface corresponding to image 1294. In some implementations, in response to a press input on rotatable and pressable input mechanism 1204 while add user interface 1290 is displayed, computer system 1200 downloads the watch user interface corresponding to image 1294 from a remote server for display via computer system 1200.
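The add-dial flow described above (receive the notification, confirm, download the watch user interface from a remote server, and make it available for display) could look roughly like the following in Swift; the type names and URL handling are assumptions.

    import Foundation

    struct WatchFaceStore {
        var available: [String] = []

        // Fetches the face's definition from the remote server, then makes
        // it selectable on the device.
        mutating func addWatchFace(named name: String, from server: URL) async throws {
            _ = try await URLSession.shared.data(from: server)  // download definition
            available.append(name)                              // now selectable
        }
    }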
Fig. 13 is a flow chart illustrating a method associated with a user interface for time management, according to some embodiments. The method (1300) is performed at a computer system (e.g., 1200) (e.g., a smart watch, wearable electronic device, smart phone, desktop computer, laptop computer, tablet computer) in communication with a display generation component (e.g., 1202) (e.g., a display controller, touch-sensitive display system) and one or more input devices including a rotatable input mechanism (e.g., 1204). In some implementations, the computer system communicates with one or more input devices (e.g., touch-sensitive surfaces). Some operations in method 1300 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1300 provides an intuitive way for managing a time-dependent user interface. The method reduces the cognitive burden on the user to manage the time-dependent user interface, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to manage a time-dependent user interface faster and more efficiently saves power and increases the time between battery charges.
The computer system displays (1302), via the display generation component, a selection user interface (e.g., 1210a) (e.g., a dial selection user interface, a dial generation user interface) that includes a plurality of selectable objects (e.g., representations of dials, representations of collections of dials).
Upon displaying the selection user interface (e.g., 1210 a), the computer system (e.g., 1200) detects (1304) rotation of the rotatable input mechanism about the axis of rotation (e.g., 1260a 1) (e.g., clockwise rotation input, counter-clockwise rotation input) (or, in some embodiments, non-rotation input (e.g., flick gesture, swipe gesture, and/or mouse click)).
In response to detecting rotation of the rotatable input mechanism (or, in some embodiments, in response to detecting a non-rotational input (e.g., a flick gesture, a swipe gesture, and/or a mouse click)), the computer system displays (1306) a graphical indication of a selection focus (e.g., 1212 as shown in fig. 12B) that changes as the selection focus moves between a plurality of selectable objects (e.g., representations of dials, representations of collections of dials).
After changing the selection focus throughout the plurality of selectable objects, the computer system (e.g., 1200) detects (1308) a press input (e.g., 1270a) on the rotatable input mechanism (e.g., in a direction that includes a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation) (or, in some implementations, a non-press input (e.g., a swipe gesture, a flick gesture, and/or a mouse click)).
In response to detecting the press input (e.g., 1270 a), the computer system (e.g., 1200) selects (1310) one of a plurality of selectable objects (e.g., 1214 c), the selecting comprising: in accordance with a determination that a first selectable object of the plurality of selectable objects has a selection focus when the press input is detected, the computer system selects (1312) the first selectable object (e.g., does not select a second selectable object of the plurality of selectable objects); and in accordance with a determination that a second selectable object (e.g., 1214 b) of the plurality of selectable objects that is different from the first selectable object has a selection focus when the press input is detected, the computer system selects (1314) the second selectable object (e.g., does not select the first selectable object of the plurality of selectable objects). Selecting one of the plurality of selectable objects based on which selectable object has a selection focus when the press input is detected enables the user to easily and intuitively select a desired selectable object. In particular, changing the selection focus in response to rotation of the rotatable input mechanism about the axis of rotation and selecting the selectable object in response to a pressing input on the rotatable input mechanism allows navigation among and selection of the selectable object without requiring interaction with and/or input from multiple input devices. Providing improved control options enhances the operability of the computer system and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
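The core of method 1300 is that selection is resolved by whichever object holds the selection focus at the moment of the press, not by a separately specified target. A minimal Swift sketch of that state machine (all names assumed):

    struct SelectionUserInterface {
        var selectableObjects: [String]   // e.g., representations of dials
        var focusIndex: Int = 0

        // Rotation of the rotatable input mechanism moves the selection focus.
        mutating func handleRotation(clockwise: Bool) {
            let delta = clockwise ? 1 : -1
            focusIndex = min(max(focusIndex + delta, 0), selectableObjects.count - 1)
        }

        // A press selects whichever object has focus when the press is detected.
        func handlePress() -> String { selectableObjects[focusIndex] }
    }

    var ui = SelectionUserInterface(selectableObjects: ["first", "second", "third"])
    ui.handleRotation(clockwise: true)
    let selected = ui.handlePress()   // "second"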
In some implementations, the selection focus is indicated by the position of the selectable object (e.g., 1214a as shown in fig. 12C) in the selection user interface (e.g., 1210b). In some implementations, the selection focus corresponds to a selectable object that is substantially centered in the selection user interface. In some implementations, the graphical indication of the selection focus corresponds to a location of a selectable object included in the selection user interface. Indicating the selection focus by the position of the selectable object in the selection user interface provides improved visual feedback as to which selectable object has the selection focus, as the selectable object having the selection focus will be indicated as having the selection focus by its position in the selection user interface. Providing improved visual feedback enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user know which of the selectable objects being displayed has a selection focus to reduce the number of user inputs and prevent the user from erroneously selecting an incorrect selectable object), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, a computer system (e.g., 1200) displays a visual indication (e.g., 1212 as shown in fig. 12B) corresponding to a selectable object having a selection focus (e.g., cursor, outline, shadow, color, transparent overlay, etc.) via a display generation component. In some embodiments, when the selection focus moves between the plurality of selectable objects, a visual indication corresponding to the selectable object is displayed as panning to the current selectable object. Indicating the selection focus by displaying a visual indication corresponding to a selectable object having the selection focus provides improved visual feedback as to which selectable object has the selection focus. Providing improved visual feedback enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user know which of the selectable objects being displayed has a selection focus to reduce the number of user inputs and prevent the user from erroneously selecting an incorrect selectable object), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, upon displaying the selection user interface, the computer system (e.g., 1200) detects a swipe input (e.g., 1250b 1) (or, in some embodiments, a non-swipe gesture (e.g., a tap gesture, a press input, and/or a mouse click)). In some embodiments, in response to detecting a swipe input (or, in some embodiments, in response to detecting a non-swipe gesture (e.g., a tap gesture, a press input, and/or a mouse click)), the computer system changes the selection focus from a third selectable object (e.g., 1214b as shown in fig. 12C) to a fourth selectable object (e.g., 1214C as shown in fig. 12E) (e.g., a representation of a dial, a representation of a collection of dials). Changing the selection focus from one selectable object to another selectable object in response to a swipe input enables a user to change the selectable object with the selection focus in an easy, intuitive manner. Providing additional control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system (e.g., 1200) detects a tap input (e.g., 1250 c) (e.g., a touch input) (or, in some embodiments, a non-tap gesture (e.g., a swipe gesture, a press and hold gesture, and/or a mouse click)). In some embodiments, in response to detecting a tap input (or, in some embodiments, in response to detecting a non-tap gesture (e.g., a swipe gesture, a hold gesture, and/or a mouse click)), the computer system selects one of the plurality of selectable objects (e.g., 1214C as shown in fig. 12C), and in accordance with a determination that the tap input is on a corresponding portion of the third selectable object, the computer system performs a first operation that includes selecting the third selectable object (e.g., not selecting a different selectable object). Selecting the selectable object in response to the tap input enables the user to select the selectable object in an easy, intuitive manner. Providing additional control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the tap input (e.g., 1250 c), the computer system (e.g., 1200) selects one of the plurality of selectable objects, and in accordance with a determination that the tap input is on a fourth selectable object (e.g., 1214c as shown in fig. 12E) that is different from the corresponding portion of the third selectable object (e.g., 1214 b), the computer system performs a second operation (e.g., selects the fourth selectable object or displays additional information about the third selectable object) that is different from the first operation (e.g., does not select the third selectable object of the plurality of selectable objects). Selecting the selectable object in response to the tap input and in accordance with a determination that the tap input is on the selectable object being selected enables the user to select the selectable object in an easy, intuitive manner. Providing additional control options enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the computer system displays a first dial (e.g., 1206) (e.g., a user interface including an indication of the current time; a clock user interface of the smartwatch). In some implementations, while the first dial is displayed, the computer system detects a first user input (e.g., 1250a) (e.g., a long press touch input, a tap gesture, a press input, and/or a mouse click) corresponding to a user request to select a dial. In some implementations, in response to detecting the first user input, the computer system displays a selection user interface (e.g., 1210a). In some embodiments, the selection user interface is a dial selection user interface. In some embodiments, in response to detecting the first user input, the computer system visually distinguishes the first dial (e.g., 1214b as shown in fig. 12B) in the dial selection user interface. In some embodiments, upon displaying the dial selection user interface, the computer system detects a second user input (e.g., 1250a1) (e.g., a rotational input via a rotational input mechanism, a swipe input, a press input, and/or a mouse click), and in response to detecting the second user input, the computer system visually distinguishes a second dial (e.g., 1214a as shown in fig. 12C) that is different from the first dial (e.g., moves the second dial to a predetermined location in the user interface, such as substantially in the center of the user interface, to the right of the user interface, or to the left of the user interface). In some embodiments, upon displaying the dial selection user interface and upon displaying the second dial, the computer system detects a second press input (e.g., 1270a) on the rotatable input mechanism (e.g., 1204) (e.g., in a direction including a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation) (or, in some embodiments, detects a non-press input (e.g., a flick gesture, a swipe gesture, and/or a mouse click)). In some embodiments, in response to detecting the second press input (or, in some embodiments, in response to detecting the non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)), the computer system selects the second dial as the currently selected dial of the computer system (e.g., for display by the computer system). In some implementations, selecting the second dial for display by the computer system includes setting the second dial as a default dial for display by the computer system (e.g., upon waking up). Selecting the second dial in response to the second press input received while the second dial is displayed (e.g., 1214a as shown in fig. 12C) provides improved feedback by allowing the user to select the second dial while the second dial is displayed, thereby providing improved visual feedback, enhancing operability of the device, and making the user-device interface more efficient (e.g., by helping the user know which dial is currently available for selection), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the computer system displays a third dial (e.g., 1246a) (e.g., a dial; a user interface including an indication of the current time; a clock user interface of the smartwatch). In some embodiments, while displaying the third dial, the computer system (e.g., 1200) receives, via the one or more input devices, a first sequence of one or more user inputs (e.g., long press touch input, tap input, rotation input, press input) corresponding to a request to edit the third dial (as shown in fig. 12K-12L). In some embodiments, in response to receiving the first sequence of one or more user inputs, the computer system enters a dial editing mode of the computer system (e.g., as shown in fig. 12P). In some embodiments, in response to receiving the first sequence of one or more user inputs, the computer system visually distinguishes an element of the third dial for editing (e.g., 1268a). In some embodiments, a first selectable option of the visually distinguished element of the third dial is displayed. In some embodiments, while the computer system is in the dial editing mode, the computer system receives a second sequence of one or more user inputs (e.g., touch inputs, rotation inputs, press inputs) via the one or more input devices, and in response to receiving the second sequence of one or more user inputs, the computer system displays a second selectable option of the visually distinguished element of the third dial. In some embodiments, the computer system detects a third press input (e.g., 1270h) on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation) while the computer system is in the dial editing mode and while displaying the second selectable option of the visually distinguished element of the third dial (or, in some embodiments, detects a non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)). In some embodiments, in response to detecting the third press input (or, in some embodiments, in response to detecting a non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)), the computer system selects the second selectable option of the visually distinguished element of the third dial. In some embodiments, selecting the second selectable option of the visually distinguished element of the third dial comprises selecting the second option of the visually distinguished element for display in the third dial. Editing an element of the dial by detecting the third press input while displaying the second selectable option of the visually distinguished element of the third dial enables the user to quickly and easily select the edited element based on input received while the selectable option of the element being edited is displayed, thereby providing improved visual feedback, enhancing operability of the device, and making the user-device interface more efficient (e.g., by helping the user see which element of the dial is being edited when providing the third press input), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the computer system displays a fourth dial (e.g., 1246a) (e.g., a dial; a user interface including an indication of the current time; a clock user interface of the smartwatch). In some implementations, while displaying the fourth dial, the computer system (e.g., 1200) receives, via the one or more input devices, a third sequence of one or more user inputs (e.g., touch inputs, rotation inputs, and/or press inputs) corresponding to a request to edit the fourth dial (e.g., as shown in fig. 12K-12L). In some embodiments, in response to receiving the third sequence of one or more user inputs, the computer system enters a dial editing mode of the computer system (e.g., as shown in fig. 12P). In some implementations, in response to receiving the third sequence of one or more inputs, the computer system visually distinguishes a complex function block (e.g., 1282a) of the fourth dial for editing. In some embodiments, while the computer system is in the dial editing mode, the computer system displays a first complex function block option for the complex function block. In some embodiments, while the computer system is in the dial editing mode, the computer system receives, via the one or more input devices, a fourth sequence of one or more user inputs (e.g., touch inputs, rotation inputs, and/or press inputs), and in response to receiving the fourth sequence of one or more user inputs, the computer system displays a second complex function block option (e.g., 1282b). In some embodiments, while the computer system is in the dial editing mode and while the second complex function block option is displayed, the computer system detects a fourth press input (e.g., 1270g) on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation) (or, in some embodiments, detects a non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)). In some embodiments, in response to detecting the fourth press input (or, in some embodiments, in response to detecting the non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)), the computer system selects the second complex function block option. In some embodiments, selecting the second complex function block option includes selecting the second option of the visually distinguished element for display in the fourth dial.
In some embodiments, after selecting the second complex function block option (e.g., as shown in fig. 12S), the computer system (e.g., 1200) detects a fifth press input (e.g., 1270h) on the rotatable input mechanism (e.g., 1204) (e.g., in a direction including a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation) (or, in some embodiments, detects a non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)). In some embodiments, in response to detecting the fifth press input (or, in some embodiments, in response to detecting the non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)), the computer system selects the fourth dial for display by the computer system (e.g., as shown in fig. 12U). In some implementations, selecting the fourth dial for display by the computer system includes setting the fourth dial including the selected second complex function block option as a default dial for display by the computer system (e.g., upon waking). Selecting the fourth dial for display in response to the fifth press input received after selecting the second complex function block option enables the user to quickly and easily select the dial including the edited complex function block, thereby providing improved visual feedback, enhancing operability of the device, and making the user-device interface more efficient (e.g., by helping the user select the edited dial including the second complex function block as the current dial), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
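The edit-mode behavior of the three preceding paragraphs follows the same pattern one level deeper: rotation cycles the selectable options of the visually distinguished element (e.g., a complex function block slot), one press commits the displayed option, and a further press commits the edited dial. A minimal, purely illustrative Swift sketch of that flow, with all names assumed:

```swift
struct EditableElement {
    var options: [String]    // e.g., ["Heart Rate", "Weather", "Calendar"]
    var displayedOption = 0  // option currently previewed in edit mode
    var committedOption = 0  // option actually applied to the dial

    // Rotation input cycles through the options of the visually
    // distinguished element, wrapping at either end.
    mutating func rotate(by steps: Int) {
        let n = options.count
        displayedOption = ((displayedOption + steps) % n + n) % n
    }

    // A press on the rotatable input mechanism selects the displayed
    // option for the element (cf. press inputs 1270g and 1270h).
    mutating func pressRotatableInput() {
        committedOption = displayedOption
    }
}
```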
In some implementations, the computer system (e.g., 1200) displays a fifth dial (e.g., 1246a) (e.g., a dial; a user interface including an indication of the current time; a clock user interface of the smartwatch). In some embodiments, while displaying the fifth dial, the computer system receives, via one or more input devices, a fifth sequence of one or more user inputs (e.g., touch inputs, rotation inputs, and/or press inputs) corresponding to a request to send the fifth dial to a recipient (e.g., as shown in fig. 12K-12L). In some implementations, sending the dial to the recipient includes transmitting the dial to a recipient device (e.g., a device associated with the recipient). In some embodiments, in response to receiving the fifth sequence of one or more user inputs, the computer system displays a recipient selection user interface (e.g., a user interface that includes the names of one or more potential recipients of the dial). In some embodiments, while displaying the recipient selection user interface (e.g., 1248b), the computer system displays the name of the recipient. In some embodiments, the name of the recipient has a selection focus. In some embodiments, while displaying the recipient selection user interface, the computer system detects a sixth press input on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation) (or, in some embodiments, detects a non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)). In some embodiments, in response to detecting the sixth press input (or, in some embodiments, in response to detecting the non-press input (e.g., a tap gesture, a swipe gesture, and/or a mouse click)), the computer system transmits information associated with the fifth dial to the recipient. In some embodiments, transmitting information associated with the fifth dial to the recipient includes transmitting a representation of the dial specifying an arrangement of user interface elements including a first user interface element corresponding to a first application and one or more other user interface elements corresponding to software different from the first application. In some embodiments, transmitting information associated with the dial to the recipient includes transmitting data identifying a plurality of independently configurable graphical elements that make up the dial. Detecting the sixth press input while displaying the recipient selection user interface and transmitting information associated with the fifth dial to the recipient in response to detecting the sixth press input enables the user to quickly and easily select the recipient to receive the selected dial, thereby enhancing operability of the device and making the user-device interface more efficient (e.g., by helping the user easily transition from viewing the dial to selecting the recipient of the dial), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
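One way to realize the transmitted "representation of the dial" described above is a serializable description of the dial's independently configurable elements. The sketch below is an assumption about what such a payload could look like; the field names and JSON encoding are illustrative, not a specified wire format.

```swift
import Foundation

// Hypothetical description of one configurable element of a dial.
struct SharedDialElement: Codable {
    let applicationID: String   // the application the element corresponds to
    let region: String          // e.g., "topLeft", "bottomRight", "bezel"
}

// Hypothetical representation of a dial to send to a recipient device.
struct SharedDialRepresentation: Codable {
    let dialIdentifier: String
    let elements: [SharedDialElement]  // independently configurable elements
}

// Encode the representation for transmission to the recipient device.
func payload(for dial: SharedDialRepresentation) throws -> Data {
    try JSONEncoder().encode(dial)
}
```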
In some embodiments, a computer system (e.g., 1200) displays, via a display generation component (e.g., 1202), a dial gallery user interface (e.g., 1232a) for viewing selectable dials included in a dial gallery of the computer system (e.g., a user interface for obtaining dials, a user interface including dials downloadable onto the computer system, and/or a user interface including a set of dials). In some embodiments, the dial gallery user interface (e.g., 1232a as shown in fig. 12F) for viewing selectable dials includes a plurality of selectable graphical elements corresponding to dials that are downloadable onto the computer system. In some implementations, the dial gallery user interface for viewing selectable dials displays dials that are not yet available (e.g., not yet downloaded) on the computer system. In some embodiments, the dial gallery user interface for viewing selectable dials includes a search bar (e.g., in a top portion of the dial gallery user interface) for searching among the dials that are downloadable onto the computer system to obtain a particular dial. Displaying a dial gallery user interface for viewing selectable dials enables a user to quickly and easily view available dials within the dial gallery, thereby enhancing operability of the device and making the user-device interface more efficient (e.g., by helping the user quickly view available dials and add them to the computer system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system (e.g., 1200) displays a selection user interface (e.g., 1210c). In some embodiments, the selection user interface is a dial selection user interface. In some embodiments, while displaying the dial selection user interface, the computer system displays a dial generation affordance (e.g., 1214c as shown in fig. 12E) (e.g., an affordance for obtaining dials on the computer system; an add affordance). In some implementations, the computer system receives, via one or more input devices, a third user input (e.g., 1250d) (e.g., a tap input, a rotation input via a rotatable input mechanism, a swipe input, and/or a press input) corresponding to the dial generation affordance. In some embodiments, the user input corresponding to the dial generation affordance includes a press input on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation) received while the dial generation affordance has the selection focus (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation). In some implementations, in response to receiving the third user input, the computer system displays the dial gallery user interface for viewing selectable dials. Displaying the dial generation affordance while the dial selection user interface is displayed enables a user to quickly and easily transition from selecting among dials already available (e.g., downloaded) on the device to viewing options available for download, thereby enhancing operability of the device and making the user-device interface more efficient by reducing the number of inputs required to transition from selecting a downloaded dial to downloading a new dial. Reducing the number of inputs required to perform an operation enhances the operability of the system and makes the computer system more efficient (e.g., by helping a user provide appropriate inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, the dial gallery user interface (e.g., 1234a1 as shown in fig. 12F) for viewing selectable dials includes a representation (e.g., 1236a2) of a sixth dial. In some embodiments, the dial gallery user interface for viewing selectable dials includes a third selectable option (e.g., 1244) for displaying additional information (e.g., a description) related to the sixth dial. In some embodiments, the dial gallery user interface for viewing selectable dials includes a fourth selectable option (e.g., 1240a1) for adding the sixth dial to the dial gallery of the computer system (e.g., an affordance for downloading and/or installing the dial). Simultaneously displaying the representation of the sixth dial, the third selectable option for displaying additional information related to the sixth dial, and the fourth selectable option for adding the sixth dial to the dial gallery of the computer system enables a user to quickly and easily select among various options related to an available dial with a reduced number of inputs. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, while displaying the dial gallery user interface (e.g., 1232b) for viewing selectable dials, the computer system (e.g., 1200) displays a graphical element (e.g., a disc with text) corresponding to a seventh dial (e.g., 1236b1 as shown in fig. 12G). In some implementations, the computer system receives, via one or more input devices, a fourth user input (e.g., 1250f) (e.g., a tap input, a rotation input via a rotatable input mechanism, and/or a swipe input). In some embodiments, in response to receiving the fourth user input and in accordance with a determination that the fourth user input corresponds to a tap on the graphical element corresponding to the seventh dial, the computer system displays additional information (e.g., a description) about the seventh dial. In some implementations, in response to receiving the fourth user input and in accordance with a determination that the fourth user input corresponds to a seventh press input on the rotatable input mechanism (e.g., in a direction that includes a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation), the computer system adds (e.g., downloads) the seventh dial to the dial gallery of the computer system. Selectively displaying additional information about the seventh dial or adding the seventh dial to the dial gallery of the computer system, in accordance with a determination that the fourth user input is a tap on the graphical element corresponding to the seventh dial or a press input, provides visual feedback about the options available to the user, such that the user can quickly and easily choose to view more information about the seventh dial or to download the dial, thereby reducing the number of inputs required to perform the operation. Providing improved visual feedback to the user and reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
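The two branches above amount to dispatching on the kind of input received while a dial's graphical element is displayed: a tap shows details, while a press on the rotatable input mechanism adds the dial. A hedged Swift sketch of that dispatch, with hypothetical names:

```swift
enum GalleryInput {
    case tap(dialID: String)                  // tap on the graphical element
    case rotatableInputPress(dialID: String)  // press while element has focus
}

func handle(_ input: GalleryInput,
            showDetails: (String) -> Void,
            addToGallery: (String) -> Void) {
    switch input {
    case .tap(let id):
        showDetails(id)     // display additional information about the dial
    case .rotatableInputPress(let id):
        addToGallery(id)    // add (e.g., download) the dial to the gallery
    }
}
```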
In some implementations, the dial gallery user interface (e.g., 1232 b) for viewing the selectable dials includes an affordance (e.g., 1238 a) (e.g., back button) for returning to a previously displayed user interface (e.g., 1232 a). In some implementations, the computer system receives an input (e.g., a tap gesture, swipe, press input, and/or mouse click) corresponding to a selection of an affordance for returning to a previously displayed user interface. In some embodiments, in response to receiving an input corresponding to a selection of an affordance for returning to a previously displayed user interface, the computer system displays the previously displayed user interface via the display generation component (e.g., 1232a as shown in fig. 12F). Displaying an affordance for returning to a previously displayed user interface in a dial gallery user interface for viewing a selectable dial enables a user to quickly and easily return from the dial gallery to the previously displayed user interface without requiring multiple user inputs. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, while displaying the dial gallery user interface (e.g., 1232c) for viewing selectable dials, the computer system (e.g., 1200) simultaneously displays a second graphical element (e.g., 1236b1) (e.g., a disc with text) corresponding to an eighth dial and an affordance (e.g., 1242a) for adding the eighth dial to the dial gallery of the computer system (e.g., an affordance for downloading and/or installing the dial). In some implementations, the computer system receives, via one or more input devices, a fifth user input (e.g., 1250h) (e.g., a tap input, a rotation input via a rotatable input mechanism, and/or a swipe input). In some embodiments, in response to receiving the fifth user input and in accordance with a determination that the fifth user input is a tap input on the affordance for adding the eighth dial to the dial gallery of the computer system, the computer system adds (e.g., downloads) the eighth dial to the dial gallery of the computer system. In some implementations, in response to receiving the fifth user input and in accordance with a determination that the fifth user input corresponds to an eighth press input (e.g., 1270e) on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation) (e.g., the press input is primarily or substantially in a direction parallel to the axis of rotation), the computer system adds (e.g., downloads) the eighth dial to the dial gallery of the computer system. Simultaneously displaying the second graphical element corresponding to the eighth dial and the affordance for adding the eighth dial to the dial gallery of the computer system enables a user to quickly and easily view the graphical element corresponding to the eighth dial (e.g., a disc with text describing the dial; a representation of the dial) and to add the eighth dial to the dial gallery via the affordance without further navigation, thereby reducing the number of inputs required to add dials on the computer system. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying a dial gallery user interface (e.g., 1232a as shown in fig. 12F) for viewing selectable dials, the computer system (e.g., 1200) detects a sixth user input (e.g., a rotation of the rotatable input mechanism about the axis of rotation, a swipe input, a tap input, and/or a mouse click). In some implementations, in response to detecting the sixth user input, the computer system displays a third graphical indication of the selection focus that changes as the selection focus moves among a second plurality of selectable objects (e.g., representations of dials and/or representations of collections of dials). Displaying the third graphical indication of the selection focus that changes as the selection focus moves among the second plurality of selectable objects provides visual feedback as to which selectable object has the selection focus. Providing improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user know which of the displayed selectable objects has the selection focus, reducing the number of user inputs and preventing the user from erroneously selecting an incorrect selectable object), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the dial gallery user interface (e.g., 1232a as shown in fig. 12F) for viewing selectable dials includes a third graphical element (e.g., 1230a2) (e.g., a disc with text) corresponding to a single dial and a fourth graphical element (e.g., 1230a1) corresponding to a plurality of dials. Displaying a dial gallery user interface for viewing selectable dials that includes a third graphical element corresponding to a single dial and a fourth graphical element corresponding to a plurality of dials enables a user to quickly and easily view both individual dials and sets of dials within the dial gallery, thereby enhancing operability of the device, providing improved visual feedback, and making the user-device interface more efficient (e.g., by helping the user quickly view available dials and add them to the computer system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the graphical element corresponding to a single dial (e.g., 1230a2 as shown in fig. 12F) includes a background of a first color (e.g., a portion of the graphical element having text and/or additional graphical features overlaid thereon), and the graphical element corresponding to a plurality of dials (e.g., 1230a1 as shown in fig. 12F) includes a background of a second color different from the first color. Displaying the graphical element corresponding to a single dial with a background of a first color and the graphical element corresponding to a plurality of dials with a background of a second color provides improved visual feedback as to whether a graphical element corresponds to one dial or to multiple dials. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user know which of the graphical elements corresponds to multiple dials), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system receives, via one or more input devices, a selection (e.g., 1250e) of the graphical element (e.g., 1230a1 as shown in fig. 12F) corresponding to the plurality of dials (e.g., a user input corresponding to the selection) (e.g., a tap input, a swipe input, a press input, and/or a mouse click). In some implementations, the selection of the graphical element corresponding to the plurality of dials is a tap input (e.g., 1250e) or a press input (e.g., 1270c) on the graphical element corresponding to the plurality of dials. In some implementations, the selection of the graphical element corresponding to the plurality of dials is a press input on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation) received while the graphical element corresponding to the plurality of dials (e.g., 1230a1) has the selection focus. In some embodiments, in response to receiving the selection of the graphical element corresponding to the plurality of dials, the computer system displays a plurality of dials (e.g., 1230b1, 1230b2, and 1230b3 as shown in fig. 12G) (e.g., a list, grid, or stack of dials) that can be individually selected to be added to the dial gallery of the computer system (e.g., as shown in fig. 12G). Displaying the plurality of selectable dials in response to receiving the selection of the graphical element corresponding to the plurality of dials enables a user to quickly and easily view the selectable dials without having to view multiple entries in the dial gallery user interface one by one, thereby reducing the number of inputs required to view a plurality of related dials. Reducing the number of inputs required to perform an operation enhances the operability of the system and makes the computer system more efficient (e.g., by helping a user provide appropriate inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, a computer system (e.g., 1200) receives, via one or more input devices, a selection of a watch user interface (e.g., a dial; a user interface of a watch including an indication of time and/or date). In some implementations, the selection of the watch user interface is a tap input (e.g., 1250f) or a press input (e.g., 1270d) on a representation of the watch user interface. In some implementations, the selection of the watch user interface is a press input on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation) received while a representation (e.g., 1230b1) of the watch user interface has the selection focus. In some embodiments, in response to receiving the selection of the watch user interface (e.g., 1270d, 1270e, 1250h), the computer system displays, via the display generation component, a dial editing user interface (e.g., 1266a). In some embodiments, the dial editing user interface includes a representation of a layout of the watch user interface including a time region for displaying a current time (e.g., the time of day; the time in the current time zone, coordinated with and/or intended to reflect coordinated universal time with an offset based on the currently selected time zone) and one or more complex function block regions for displaying complex function blocks on the watch user interface. In some implementations, a complex function block refers to any clock face feature other than the hours and minutes used to indicate time (e.g., clock hands and/or hour/minute indications). In some implementations, complex function blocks provide data obtained from applications. In some embodiments, a complex function block includes an affordance that, when selected, launches the corresponding application. In some implementations, complex function blocks are displayed at fixed, predefined locations on the display. In some implementations, complex function blocks occupy respective locations at particular regions of the dial (e.g., lower right, lower left, upper right, and/or upper left). In some implementations, while the dial editing user interface is displayed, the computer system detects, via the one or more input devices, a sequence of one or more inputs (e.g., tap inputs and/or press inputs on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation)) including a seventh user input directed to a complex function block region of the one or more complex function block regions (e.g., upper left, upper right, lower left, lower right, and/or bezel regions) (e.g., as shown in fig. 12Q-12T). In some embodiments, in response to detecting the sequence of one or more inputs including the seventh user input directed to the complex function block region of the one or more complex function block regions, the computer system changes which complex function block is assigned to the complex function block region of the watch user interface.
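The layout described above can be modeled as a time region plus a fixed set of complex function block regions, each of which can be reassigned during editing. The sketch below is an illustrative assumption, not the disclosure's data model:

```swift
// Fixed, predefined regions at which complex function blocks may appear.
enum ComplicationRegion: Hashable {
    case topLeft, topRight, bottomLeft, bottomRight, bezel
}

struct WatchUserInterfaceLayout {
    // Mapping from region to the identifier of the assigned complex
    // function block (empty regions simply have no entry).
    var complications: [ComplicationRegion: String] = [:]

    // Changing which complex function block is assigned to a region,
    // as in the sequence of inputs directed to a complication region.
    mutating func assign(_ complicationID: String, to region: ComplicationRegion) {
        complications[region] = complicationID
    }
}

var layout = WatchUserInterfaceLayout()
layout.assign("weather.temperature", to: .bottomRight)
```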
In some embodiments, in response to detecting the seventh user input directed to a complex function block region of the one or more complex function block regions, the computer system displays, via the display generation component, a complex function block selection user interface (e.g., 1266c), wherein displaying the complex function block selection user interface comprises simultaneously displaying: an indication (e.g., a name thereof, a graphical indication thereof, an icon corresponding thereto, and/or a category thereof) of a first application (e.g., an application installed on, launched on, and/or accessible from the computer system); a first complex function block preview (e.g., 1282a as shown in fig. 12R) (e.g., a graphical preview of how the first complex function block will be displayed in the watch user interface) corresponding to a first complex function block configured to display, on the watch user interface, a first set of information obtained from the first application (e.g., information based on features, operations, and/or characteristics of the first application), wherein the first complex function block preview includes a graphical representation of the first complex function block displaying the first set of information (e.g., an exemplary representation of the first complex function block with an example of the first set of information); and a second complex function block preview (e.g., 1282b as shown in fig. 12R) (e.g., a graphical preview of how the second complex function block will be displayed in the watch user interface) corresponding to a second complex function block configured to display, on the watch user interface, a second set of information obtained from the first application (e.g., information based on features, operations, and/or characteristics of the first application), wherein the second complex function block preview includes a graphical representation of the second complex function block displaying the second set of information (e.g., an exemplary representation of the second complex function block with an example of the second set of information). In some implementations, while displaying the complex function block selection user interface, the computer system detects, via the one or more input devices (e.g., via a rotatable input device and/or via a touch-sensitive surface), a user input directed to selecting a respective complex function block preview; and in response to detecting the user input directed to selecting the respective complex function block preview, the computer system displays, via the display generation component, a representation of the watch user interface with a representation of the selected complex function block corresponding to the respective complex function block preview displayed at the first complex function block region of the watch user interface, wherein: in accordance with a determination that the respective complex function block preview is the first complex function block preview, the first complex function block is displayed in the first complex function block region of the watch user interface; and in accordance with a determination that the respective complex function block preview is the second complex function block preview, the second complex function block is displayed in the first complex function block region of the watch user interface.
Entering an edit mode for editing features of the selected dial in response to receiving a selection of the dial enables the user to edit the selected dial to match their preferences immediately after selecting it, without specifically selecting an option to edit it, thereby reducing the number of inputs required to transition from selecting the dial to editing it. Reducing the number of inputs required to perform an operation enhances the operability of the system and makes the computer system more efficient (e.g., by helping a user provide appropriate inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some implementations, the computer system (e.g., 1200) displays a notification (e.g., 1288) corresponding to the availability of a ninth dial (e.g., a notification displayed in accordance with a determination that a new dial is available for download). In some implementations, the computer system receives, via one or more input devices, an eighth user input (e.g., 1250x) corresponding to the notification of the availability of the ninth dial (e.g., a tap input and/or a press input on the rotatable input mechanism (e.g., in a direction including a component parallel to the axis of rotation)). In some implementations, in response to receiving the eighth user input, the computer system displays a user interface (e.g., 1290) for adding the dial associated with the notification of the availability of the ninth dial to the dial gallery of the computer system (e.g., a user interface for downloading the dial associated with the notification). Displaying a user interface for adding the dial associated with the notification to the dial gallery of the computer system in response to the input on the notification enables a user to quickly and easily view and/or add the dial after receiving the notification regarding the availability of the ninth dial, thereby reducing the number of inputs required to transition from displaying the notification to viewing information about the dial and/or downloading the dial. Reducing the number of inputs required to perform an operation enhances the operability of the system and makes the computer system more efficient (e.g., by helping a user provide appropriate inputs and reducing user errors in operating/interacting with the system), which in turn reduces power usage and extends battery life of the device by enabling the user to use the system more quickly and efficiently.
In some embodiments, while the computer system (e.g., 1200) displays a tenth dial via the display generation component (e.g., 1202) and while the computer system is in an unlocked state, the computer system receives a communication from a remote computer (e.g., a remote server; a software update server) (e.g., a communication that provides a cryptographic key for unlocking the ninth dial, the ninth dial being stored on the computer system but locked prior to receipt of the cryptographic key). In response to receiving the communication from the remote server, the computer system displays the notification (e.g., 1288) of the availability of the ninth dial. Displaying the notification relating to the availability of the ninth dial in response to receiving the communication from the remote server while the device is displaying the tenth dial and while the computer system is in the unlocked state provides the user with relevant information regarding the availability of the ninth dial without requiring the user to provide further input or to configure the device at a different location. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
It is noted that the details of the process described above with respect to method 1300 (e.g., fig. 13) also apply in a similar manner to the methods described herein. For example, method 1300 optionally includes one or more of the features of the various methods described herein with reference to method 700, method 900, and method 1100. For example, method 700 optionally includes one or more of the features of the various methods described above with reference to method 1300. For example, the watch user interface as described with reference to fig. 12A-12W may include and be used to select and/or edit a watch user interface as described with reference to fig. 6A-6U. As another example, method 900 optionally includes one or more of the features of the various methods described above with reference to method 1300. For example, the watch user interface as described with reference to fig. 12A-12W may include and be used to select and/or edit a watch user interface as described with reference to fig. 8A-8M. As another example, method 1100 optionally includes one or more of the features of the various methods described above with reference to method 1300. For example, the device may use a time user interface as described with reference to fig. 12A-12W or a watch user interface as described with reference to fig. 10A-10W as a watch user interface. As another example, method 1300 optionally includes one or more of the features of the various methods described below with reference to method 1500. For example, a user may add a watch user interface to computer system 1200 via input received using the rotatable and depressible input mechanism as described above with reference to fig. 12A-12W, and then edit the added watch user interface via a computer system in communication with computer system 1200, as described below with reference to fig. 14A-14R. For the sake of brevity, these details are not repeated hereinafter.
Fig. 14A-14R illustrate exemplary user interfaces for editing and displaying a user interface based on media items that include depth data. These user interfaces include user-interactive graphical user interface objects for performing various functions, described below as affordances. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 15.
In fig. 14A, computer system 1400 displays add portrait user interface 1404a via display 1402. The add portrait user interface 1404a is a user interface for adding a portrait user interface (e.g., a watch user interface based on media items that include depth data) to a computer system (e.g., computer system 1400 and/or computer system 600).
The add portrait user interface 1404a includes options for configuring a set of one or more media items that include depth data and are selected for use with the portrait user interface (e.g., the selection is based on determining that the selected media items meet criteria (e.g., include depth data)). The add portrait user interface 1404a includes a content header 1422 that indicates that the options displayed below the content header 1422 relate to configuring the content of the portrait user interface (e.g., by selecting photos to be used with the portrait user interface). Below the content header 1422, the add portrait user interface 1404a includes a select photo affordance 1424 that, when selected, causes the computer system 1400 to display options for selecting media items that include depth data to be used in the portrait user interface, as shown in fig. 14B described below. The add portrait user interface 1404a also includes a photo restriction indicator 1426 that provides a visual and/or textual indication of a restriction on the number of media items (e.g., the maximum number of photos) that can be selected for use with the portrait user interface. In fig. 14A, the computer system 1400 receives an input 1450a on the select photo affordance 1424 and, in response, displays a user interface for selecting media items. In some embodiments, selecting the select photo affordance 1424 causes the computer system to display photo picker user interface 1434a, as shown in fig. 14B. In some embodiments, selecting the select photo affordance 1424 causes the computer system to display photo picker user interface 1434b, as shown in fig. 14C.
The add portrait user interface 1404a also includes a user interface title 1414 that includes a textual indication of the name of the portrait user interface available for addition to the computer system. The add portrait user interface 1404a also includes an add affordance 1416 that, when selected, causes the portrait user interface to be added to (e.g., downloaded to) the computer system (e.g., 600 and/or 1400). The add portrait user interface 1404a also includes a preview image 1412a that includes a representation of the portrait user interface (e.g., the currently selected layout and/or the media items selected for use with the portrait user interface) that can be added to a computer system (e.g., 600 and/or 1400) by selecting the add affordance 1416. The add portrait user interface 1404a also includes a description 1418 that includes a textual description of the portrait user interface. The add portrait user interface 1404a also includes a further affordance 1420 that, when selected, expands the description 1418 to include additional text describing the portrait user interface. The add portrait user interface 1404a also includes a return affordance 1408 that, when selected, causes the computer system 1400 to display a previously displayed user interface (e.g., the user interface that the computer system 1400 displayed immediately prior to displaying the add portrait user interface 1404a). The add portrait user interface 1404a also includes a time indication 1406a that includes a representation of the current time. The add portrait user interface 1404a also includes a sharing affordance 1410 that, when selected, causes the computer system 1400 to display options for transmitting information related to the portrait user interface to a recipient (e.g., a recipient electronic device).
The add portrait user interface 1404a also includes a my watch affordance 1428 that, when selected, causes the computer system 1400 to display a user interface that includes representations of one or more watch user interfaces that are currently available (e.g., have been selected (and optionally configured) by a user to be included in a library of dials, or have been downloaded) on a computer system (e.g., computer system 600) in communication (e.g., paired) with the computer system 1400. The add portrait user interface 1404a also includes a dial gallery affordance 1430 that, when selected, causes the computer system 1400 to display a user interface that includes representations of one or more watch user interfaces that can be selected to be downloaded and/or installed on a computer system (e.g., computer system 600) in communication (e.g., paired) with the computer system 1400. The add portrait user interface 1404a also includes an application store affordance 1432 that, when selected, causes the computer system 1400 to display an application store user interface for downloading and/or installing applications (e.g., complex function blocks) onto a computer system (e.g., computer system 600) in communication (e.g., paired) with the computer system 1400.
In fig. 14B, in response to receiving tap input 1450a on select photo affordance 1424, the computer system displays photo picker user interface 1434a. Photo picker user interface 1434a is a user interface for selecting one or more photos to be used with the portrait user interface. Photo picker user interface 1434a includes a plurality of selectable media items that can be selected for use with the portrait user interface via user input (e.g., tap input). In some implementations, the plurality of selectable media items (e.g., photos, videos, GIFs, and/or animations) includes media items based on a determination that the included media items meet certain criteria. For example, in some implementations, the plurality of media items includes photographs with depth data and does not include media items without depth data. In some implementations, the plurality of media items includes media items having a particular shape and/or a threshold degree of separation between foreground elements (e.g., humans and/or pets) and background elements. In some implementations, the plurality of media items includes media items having identified subjects (e.g., people and/or pets) and does not include media items without identified subjects. Photo picker user interface 1434a also includes album selection affordance 1440a, which includes an indication of the currently selected album (e.g., "full portrait") and which, when selected, causes computer system 1400 to display representations of albums (e.g., photo albums) from which media items can be selected for use with the portrait user interface.
Photo picker user interface 1434a also includes a cancel affordance 1438 that, when selected, causes computer system 1400 to cease displaying photo picker user interface 1434a and display the previous user interface (e.g., add portrait user interface 1404a). Photo picker user interface 1434a also includes an add affordance 1442 that, when selected, causes computer system 1400 to configure the portrait user interface to use the media items (e.g., photos) that are selected at the time add affordance 1442 is selected. In some implementations, add affordance 1442 is not selectable when no media items are selected for use with the user interface based on media items that include depth data. In fig. 14B, computer system 1400 receives input 1450b on album selection affordance 1440a and, in response, displays photo picker user interface 1434b, as shown in fig. 14C.
In fig. 14C, computer system 1400 displays photo picker user interface 1434b. Photo picker user interface 1434b is a user interface for selecting an album (e.g., a photo album) from which to select media items for use with the portrait user interface. In some embodiments, photo picker user interface 1434b includes options corresponding to photo albums available on computer system 1400 (e.g., stored locally on the computer system and/or accessible by the computer system via cloud storage). In some implementations, photo picker user interface 1434b forgoes displaying affordances for selecting photo albums available on the computer system 1400 that do not include media items with depth data (e.g., empty photo albums and/or photo albums that include only media items without depth data).
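The picker behavior above is, in effect, a filter over the available media items: only items that include depth data (and, per the criteria earlier in this section, optionally an identified subject) are offered. A minimal Swift sketch of one such predicate, with assumed field names:

```swift
struct MediaItem {
    let id: String
    let hasDepthData: Bool
    let identifiedSubjects: [String]  // e.g., ["Adam"], ["pet"]; empty if none
}

// One possible combined predicate: require depth data and an identified
// subject; items failing either test are not offered in the picker.
func selectableItems(from library: [MediaItem]) -> [MediaItem] {
    library.filter { $0.hasDepthData && !$0.identifiedSubjects.isEmpty }
}
```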
Photo picker user interface 1434b includes several affordances for selecting albums from which media items can be selected for use with the portrait user interface. Photo picker user interface 1434b includes album affordance 1448a, which corresponds to a recommended album whose media items are recommended for use with the portrait user interface based on a determination that the media items meet criteria (e.g., the included media items contain one or more salient portions, and/or the included media items include foreground elements and background elements separated by at least a threshold distance). Photo picker user interface 1434b also includes album affordance 1448b, which corresponds to an album of recent media items (e.g., the most recently created media items, media items that have recently become available to computer system 1400, and/or photos taken within a threshold period of time (e.g., 7 days or 30 days)). Photo picker user interface 1434b also includes album affordance 1448c, which corresponds to an album of photos of a person (e.g., a person named "Adam") whose face has been automatically identified by the device across media items and who has been labeled by the user with a name. In some embodiments, the computer system generates an album and/or album affordance containing media items that include the first person based on a determination that the computer system 1400 has access to one or more media items that include the first person and/or based on a determination that the computer system 1400 has identified the face of the first person among a plurality of media items available to the computer system 1400. Similarly, photo picker user interface 1434b includes album affordance 1448d, which corresponds to an album that includes media items of a second person (e.g., a person named "Athena") different from the first person. In some implementations, one or more media items included in the album corresponding to album affordance 1448c are also included in the album corresponding to album affordance 1448d (e.g., pictures of both Adam and Athena). Photo picker user interface 1434b also includes album affordance 1448e, which corresponds to an album of portrait media items (e.g., media items taken in portrait mode and/or media items that include depth data) available to computer system 1400. In fig. 14C, the computer system 1400 receives input 1450c on album affordance 1448b and, in response, displays a user interface for selecting media items from the corresponding album (e.g., the "most recent" photo album).
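The recommendation criterion for album affordance 1448a can be expressed as a depth-separation test: a media item qualifies when its foreground and background elements are at least a threshold distance apart. The sketch below illustrates the idea; the field names and the 1.0-meter threshold are assumptions, not values recited in this disclosure.

```swift
struct DepthMediaItem {
    let foregroundDepth: Double  // distance from camera to foreground subject
    let backgroundDepth: Double  // distance from camera to background
}

// A media item is recommended when foreground and background are
// separated by at least `threshold` (here, an assumed 1.0 meters).
func isRecommended(_ item: DepthMediaItem, threshold: Double = 1.0) -> Bool {
    (item.backgroundDepth - item.foregroundDepth) >= threshold
}
```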
In fig. 14D, computer system 1400 displays photo picker user interface 1434c, which is a user interface for selecting media items to be used with the portrait user interface from the album (e.g., the "most recent" photo album) selected in fig. 14C. Photo picker user interface 1434c includes a plurality of media items with depth data, including a first media item represented by preview image 1454a, a second media item represented by preview image 1454b, and a third media item represented by preview image 1454c. Photo picker user interface 1434c also includes album selection affordance 1440b, which includes an indication of the currently selected album (e.g., "most recent") and which, when selected, causes computer system 1400 to display options for selecting a photo album from which media items can be selected for use with the portrait user interface (e.g., returns to photo picker user interface 1434b).
In fig. 14D, the computer system 1400 receives input (e.g., tap input) corresponding to selection of three media items to be used with a user interface based on media items that include depth data. In particular, the computer system 1400 receives an input 1450d on the preview image 1454a, an input 1450e on the preview image 1454b, and an input 1450f on the preview image 1454 c.
At fig. 14E, computer system 1400 displays photo picker user interface 1434d, which is an updated version of photo picker user interface 1434c after computer system 1400 has received inputs 1450d, 1450e, and 1450f corresponding to selections of preview image 1454a, preview image 1454b, and preview image 1454c, respectively. In photo picker user interface 1434d, selected preview images 1454a, 1454b, and 1454c are each displayed with a visual indication (e.g., a check mark) (e.g., visual indication 1458a, visual indication 1458b, and visual indication 1458c, respectively) that they are currently selected for use with the portrait user interface. In photo picker user interface 1434d, add affordance 1442 is also displayed as not grayed out, indicating that it is selectable. In some implementations, add affordance 1442 is (e.g., becomes) selectable based at least in part on a determination that one or more media items have been selected for use with the portrait user interface. Photo picker user interface 1434d includes a selection counter 1456 that indicates the number of media items currently selected for use with the portrait user interface. In fig. 14E, computer system 1400 receives input 1450g on add affordance 1442 and, in response, adds the selected media items (e.g., those corresponding to preview image 1454a, preview image 1454b, and preview image 1454c) for use with the portrait user interface.
In fig. 14F, in response to receiving input 1450g on the add affordance 1442, the computer system 1400 displays a layout editing user interface 1462a that shows a preview of how the portrait user interface will appear once it is added to the computer system. The layout editing user interface 1462a includes a preview user interface 1466a that includes a representation of the portrait user interface with one of the media items selected for use with the portrait user interface in fig. 14D-14E, as described above. In fig. 14F, preview user interface 1466a includes the media item corresponding to preview image 1454a as described above.
As shown in preview user interface 1466a, the portrait user interface includes a media item including background element 1466a1, foreground element 1466a2, and system text 1466a3. The system text 1466a3 includes a representation of a representative time (e.g., 10:09) that is different from the current time 1406a (e.g., 2:15). In the layout editing user interface 1462a, the foreground element 1466a2 is positioned such that the top of the foreground element is displayed below the system text 1466a3. The preview user interface 1466a also includes a layout indicator 1468a that includes an indication of the layering arrangement (e.g., "back") of the system text 1466a3 relative to the foreground element 1466a2 and of the location (e.g., "top") of the system text 1466a3 within the preview user interface 1466a (e.g., the vertical position of the system text 1466a3). In the layout editing user interface 1462a, the layout indicator 1468a indicates that the system text 1466a3 is currently configured to be displayed in a "top-rear" layout, which corresponds to displaying the system text 1466a3 in an upper portion of the preview user interface 1466a and in a stacked arrangement behind (e.g., stacked behind and/or overlaid by) the foreground element 1466a2.
At the layout editing user interface 1462a, the layout of the watch user interface based on the media item including depth data can be edited via user input received at the computer system 1400. For example, the position of the media item can be translated or scaled, and the layering arrangement of the system text 1466a3 relative to the foreground element 1466a2 can be updated. In particular, the layout editing user interface 1462a includes affordances for changing the position and layering arrangement of the system text 1466a3 relative to the foreground element 1466a2, as illustrated in the sketch below. The layout editing user interface 1462a includes a top rear affordance 1470a that, when selected, causes the system text 1466a3 to be displayed in an upper portion of the preview user interface 1466a and in a stacked arrangement behind the foreground element 1466a2. The layout editing user interface 1462a also includes a top front affordance 1470b that, when selected, causes system text 1466a3 to be displayed in an upper portion of the preview user interface 1466a and in a stacked arrangement in front of the foreground element 1466a2 (e.g., stacked on top of and/or at least partially overlaying the foreground element). The layout editing user interface 1462a includes a bottom rear affordance 1470c that, when selected, causes system text 1466a3 to be displayed in a lower portion of the preview user interface 1466a and in a stacked arrangement behind the foreground element 1466a2. The layout editing user interface 1462a includes a bottom front affordance 1470d that, when selected, causes system text (e.g., system text 1466a3) to be displayed in a lower portion of the preview user interface 1466a and in a stacked arrangement in front of the foreground element 1466a2 (e.g., stacked on top of and/or at least partially overlaying the foreground element).
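The four affordances combine two independent choices: the vertical position of the system text and its layering relative to the foreground element. A compact Swift model of this, with all names assumed for illustration:

```swift
enum TextPosition { case top, bottom }
enum TextLayering { case behindForeground, inFrontOfForeground }

struct PortraitLayout {
    var position: TextPosition = .top
    var layering: TextLayering = .behindForeground  // the "top-rear" default

    // Each affordance simply applies one (position, layering) pair.
    mutating func apply(position: TextPosition, layering: TextLayering) {
        self.position = position
        self.layering = layering
    }
}

var layoutChoice = PortraitLayout()
layoutChoice.apply(position: .top, layering: .inFrontOfForeground)  // cf. 1470b
```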
Layout editing user interface 1462a also includes a cancel affordance 1472 that, when selected, causes computer system 1400 to cancel the process for configuring and/or editing the portrait user interface (e.g., and return to displaying add portrait user interface 1404a). Layout editing user interface 1462a also includes a recycle bin affordance 1474 that, when selected, causes computer system 1400 to discard one or more of the edits that the user has made to the layout of the portrait user interface. The layout editing user interface 1462a also includes a completion affordance 1476 that, when selected, causes the computer system to finalize and/or save the portrait user interface with any edits as shown in the preview user interface 1466a. In some embodiments, completion affordance 1476 is not selectable if the currently selected layout of the portrait user interface meets certain criteria (e.g., if the currently selected layout would cause the system text to be obscured by at least a threshold amount). In some embodiments, completion affordance 1476 is grayed out when it is not selectable, to indicate that it cannot be selected.
In fig. 14G, the computer system 1400 receives an input 1450h1 on the preview user interface 1466b. Fig. 14G-14H illustrate a process by which a user can pan the portion of a media item to be displayed in the portrait user interface by directly manipulating the portion of the media item displayed within preview user interface 1466b via touch input (e.g., via touch and drag input). The layout editing user interface 1462b includes a preview user interface 1466b that is based on a media item including background element 1466b1, foreground element 1466b2, and system text 1466b3. In response to detecting input 1450h1, at fig. 14G, computer system 1400 displays a guide line 1478 indicating a location below which foreground element 1466b2 should be positioned to avoid obscuring system text 1466b3 by at least a threshold amount. Thus, the guide line 1478 helps the user position the media item so that it does not obscure too much of the system text 1466b3 (which would make the system text less readable). The layout editing user interface 1462b also includes instructions 1480a that indicate to the user where to position an aspect of the portrait user interface (e.g., an element of the media item). In particular, the instructions 1480a indicate that the user should position the foreground element 1466b2 below the line (e.g., guide line 1478), making it clearer to the user where to drag the media item within the portrait user interface so as not to obscure too much of the system text 1466b3. In some implementations, the computer system 1400 provides tactile feedback when the foreground element 1466b2 reaches the guide line 1478, when the foreground element 1466b2 moves above the guide line 1478, and/or when the foreground element 1466b2 moves from above the guide line 1478 to below the guide line 1478.
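The guide line's threshold test can be implemented by measuring how much of the system text's frame the foreground element covers. The sketch below computes the occluded fraction from two rectangles; CGRect and its intersection(_:) method are standard CoreGraphics API, while the 0.5 threshold is an assumption standing in for the unspecified "threshold amount".

```swift
import CoreGraphics

// Fraction of the system text's area covered by the foreground element.
func occludedFraction(systemText: CGRect, foreground: CGRect) -> CGFloat {
    let overlap = systemText.intersection(foreground)
    guard !overlap.isNull, systemText.width > 0, systemText.height > 0 else {
        return 0
    }
    return (overlap.width * overlap.height) / (systemText.width * systemText.height)
}

// True when the layout obscures the system text by at least the threshold
// amount (e.g., gating the completion affordance and the 1480b warning).
func obscuresTooMuch(systemText: CGRect, foreground: CGRect,
                     threshold: CGFloat = 0.5) -> Bool {
    occludedFraction(systemText: systemText, foreground: foreground) >= threshold
}
```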
Note that input 1450h1 represents an initial location at which a touch input is detected on preview user interface 1466b. In response to the input being dragged to a second location (e.g., as shown by input 1450h2 in fig. 14H below), the computer system updates the location of the media item within the preview user interface 1466b by an amount based on the difference between the initial location of the input on the display 1402 and the end location of the input on the display 1402 (e.g., the distance between the input 1450h1 and the input 1450h2). In some implementations, the position of the media item is directly manipulated, such that moving the input a particular distance (e.g., 0.05 inches, 0.1 inches, and/or 0.5 inches) on the display 1402 causes the media item to move a corresponding amount within the preview user interface 1466c. In some implementations, the media item may be directly manipulated until a boundary (e.g., a corner and/or an edge) of the media item is aligned with a boundary of the preview user interface 1466c, at which point the media item does not move further in response to additional movement of the user input beyond the boundary of the media item.
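A minimal sketch of this direct manipulation follows, assuming the media item is rendered larger than the preview and its origin is clamped so that the preview is always covered; function and parameter names are illustrative.

```swift
import CoreGraphics

/// Direct manipulation: the offset applied to the media item equals the
/// on-screen distance between the initial touch location (1450h1) and the
/// current location (1450h2), clamped so the media item's edges never pull
/// inside the preview's bounds.
func pannedOrigin(initialOrigin: CGPoint,
                  touchStart: CGPoint,
                  touchCurrent: CGPoint,
                  mediaSize: CGSize,
                  previewBounds: CGRect) -> CGPoint {
    let proposed = CGPoint(x: initialOrigin.x + (touchCurrent.x - touchStart.x),
                           y: initialOrigin.y + (touchCurrent.y - touchStart.y))
    // The media item must keep covering the preview, so its origin is
    // bounded between (previewMax - mediaSize) and previewMin.
    let lowerBoundX = previewBounds.maxX - mediaSize.width
    let lowerBoundY = previewBounds.maxY - mediaSize.height
    return CGPoint(x: min(max(proposed.x, lowerBoundX), previewBounds.minX),
                   y: min(max(proposed.y, lowerBoundY), previewBounds.minY))
}
```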
In some implementations, the instructions 1480a are displayed and/or updated after a user input has been received (e.g., when a touch input corresponding to the input 1450h1 has lifted off the display 1402). In some implementations, displaying instructions 1480a after input has been received, rather than displaying and/or updating instructions while input is being received, extends battery life by reducing the processing power required to display and/or update instructions 1480a.
In fig. 14H, the user input 1450h1 has been moved (e.g., dragged across the display 1402) to a position indicated by the input 1450h2 (e.g., a drag input from a first position to a second position). In response to receiving the drag input, the preview user interface 1466c includes the media item displayed at an updated position. The layout editing user interface 1462c includes a preview user interface 1466c that is based on the media item including background element 1466c1, foreground element 1466c2, and system text 1466c3. In the preview user interface 1466c, the foreground element 1466c2 has crossed the guide line 1478 and obscures the system text 1466c3 by more than a threshold amount, thereby reducing the readability of the system text 1466c3. Furthermore, in contrast to layout editing user interface 1462b, which includes instruction 1480a, layout editing user interface 1462c has been updated to include instruction 1480b, which includes an indication that system text 1466c3 (which includes representative time 10:09) is obscured (e.g., obscured by foreground element 1466c2).
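One way to quantify "obscured by more than a threshold amount" is to intersect the system text's frame with the foreground element's frame, as sketched below; using frames as a stand-in for the per-pixel segmentation mask is an assumption, since the patent does not specify the computation.

```swift
import CoreGraphics

/// Estimate how much of the system text (e.g., 1466c3) the foreground
/// element obscures, using the two layers' frames as a proxy for the
/// segmentation mask. Only meaningful when the text is layered behind
/// the foreground element.
func obscuredFraction(textFrame: CGRect, foregroundFrame: CGRect) -> CGFloat {
    let overlap = textFrame.intersection(foregroundFrame)
    guard !overlap.isNull, textFrame.width > 0, textFrame.height > 0 else { return 0 }
    return (overlap.width * overlap.height) / (textFrame.width * textFrame.height)
}
```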
At fig. 14I, upon receiving input 1450h2, computer system 1400 displays a layout editing user interface 1462d that includes foreground element 1466d2 displayed at an updated location that obscures the system text 1466d3. The layout editing user interface 1462d includes a preview user interface 1466d that is based on the media item including background element 1466d1, foreground element 1466d2, and system text 1466d3. Note that completion affordance 1476 is shown as grayed out, indicating that it is not selectable. In some embodiments, completion affordance 1476 is grayed out in accordance with a determination that at least a threshold amount of system text 1466d3 is obscured by foreground element 1466d2. At fig. 14I, the computer system 1400 receives an input 1450i (e.g., a tap input) on the top front affordance 1470b.
At fig. 14J, in response to receiving input 1450i on top front affordance 1470b as shown in fig. 14I, computer system 1400 updates system text 1466e3 to be displayed in an upper portion of preview user interface 1466e and in a stacked arrangement in front of foreground element 1466e2 (e.g., stacked on top of and/or at least partially overlaying the foreground element). The layout editing user interface 1462e includes a preview user interface 1466e that is based on the media item including background element 1466e1, foreground element 1466e2, and system text 1466e3. In fig. 14J, computer system 1400 displays the layout editing user interface 1462e in which, in contrast to fig. 14I, system text 1466e3 is displayed on top of (e.g., layered over) foreground element 1466e2 instead of behind foreground element 1466e2. The layout editing user interface 1462e also includes a layout indicator 1468b that indicates the updated layout selection (e.g., "top front") for the portrait user interface. Further, based on a determination that the face of the boy shown by foreground element 1466e2 is obscured by system text 1466e3, layout editing user interface 1462e includes instruction 1480c, which provides a visual indication that the face in the media item is obscured. In fig. 14J, the computer system 1400 receives an input 1450j on the bottom front affordance 1470c.
In fig. 14K, in response to receiving input 1450j on the bottom front affordance 1470c, the computer system displays a layout editing user interface 1462f that includes a preview user interface 1466f displayed in an updated layout and based on the media item including background element 1466f1, foreground element 1466f2, and system text 1466f3. In preview user interface 1466f, in response to the selection of the bottom front affordance 1470c, system text 1466f3 has been updated to be displayed in a lower portion of preview user interface 1466f and in a stacked arrangement in front of the foreground element (e.g., 1466f2).
The layout editing user interface 1462f includes a layout indicator 1468c that indicates the updated layout ("bottom front") (e.g., replacing the layout indicator 1468b included in fig. 14J). Further, the layout editing user interface 1462f does not include instructions (e.g., instructions 1480c, as shown in fig. 14J). In some implementations, displaying a layout editing user interface without instructions (e.g., 1480b or 1480c) indicates that the currently selected layout of the portrait user interface meets certain criteria (e.g., the current layout does not cause the system text to be obscured beyond a threshold amount).
In fig. 14L, the computer system 1400 displays a layout editing user interface 1462g that substantially corresponds to the layout editing user interface 1462f. The layout editing user interface 1462g includes a preview user interface 1466g that corresponds to an updated layout for displaying the portrait user interface and that is based on the media item including background element 1466g1, foreground element 1466g2, and system text 1466g3. The system text 1466g3 is displayed in a lower portion of the preview user interface 1466g and in front of the foreground element 1466g2 (e.g., in a "bottom front" arrangement corresponding to the layout indicator 1468c). At fig. 14L, computer system 1400 receives input 1460 (e.g., a pinch input) on preview user interface 1466g.
At fig. 14M, in response to receiving the input 1460, the media item included in the preview user interface 1466h is enlarged (e.g., displayed at a second zoom level that is different from the zoom level at which the media item was displayed in fig. 14L). The layout editing user interface 1462h includes a preview user interface 1466h that is based on the media item including background element 1466h1, foreground element 1466h2, and system text 1466h3. In some implementations, the difference between the zoom level at which the media item is displayed in the preview user interface 1466g and the zoom level at which the media item is displayed in the preview user interface 1466h is based at least in part on the length and/or magnitude of the input 1460. Note that zooming in on the media item includes zooming in on elements of the media item (e.g., foreground element 1466h2 and background element 1466h1) without zooming in on additional features included in the preview user interface 1466h, such as system text 1466h3. In other words, the zoom level of the system text 1466h3 is maintained while the zoom level of the media item included in the portrait user interface is edited. At fig. 14M, the computer system 1400 detects an input 1450k (e.g., a tap input) on the completion affordance 1476.
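A sketch of this selective zoom, assuming the background and foreground are sublayers of one media layer while the system text lives in a separate layer; the clamp range and all names are illustrative.

```swift
import UIKit

/// Pinch-to-zoom (input 1460): only the media layer is rescaled; the
/// system text keeps its own fixed zoom level because it is not a
/// sublayer of `mediaLayer`.
final class PreviewZoomController {
    let mediaLayer: CALayer            // holds background + foreground, not the text
    private var baseScale: CGFloat = 1.0

    init(mediaLayer: CALayer) { self.mediaLayer = mediaLayer }

    @objc func handlePinch(_ pinch: UIPinchGestureRecognizer) {
        switch pinch.state {
        case .began:
            baseScale = mediaLayer.affineTransform().a
        case .changed:
            // The zoom change is proportional to the magnitude of the pinch;
            // clamp so the media item keeps covering the preview.
            let scale = min(max(baseScale * pinch.scale, 1.0), 4.0)
            mediaLayer.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
        default:
            break
        }
    }
}
```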
At fig. 14N, in response to receiving input 1450k on completion affordance 1476 in fig. 14M, computer system 1400 displays add portrait user interface 1404b. The add portrait user interface 1404b is an updated version of the add portrait user interface 1404a in which the preview image 1412b has been updated to include the media item and layout selected for the portrait user interface in figs. 14B-14M, as discussed above. Preview image 1412b represents a portrait user interface that includes a background element, a foreground element, and system text indicating a representative time that is different from current time 1406a. In particular, preview image 1412b corresponds to the media item and/or layout selected for the portrait user interface as represented by preview user interface 1466h, which was displayed when the completion affordance 1476 was selected.
In fig. 14N, the computer system 1400 is in communication with computer system 600 (e.g., paired with computer system 600, or logged into the same user account as computer system 600). In fig. 14N, computer system 600 displays, via display 602, a watch user interface 1494a, which is a watch user interface that is not based on a media item that includes depth data. In fig. 14N, the computer system 1400 detects an input 1450l on the add affordance 1416, which corresponds to a request to add the portrait user interface to the computer system 600.
In fig. 14O, in response to receiving input 1450l on the add affordance 1416, the computer system 1400 transmits information corresponding to the portrait user interface to the computer system 600. In some embodiments, computer system 1400 transmits a request to display the portrait user interface to computer system 600, and in response to receiving the request, computer system 600 displays watch user interface 1494b via display 602, wherein displaying watch user interface 1494b includes simultaneously displaying: a media item comprising a background element 1494b1 and a foreground element 1494b2 segmented from the background element based on depth information, and system text 1494b3. Note that the media item included in the watch user interface 1494b and the layout of the watch user interface 1494b correspond to the media item and layout selected at the computer system 1400 (e.g., as described above with reference to figs. 14B-14M). The watch user interface 1494b includes system text 1494b3 that indicates a current time (e.g., 2:15) rather than the representative time indicated by system text 1412b3.
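The patent does not name a transport for this phone-to-watch transfer; purely as an illustration, a configuration payload could be queued over WatchConnectivity as below, where the payload fields and names are hypothetical.

```swift
import WatchConnectivity

/// Hypothetical payload describing the configured portrait user interface.
/// The patent only says "information corresponding to the portrait user
/// interface" is transmitted; these fields are assumptions.
struct PortraitFaceConfiguration: Codable {
    var mediaItemIdentifier: String
    var layout: String            // e.g. "bottomFront"
    var zoomLevel: Double
    var panOffsetX: Double
    var panOffsetY: Double
}

/// Queue the configuration for delivery to the watch. Assumes a WCSession
/// has already been activated elsewhere in the app.
func sendToWatch(_ configuration: PortraitFaceConfiguration) throws {
    guard WCSession.isSupported() else { return }
    let payload = try JSONEncoder().encode(configuration)
    // transferUserInfo queues delivery even if the watch is not reachable.
    _ = WCSession.default.transferUserInfo(["portraitFace": payload])
}
```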
In fig. 14P, after displaying watch user interface 1494b, computer system 600 displays watch user interface 1494c, which is an updated version of watch user interface 1494b that is based on a media item different from the media item of watch user interface 1494b. The watch user interface 1494c is based on a different media item that was also among the media items previously selected for use with the portrait user interface at the computer system 1400 (e.g., as shown in figs. 14D-14E above). In some implementations, the media item included in the portrait user interface is updated (e.g., changed) in response to an input received at the computer system 600 (e.g., a tap input). In some implementations, the media item included in the portrait user interface (e.g., 1494c) is updated in response to the passage of time (e.g., from 2:15 indicated by system text 1494b3 to 3:12 indicated by system text 1494c3).
Similar to watch user interface 1494b, watch user interface 1494c includes a media item that includes a background element 1494c1 and a foreground element 1494c2, as well as system text 1494c3. These elements are different from those displayed in the watch user interface 1494b. The background element 1494c1 and the foreground element 1494c2 are taken from a media item different from the media item used in the watch user interface 1494b, and the system text 1494c3 is updated to reflect the updated current time (e.g., 3:12), but the overall layout of the watch user interfaces 1494b and 1494c is the same. For example, the system text 1494c3 (in the lower portion of the watch user interface 1494c) is displayed in a "bottom front" layout and in a stacked arrangement in front of the foreground element 1494c2, as is the case in the watch user interface 1494b. Thus, upon transitioning from displaying watch user interface 1494b to displaying watch user interface 1494c, computer system 600 maintains the same portrait user interface layout, applies the layout to the updated media item, and updates the watch user interface based on a change in conditions of computer system 600 (e.g., a change in the current time). Meanwhile, computer system 1400 maintains display of the same user interface as described above with reference to fig. 14O, updating only the current time (e.g., 3:12 as shown by current time 1406b). Thus, the change in the portrait user interface illustrated at computer system 600 occurs independently of computer system 1400.
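A sketch of this watch-side update logic follows: the stored layout is reused while the media item rotates through the previously selected subset and the system text is re-rendered with the current time. Types and fields are illustrative, not disclosed by the patent.

```swift
import Foundation

/// Watch-side state: the layout chosen on the phone is kept fixed while
/// the displayed media item rotates through the previously selected set.
struct PortraitFaceState {
    var selectedMediaItems: [String]   // identifiers chosen on the phone
    let layout: String                 // unchanged across updates
    var currentIndex = 0

    /// Advance to the next media item and format the system text's time.
    mutating func nextRendering(now: Date = Date()) -> (mediaItem: String?, timeText: String) {
        let formatter = DateFormatter()
        formatter.dateFormat = "h:mm"  // system text shows the current time
        guard !selectedMediaItems.isEmpty else {
            return (nil, formatter.string(from: now))
        }
        currentIndex = (currentIndex + 1) % selectedMediaItems.count
        return (selectedMediaItems[currentIndex], formatter.string(from: now))
    }
}
```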
Fig. 14Q illustrates computer system 1400 displaying a layout editing user interface 1482a that shows an initial layout editing screen for editing the layout of a portrait user interface based on a media item having more than one subject (e.g., two or more foreground objects and/or two or more faces). The layout editing user interface 1482a includes a preview user interface 1484a based on a media item including depth data and having a background element 1484a1, a first foreground element 1484a2, a second foreground element 1484a3, and system text 1484a4. In some implementations, the computer system 1400 initially displays the layout editing user interface 1482a with both the foreground element 1484a2 and the foreground element 1484a3 framed within the preview user interface 1484a. In some implementations, for a media item containing a plurality of foreground objects, the computer system initially displays the layout editing user interface 1482a with pan and/or zoom configurations selected such that the foreground objects included in the media item will be framed in the preview user interface 1484a. As discussed above with reference to figs. 14F-14M, the user may, via user input, edit the stacked arrangement of system text 1484a4 with respect to foreground elements 1484a2 and 1484a3, pan across the media item to edit the portion of the media item displayed within preview user interface 1484a, or change the zoom level at which the media item is displayed. In fig. 14Q, computer system 1400 receives drag input 1485 over preview user interface 1484a.
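The initial multi-subject framing could be derived from the union of the subjects' bounding boxes, as in this sketch; how the device actually chooses pan and zoom is not disclosed, so treat this as one plausible reading with illustrative names.

```swift
import CoreGraphics

/// Choose an initial pan/zoom so that every detected foreground subject is
/// framed: compute the union of the subject bounding boxes and the largest
/// zoom at which that union still fits in the preview.
func initialFraming(subjectBoxes: [CGRect], previewSize: CGSize) -> (zoom: CGFloat, center: CGPoint) {
    guard let first = subjectBoxes.first else {
        return (1.0, CGPoint(x: previewSize.width / 2, y: previewSize.height / 2))
    }
    let union = subjectBoxes.dropFirst().reduce(first) { $0.union($1) }
    // Zoom out just enough that the union fits, with a 10% margin; this
    // sketch never zooms in past the default, since framing only needs
    // to zoom out.
    let zoom = min(previewSize.width / (union.width * 1.1),
                   previewSize.height / (union.height * 1.1))
    return (min(zoom, 1.0), CGPoint(x: union.midX, y: union.midY))
}
```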
In fig. 14R, in response to receiving the drag input 1485 of fig. 14Q, the computer system 1400 displays a layout editing user interface 1482b, which is an updated version of the layout editing user interface 1482a in which the position of the media item contained within the preview user interface 1484b has been edited in response to the drag input 1485. Thus, in fig. 14R, the position of the media item is edited such that foreground element 1484b2 is framed in preview user interface 1484b, but foreground element 1484b3 is outside of the portion of the media item that is included in preview user interface 1484b. Meanwhile, the stacked arrangement of the system text 1484b4 relative to the elements of the media item (background element 1484b1, foreground element 1484b2, and foreground element 1484b3) remains unchanged.
FIG. 15 is a flowchart illustrating a method for editing a user interface based on depth data of a previously captured media item using a computer system, according to some embodiments. The method 1500 is performed at a computer system (e.g., 100, 300, 500, smart phone, smart watch, wearable electronic device, desktop computer, laptop computer, and/or tablet computer) in communication with a display generation component and one or more input devices (e.g., a display controller and/or touch-sensitive display system). Some operations in method 1500 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1500 provides an intuitive way for editing a user interface based on depth data of previously captured media items. The method reduces the cognitive burden on the user to edit the user interface based on the depth data of previously captured media items, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to configure a user interface based on depth data of previously captured media items more quickly and efficiently saves power and increases the time between battery charges.
In some embodiments, method 1500 is used to edit a user interface that performs (e.g., is configured to perform) and/or embodies method 700 (e.g., fig. 7) and/or is a watch user interface as described in fig. 6A-6U.
In some implementations, a computer system (e.g., 1400) detects (1502) an input (e.g., a tap gesture, a long press gesture, etc.) corresponding to a request to display an editing user interface via one or more input devices. In response to detecting the input, the computer system displays (1504) an editing user interface (e.g., 1462a) (e.g., a crop user interface and/or a user interface for configuring a dial) via the display generation component. In some implementations, displaying the editing user interface includes simultaneously displaying the media item (e.g., photograph, video, GIF, and/or animation) and the system text (e.g., 1466a3) (e.g., a first time and/or the current date). In some embodiments, the computer system displays (1506) a media item (e.g., photograph, video, GIF, and/or animation) that includes a background element (e.g., 1466a1) and a foreground element (e.g., 1466a2) segmented from the background element based on depth information. In some implementations, the media item includes depth data (e.g., data that may be used to segment a foreground element from one or more background elements, such as data indicating that the foreground element is less than a threshold distance from one or more cameras when the media is captured and the background element is more than the threshold distance from the one or more cameras when the media is captured, a data set regarding the relative distances between a camera sensor and at least a first object and a second object in the field of view of the camera sensor when the media is captured, and/or multiple layers). In some embodiments, the background element and the foreground element are selected (in some embodiments, automatically) based on the depth data (e.g., in accordance with a determination that the background element is positioned behind the foreground element). In some implementations, the depth data is determined based on sensor information (e.g., image sensor information and/or depth sensor information) collected at the time of capturing the media item.
The computer system displays (1508) the system text (e.g., 1466a3), wherein the system text is displayed in a first layering arrangement (e.g., position) relative to the foreground element based on the depth information (e.g., in front of (e.g., at least partially visually overlaying) the foreground element(s) or behind (e.g., at least partially visually overlaid by) the foreground element(s)), and the foreground element of the media item is displayed at a first position relative to the system text (e.g., the media item is cropped to display a first portion of the media item and not a second portion of the media item).
The computer system (e.g., 1400) detects (1510) a user input (e.g., 1450i) directed to the editing user interface (e.g., 1462d) (e.g., a tap input, swipe input, long press input, and/or mouse click). In response to detecting (1512) the user input directed to the editing user interface, and in accordance with a determination that the user input is a first type of user input (e.g., an input corresponding to a user-interactive graphical user interface object for updating the layering arrangement of the system text relative to the foreground element), the computer system updates (1514) the system text (e.g., 1466d3) to be displayed in a second layering arrangement relative to the foreground element segmented based on the depth information of the media item. In some implementations, the computer system updates the system text to be displayed in the second layering arrangement relative to the foreground element segmented based on the depth information without changing a lateral position of the foreground element of the media item relative to the system text. In some embodiments, the second layering arrangement relative to the foreground element is different from the first layering arrangement relative to the foreground element.
In response to detecting (1512) the user input (e.g., 1450h1) directed to the editing user interface (e.g., 1462b), and in accordance with a determination that the user input is a second type of user input different from the first type of user input (e.g., in response to detecting a user input corresponding to a request to change a crop selection of the media item), the computer system (e.g., 1400) updates (1516) the media item such that the foreground element (e.g., 1466b2) of the media item is displayed at a second position relative to the system text, wherein the second position is different from the first position. In some implementations, the computer system updates the media item such that the foreground element of the media item is displayed at the second position relative to the system text without changing the layering arrangement (e.g., layering order) of the system text relative to the foreground element. In some embodiments, the second crop selection is different from the first crop selection. In some implementations, updating the media item to be displayed in the second crop selection includes displaying a portion of the media item that was not displayed when the media item was displayed in the first crop selection. In some implementations, updating the media item to be displayed in the second crop selection includes ceasing to display a portion of the media item that was displayed when the media item was displayed in the first crop selection. Conditionally updating the system text to be displayed in a second layering arrangement relative to the foreground element segmented based on the depth information of the media item, or updating the media item such that the foreground element of the media item is displayed at a second position, different from the first position, relative to the system text, based on a determination that the user input is a first type of user input or a second type of user input, reduces the number of inputs required to edit the configuration of the system text and/or the media item, which in turn reduces power usage and extends battery life of the device by enabling a user to use the system (e.g., to customize and edit the media item and the system text) more quickly and efficiently. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
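Items 1514 and 1516 amount to a dispatch on input type: the first type edits only the layering arrangement, the second only the position. A minimal, self-contained sketch with illustrative names:

```swift
import CoreGraphics

/// Dispatch for items 1514 and 1516 of method 1500. A first type of input
/// (selecting a layout affordance) changes only the layering arrangement,
/// while a second type (a drag) changes only the foreground's position.
enum TextLayering { case behindForeground, inFrontOfForeground }

enum EditingInput {
    case layoutAffordanceTap(TextLayering)    // first type of user input
    case drag(translation: CGPoint)           // second type of user input
}

struct EditorState {
    var layering: TextLayering
    var mediaOrigin: CGPoint
}

func handle(_ input: EditingInput, state: inout EditorState) {
    switch input {
    case .layoutAffordanceTap(let layering):
        state.layering = layering             // position left unchanged
    case .drag(let translation):
        state.mediaOrigin.x += translation.x  // layering left unchanged
        state.mediaOrigin.y += translation.y
    }
}
```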
In some embodiments, after detecting the user input (e.g., 1450i) directed to the editing user interface (e.g., 1462d), the computer system (e.g., 1400) detects a second user input (e.g., 1450j) directed to the editing user interface (e.g., 1462e). In some implementations, in response to detecting the second user input directed to the editing user interface, and in accordance with a determination that the second user input is a first type of user input (e.g., an input corresponding to a user-interactive graphical user interface object for updating the layering arrangement of the system text relative to the foreground element), the computer system updates the system text (e.g., 1466e3) to be displayed in a third layering arrangement relative to the foreground element (e.g., 1466e2) segmented based on the depth information of the media item. In some implementations, the computer system updates the system text to be displayed in the third layering arrangement relative to the foreground element without changing a lateral position of the foreground element of the media item relative to the system text. In some embodiments, the third layering arrangement is different from the second layering arrangement. In some implementations, in response to detecting the second user input directed to the editing user interface, and in accordance with a determination that the second user input is a second type of user input different from the first type of user input (e.g., in response to detecting a user input corresponding to a request to change the crop selection of the media item), the computer system updates the media item such that the foreground element of the media item is displayed at a third position relative to the system text, wherein the third position is different from the first position and the second position. In some implementations, the computer system updates the media item such that the foreground element of the media item is displayed at the third position relative to the system text without changing the layering arrangement (e.g., layering order) of the system text relative to the foreground element. Editing a first aspect in response to a first user input and editing a second, different aspect (e.g., the layering arrangement of the system text and/or the position of the foreground element) in response to a second user input reduces the number of inputs required to edit different aspects of the system text and/or the media item, which in turn reduces power usage and extends battery life of the device by enabling a user to use the system more quickly and efficiently.
In some embodiments, the editing user interface (e.g., 1462a) includes a set of one or more user-interactive graphical user interface objects (e.g., affordances) (e.g., 1470a, 1470b, 1470c, and/or 1470d) that, when selected, cause the computer system to update the layering arrangement of the system text (e.g., 1466a3) relative to the foreground element (e.g., 1466a2). Displaying an option to update the layering arrangement of the system text relative to the media item reduces the amount of input required to configure the system text, which in turn reduces power usage and extends the battery life of the device by enabling a user to more quickly and efficiently use the system.
In some implementations, the set of one or more user-interactive graphical user interface objects (e.g., 1470a, 1470b, 1470c, and/or 1470d) includes a first user-interactive graphical user interface object (e.g., 1470a) that, when selected, causes the computer system (e.g., 1400) to display the system text (e.g., 1466a3) in an upper portion of the media item and behind (e.g., at least partially visually overlaid by) the foreground element (e.g., 1466a2) of the media item. In some implementations, the computer system detects a user input (e.g., a tap gesture, swipe, press input, and/or mouse click) corresponding to selection of the first user-interactive graphical user interface object. In response to detecting the user input corresponding to selection of the first user-interactive graphical user interface object, the computer system updates the system text to be displayed in the upper portion of the media item and/or behind the foreground element of the media item (and optionally in front of one or more background elements of the media item). Displaying the system text in the upper portion of the media item and behind the foreground element of the media item in response to detecting user input corresponding to selection of the first user-interactive graphical user interface object reduces the amount of input required to configure the system text, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the system.
In some implementations, the set of one or more user-interactive graphical user interface objects (e.g., 1470a, 1470b, 1470c, and/or 1470d) includes a second user-interactive graphical user interface object (e.g., 1470b) that, when selected, causes the computer system (e.g., 1400) to display the system text (e.g., 1466a3) in an upper portion of the media item and in front of (e.g., at least partially visually overlaying) the foreground element (e.g., 1466a2) of the media item. In some implementations, the computer system detects a user input (e.g., 1450i) (e.g., a tap gesture, swipe, press input, and/or mouse click) corresponding to selection of the second user-interactive graphical user interface object. In response to detecting the user input corresponding to selection of the second user-interactive graphical user interface object, the computer system updates the system text to be displayed in the upper portion of the media item and/or in front of the foreground element of the media item. Displaying the system text in the upper portion of the media item and in front of the foreground element of the media item in response to detecting user input corresponding to selection of the second user-interactive graphical user interface object reduces the amount of input required to configure the system text, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the system.
In some implementations, the set of one or more user-interactive graphical user interface objects (e.g., 1470a, 1470b, 1470c, and/or 1470d) includes a third user-interactive graphical user interface object (e.g., 1470d) that, when selected, causes the computer system to display the system text (e.g., 1466a3) in a lower portion of the media item and behind (e.g., at least partially visually overlaid by) the foreground element (e.g., 1466a2) of the media item. In some implementations, the computer system (e.g., 1400) detects a user input (e.g., a tap gesture, swipe, press input, and/or mouse click) corresponding to selection of the third user-interactive graphical user interface object. In response to detecting the user input corresponding to selection of the third user-interactive graphical user interface object, the computer system updates the system text to be displayed in the lower portion of the media item and/or behind the foreground element of the media item (and optionally in front of one or more background elements of the media item). Displaying the system text in the lower portion of the media item and behind the foreground element of the media item in response to detecting user input corresponding to selection of the third user-interactive graphical user interface object reduces the amount of input required to configure the system text, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the system.
In some implementations, the set of one or more user-interactive graphical user interface objects (e.g., 1470a, 1470b, 1470c, and/or 1470d) includes a fourth user-interactive graphical user interface object (e.g., 1470c) that, when selected, causes the computer system (e.g., 1400) to display the system text (e.g., 1466a3) in a lower portion of the media item and in front of (e.g., at least partially visually overlaying) the foreground element (e.g., 1466a2) of the media item. In some implementations, the computer system detects a user input (e.g., 1450j) (e.g., a tap gesture, swipe, press input, and/or mouse click) corresponding to selection of the fourth user-interactive graphical user interface object. In response to detecting the user input corresponding to selection of the fourth user-interactive graphical user interface object, the computer system updates the system text to be displayed in the lower portion of the media item and/or in front of the foreground element of the media item. Displaying the system text in the lower portion of the media item and in front of the foreground element of the media item in response to detecting user input corresponding to selection of the fourth user-interactive graphical user interface object reduces the amount of input required to configure the system text, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the system.
In some implementations, upon displaying the editing user interface (e.g., 1462c), and wherein a first portion of the media item is included (e.g., displayed) in the editing user interface, the computer system (e.g., 1400) detects a third user input (e.g., 1450h2) (e.g., a tap input, swipe input, long press input, and/or mouse click) directed to the editing user interface. In response to detecting the third user input directed to the editing user interface, the computer system pans and/or zooms the media item (e.g., scrolls the media item such that an updated portion of the media item is included in the editing user interface, zooms in on the media item such that a smaller portion of the media item is included in the editing user interface, and/or zooms out on the media item such that a larger portion of the media item is included in the editing user interface). In some implementations, panning and/or zooming the media item includes causing a second portion of the media item that is different from the first portion of the media item to be included in the editing user interface. Panning and/or zooming the media item in response to detecting user input directed to the editing user interface reduces the amount of input required to pan and/or zoom the media item, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the system. Further, panning and/or zooming the media item in response to detecting the user input provides visual feedback as to which portion of the media item is selected for display in the watch user interface. Providing improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user know which of the displayed selectable objects has the selection focus, reducing the number of user inputs, and preventing the user from erroneously selecting an incorrect selectable object), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, upon detecting the third user input, and in accordance with a determination that a fourth layering arrangement of the system text (e.g., 1466c3) relative to the foreground element (e.g., 1466c2) and a fourth position of the foreground element relative to the system text (e.g., in combination, collectively) meet a first set of criteria (e.g., the system text is obscured by at least a threshold amount and/or the position of the foreground element is off-center by at least a threshold amount), the computer system (e.g., 1400) displays an indicator (e.g., 1480b) via the display generation component (e.g., a suggestion to edit the layering arrangement of the system text, a suggestion to edit the position of the foreground element relative to the system text, and/or a guide line). In some implementations, upon detecting the third user input, and in accordance with a determination that the fourth layering arrangement of the system text relative to the foreground element and the fourth position of the foreground element relative to the system text do not meet the first set of criteria (e.g., the system text is obscured by less than a threshold amount and/or the position of the foreground element is off-center by less than a threshold amount), the computer system forgoes displaying the indicator. Conditionally displaying the indicator when the layering arrangement of the system text relative to the foreground element and the position of the foreground element relative to the system text satisfy a set of criteria provides visual feedback as to whether the criteria have been met (e.g., whether the system text is obscured by at least a threshold amount and/or whether the position of the foreground element is off-center by at least a threshold amount).
In some implementations, determining that the fourth layering arrangement of the system text (e.g., 1466c3) relative to the foreground element (e.g., 1466c2) and the fourth position of the foreground element relative to the system text (e.g., in combination, collectively) meet the first set of criteria is based at least in part on the location (e.g., the location in the editing user interface) at which the system text is displayed. In some embodiments, the first set of criteria includes a determination that the system text is displayed above or below a threshold boundary. Conditionally displaying the indicator when the layering arrangement of the system text relative to the foreground element and the position of the foreground element relative to the system text satisfy a set of criteria, where the determination is based at least in part on the location at which the system text is displayed, provides visual feedback based on the location at which the system text is displayed.
In some implementations, determining that the fourth layering arrangement of the system text (e.g., 1466c3) relative to the foreground element (e.g., 1466c2) and the fourth position of the foreground element relative to the system text (e.g., in combination, collectively) meet the first set of criteria is based at least in part on the fourth position at which the foreground element is displayed (e.g., the position in the editing user interface). In some implementations, the first set of criteria includes a determination that the foreground element is displayed above or below a threshold boundary. Conditionally displaying the indicator when the layering arrangement of the system text relative to the foreground element and the position of the foreground element relative to the system text satisfy a set of criteria, where the determination is based at least in part on the position at which the foreground element is displayed, provides visual feedback based on the position at which the foreground element is displayed.
In some embodiments, determining that the fourth layering arrangement of the system text (e.g., 1466c3) relative to the foreground element (e.g., 1466c2) and the fourth position of the foreground element relative to the system text (e.g., in combination, collectively) meet the first set of criteria comprises determining that the system text is obscured (e.g., covered and/or occluded) by the foreground element by at least a threshold amount (e.g., based on the system text being displayed behind the foreground element of the media item (and optionally in front of one or more background elements of the media item)). Conditionally displaying the indicator when the layering arrangement of the system text relative to the foreground element and the position of the foreground element relative to the system text satisfy a set of criteria provides visual feedback as to whether the system text is obscured by the foreground element by at least a threshold amount.
In some implementations, the editing user interface (e.g., 1462h) includes a completion user-interactive graphical user interface object (e.g., 1476). In some implementations, the computer system (e.g., 1400) detects a user input (e.g., 1450k) (e.g., a tap input and/or a mouse click) on the completion user-interactive graphical user interface object. In response to detecting the user input on the completion user-interactive graphical user interface object, and in accordance with a determination that the system text is obscured (e.g., covered and/or occluded) by the foreground element by less than a threshold amount, the computer system ceases to display the editing user interface (e.g., displays a user interface other than the editing user interface). In some embodiments, in response to detecting the user input on the completion user-interactive graphical user interface object, and in accordance with a determination that the system text is obscured (e.g., covered and/or occluded) by the foreground element by at least a threshold amount, the computer system maintains display of the editing user interface. In some embodiments, the completion user-interactive graphical user interface object is not selectable when the system text is obscured by the foreground element by at least a threshold amount. In some embodiments, the completion user-interactive graphical user interface object includes a visual indication (e.g., a gray and/or shadow visual effect) that it is not selectable when the system text is obscured by the foreground element by at least a threshold amount. In some implementations, selecting the completion user-interactive graphical user interface object when the system text is obscured by the foreground element by less than a threshold amount causes the computer system to display a dial selection user interface or a dial gallery user interface. Conditionally maintaining the display of the editing user interface, or ceasing to display the editing user interface (e.g., by displaying a different user interface), in response to detecting a user input on the completion user-interactive graphical user interface object provides visual feedback as to whether additional editing is required in order to finish configuring the system text and/or the media item (e.g., such that a subsequent user interface is displayed).
In some implementations, the indicator includes a text indication (e.g., 1480a) corresponding to the editing user interface. In some embodiments, the text indication includes a suggestion to edit the location of the foreground element. In some embodiments, the text indication includes a suggestion to edit the location of the system text. In some implementations, the text indication includes a suggestion to update the layering arrangement of the system text relative to the foreground element. In some embodiments, the text indication indicates that the system text is obscured by at least a threshold amount. Conditionally displaying an indicator comprising a text indication corresponding to the editing user interface when the layering arrangement of the system text relative to the foreground element and the position of the foreground element relative to the system text satisfy a set of criteria provides visual feedback (e.g., a reason for displaying the indicator) related to configuring the media item and/or the system text.
In some implementations, the indicator includes a graphical indication of the boundary location (e.g., 1478). In some implementations, the graphical indication of the boundary location includes an axis (e.g., a line) below which the foreground element must remain so that the foreground element does not obscure the system text by at least a threshold amount. The conditional display of the indicator including the graphical indication of the boundary when the stacked arrangement of the system text relative to the foreground elements and the position of the foreground elements relative to the system text satisfy a set of criteria provides visual feedback related to configuring the media item and/or the system text (e.g., a line below or above which the media item and/or the foreground elements should be translated to improve the configuration).
In some embodiments, the computer system (e.g., 1400) displays an indicator (e.g., 1480a) before detecting a third user input (e.g., 1450h2) directed to the editing user interface (e.g., 1462b). In some implementations, at least a first portion of the indicator (e.g., a portion corresponding to the text indication and/or a portion corresponding to the graphical indication of the boundary location) is displayed in a first color (e.g., white or green). In some implementations, in response to detecting the third user input directed to the editing user interface, the computer system displays the first portion of the indicator in a second color (e.g., red or orange) different from the first color. Changing the color of the displayed portion of the indicator in response to detecting user input directed to the editing user interface provides visual feedback related to configuring the media item and/or the system text (e.g., by providing a visual indication that the current configuration of the media item and/or the system text includes one or more errors (e.g., at least a threshold amount of the system text is obscured by the foreground element)).
In some embodiments, the computer system (e.g., 1400) displays the editing user interface (e.g., 1462a) without the indicator before detecting the third user input (e.g., 1450h2). In some embodiments, the computer system updates the editing user interface to include the indicator while the third user input is maintained. In some implementations, the computer system updates the editing user interface to include the indicator when the third user input is detected and before the third user input has ended (e.g., before the touch input has lifted off the touch-sensitive surface). Updating the editing user interface to include the indicator while the third user input is maintained (e.g., before the corresponding touch input has lifted off the touch-sensitive surface) provides visual feedback related to configuring the media item and/or the system text (e.g., the indicator is displayed substantially as soon as, and for as long as, the editing of the configuration of the system text and/or the media item causes the first set of criteria to be met (e.g., causes at least a threshold amount of the system text to be obscured by the foreground element)).
In some embodiments, the computer system (e.g., 1400) displays the editing user interface (e.g., 1462a) without the indicator before detecting the third user input (e.g., 1450h2). In response to detecting the end of the third user input (e.g., the end of the touch input and/or the finger lifting off the touch-sensitive surface), the computer system displays the indicator (e.g., 1480b). Displaying the indicator in response to the third user input ending (e.g., when the corresponding touch input is lifted off the touch-sensitive surface) provides visual feedback related to configuring the media item and/or the system text (e.g., after a third input that causes the configuration of the system text and/or the media item to meet the first set of criteria).
In some implementations, in accordance with a determination that the media item includes a single foreground element (e.g., a person and/or a pet), the computer system (e.g., 1400) initially displays the media item at a fifth position based at least in part on (e.g., centered on) the single foreground element (e.g., 1466a2). In accordance with a determination that the media item includes two or more foreground elements (e.g., 1484b2 and 1484b3) (e.g., two or more persons and/or pets), the computer system initially displays the media item at a sixth position based at least in part on the two or more foreground elements. In some implementations, initially displaying the media item at the sixth position based at least in part on the two or more foreground elements includes positioning the media item on the display such that the two or more foreground elements will all be within the displayed (e.g., selected) portion of the media item. Initially displaying the media item at a fifth position based at least in part on a single foreground element, or at a sixth position based at least in part on two or more foreground elements, reduces the number of inputs required to edit the position of the media item such that the one or more foreground elements of the media item are displayed, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the system.
In some implementations, when displaying the editing user interface (e.g., 1462g) with a third portion of the media item displayed at a first zoom level, the computer system detects a swipe gesture (e.g., 1450h2) (e.g., a drag gesture) on the media item and/or detects a pinch gesture (e.g., 1460). In response to detecting the swipe gesture, the computer system updates the editing user interface by panning from the third portion of the media item to a fourth portion of the media item based on the swipe gesture (e.g., based on a magnitude of the swipe gesture and/or based on a direction of the swipe gesture). In response to detecting the pinch gesture, the computer system updates the editing user interface by displaying the media item at a second zoom level different from the first zoom level based on the pinch gesture (e.g., the magnitude of the change in the zoom level is based on the pinch gesture, and/or the direction of the zoom change (e.g., zoom out, zoom in) is based on the pinch gesture). Panning the media item in response to the swipe gesture and displaying the media item at a different zoom level in response to the pinch gesture reduce the amount of input required to configure the displayed portion of the media item and/or the zoom level at which the media item is displayed, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the system.
In some implementations, the computer system (e.g., 1400) maintains the display of the system text (e.g., 1466d3) when updating the editing user interface (e.g., 1462d) by panning from the third portion of the media item to the fourth portion of the media item, and the computer system maintains the display of the system text when updating the editing user interface by displaying the media item at the second zoom level that is different from the first zoom level. In some implementations, the location and/or zoom level of the system text remains unchanged while the computer system updates the editing user interface by panning from the third portion of the media item to the fourth portion of the media item and while the computer system updates the editing user interface by displaying the media item at the second zoom level that is different from the first zoom level. Maintaining the display of the system text while panning from the third portion of the media item to the fourth portion of the media item, and while updating the editing user interface by displaying the media item at a different zoom level, provides visual feedback that the position and/or zoom level of the system text is not edited while the displayed portion or zoom level of the media item is edited, and also provides visual feedback as to how the updated configuration of the media item and the system text will look.
In some implementations, before displaying the editing user interface (e.g., 1462a), the computer system (e.g., 1400) displays, via the display generation component (e.g., 1402), a media selection user interface (e.g., 1434c) that includes a first set of media items (e.g., 1454a, 1454b, and/or 1454c) (e.g., from a media library of the computer system). The computer system receives, via the one or more input devices, a sequence of one or more user inputs (e.g., 1450d, 1450e, and/or 1450f) (e.g., touch inputs, rotational inputs, and/or press inputs) corresponding to selection of a subset of the first set of media items that includes the media item. In response to receiving the sequence of one or more user inputs (e.g., touch inputs, rotational inputs, and/or press inputs) corresponding to selection of the subset of the first set of media items that includes the media item, the computer system displays the editing user interface, wherein the editing user interface includes the media item. In some implementations, the computer system generates a set of qualifying media items based at least in part on characteristics of the media items (e.g., availability of depth information, shape of the depth information, presence of particular types of points of interest (e.g., faces, pets, and/or favorite people), and/or locations of the points of interest (e.g., faces, pets, and/or important foreground elements) in the media items). In some implementations, the set of media items is a subset of a larger set of media items (e.g., a photo album) that is accessible from (e.g., stored on) the computer system. Displaying an editing user interface that includes the media item in response to a sequence of one or more user inputs corresponding to selection of a subset of the first set of media items that includes the media item reduces the number of inputs required to select one or more media items to be included in the editing user interface and to subsequently display the editing user interface including the one or more selected media items.
In some implementations, the first set of media items is selected so as to exclude media items that do not contain depth information. In some implementations, the first set of media items is selected so as to exclude media items that do not contain depth data having a particular shape and/or having a threshold degree of separation between foreground elements and background elements. In some implementations, in accordance with a determination that the plurality of media items does not contain at least one media item having depth data, the computer system forgoes adding a media item to the subset of media items selected for use with the user interface. In some implementations, the computer system determines whether the plurality of media items includes at least one media item having depth data by evaluating a plurality of media items available (e.g., accessible) to the computer system to determine whether a media item of the plurality of media items includes depth information. Displaying the first set of media items, where the first set of media items is selected so as to exclude media items that do not contain depth information, provides the user with media items including depth data without requiring the user to manually select media items including depth data to add to the first set of media items. Performing an operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
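This filtering, together with the subject criteria described in the next paragraph, could look like the following sketch; `MediaItem` and its fields stand in for whatever the media library actually exposes, and are assumptions.

```swift
/// Hypothetical media-library record; the real library's representation
/// of depth data and detected subjects is not disclosed by the patent.
struct MediaItem {
    let identifier: String
    let hasDepthData: Bool
    let detectedSubjects: [String]   // e.g. recognized faces or pets
}

/// Build the first set of media items by excluding anything without depth
/// data or without a qualifying subject.
func eligibleMediaItems(from library: [MediaItem]) -> [MediaItem] {
    library.filter { item in
        // Depth information is required to segment foreground from background.
        item.hasDepthData &&
        // A qualifying subject (face/pet) must also be present
        // (cf. the predetermined criteria in the next paragraph).
        !item.detectedSubjects.isEmpty
    }
}
```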
In some implementations, the first set of media items is selected so as to exclude media items that do not include one or more subjects meeting a first set of predetermined criteria (e.g., the presence of a particular type of point of interest (e.g., a face, a pet, and/or a favorite person), a degree of separation between foreground elements of the media item and background elements of the media item, and/or the location of a point of interest (e.g., a face, a pet, and/or an important foreground element) in the media). In some implementations, in accordance with a determination that the plurality of media items does not contain at least one media item meeting the first set of predetermined criteria, the computer system forgoes adding a media item to the first set of media items. In some implementations, the computer system determines whether the second plurality of media items includes at least one media item meeting the first set of criteria by evaluating a second plurality of media items available (e.g., accessible) to the computer system to determine whether media items in the second plurality of media items include one or more subjects meeting the first set of predetermined criteria. Displaying the first set of media items, where the first set of media items is selected so as to exclude media items that do not include one or more subjects meeting a first set of predetermined criteria, provides the user with media items meeting the predetermined criteria without requiring the user to manually select media items meeting the first set of predetermined criteria for addition to the first set of media items.
In some implementations, before displaying the media selection user interface (e.g., 1434c), the computer system (e.g., 1400) displays, via the display generation component (e.g., 1402), an album selection user interface (e.g., 1434b) (e.g., a user interface that includes options for selecting one or more albums containing photos with depth data). In some implementations, displaying the album selection user interface includes simultaneously displaying: a first album user-interactive graphical user interface object (e.g., 1448c) corresponding to a second set of media items that corresponds to a first identified subject (e.g., a first person or a first pet); and a second album user-interactive graphical user interface object (e.g., 1448d) corresponding to a third set of media items that corresponds to a second identified subject (e.g., a second person or a second pet) different from the first identified subject. While displaying the first album user-interactive graphical user interface object and the second album user-interactive graphical user interface object, the computer system detects a fourth user input (e.g., a tap input, a swipe input, a long press input, and/or a mouse click). In response to detecting the fourth user input and in accordance with a determination that the fourth user input corresponds to selection of the first album user-interactive graphical user interface object, the computer system displays the media selection user interface including the first set of media items, wherein the first set of media items includes the second set of media items (e.g., the media selection user interface includes options for selecting one or more media items from the second set of media items corresponding to the first identified subject). In response to detecting the fourth user input and in accordance with a determination that the fourth user input corresponds to selection of the second album user-interactive graphical user interface object, the computer system displays the media selection user interface including the first set of media items, wherein the first set of media items includes the third set of media items (e.g., the media selection user interface includes options for selecting one or more media items from the third set of media items corresponding to the second identified subject). Displaying a first option for selecting photos of a first subject that include depth data and a second option for selecting photos of a second (e.g., different) subject that include depth data provides the user with the ability to select photos of a particular subject that include depth data without having to manually identify that subject's media items containing depth data.
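The album-level branching can be sketched as a simple switch; the enum cases and parameter names below are hypothetical, chosen only to mirror the two-album example above:

```swift
// Sketch: choose which identified subject's media items populate the
// media selection user interface, based on the album object selected.
enum AlbumChoice {
    case firstIdentifiedSubject    // e.g., the object labeled 1448c
    case secondIdentifiedSubject   // e.g., the object labeled 1448d
}

func mediaItems(for choice: AlbumChoice,
                secondSet: [MediaItem],
                thirdSet: [MediaItem]) -> [MediaItem] {
    switch choice {
    case .firstIdentifiedSubject:  return secondSet
    case .secondIdentifiedSubject: return thirdSet
    }
}
```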
In some implementations, the media selection user interface (e.g., 1434c) includes an add-selected user-interactive graphical user interface object (e.g., 1442) (e.g., an "add" affordance). In some implementations, while displaying the media selection user interface, the computer system (e.g., 1400) receives a second sequence of one or more user inputs (e.g., 1450d, 1450e, and/or 1450f) (e.g., touch inputs, rotational inputs, and/or press inputs) corresponding to selection of one or more media items (e.g., 1454a, 1454b, and/or 1454c) included in the first set of media items. After receiving the second sequence of one or more user inputs, the computer system detects a fifth user input (e.g., 1450g) (e.g., a tap input, a swipe input, a long press input, and/or a mouse click) on the add-selected user-interactive graphical user interface object (e.g., 1442). In response to detecting the fifth user input, the computer system adds the selected one or more media items to a second subset of media items selected for use with the watch user interface. In some implementations, the second sequence of one or more user inputs corresponding to selection of one or more media items corresponds to one or more taps on the one or more media items, wherein a tap on a media item changes the selection state of the tapped media item (e.g., from selected to unselected, or vice versa). Adding the selected media items to the second subset of media items selected for use with the watch user interface in response to detecting a user input on the add-selected user-interactive graphical user interface object reduces the number of inputs needed to add the selected media items to the subset of media items selected for use with the watch user interface, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the system more quickly and efficiently.
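A sketch of the tap-to-toggle selection model and the add-selected commit step; the model's shape (a set of item identifiers) is an assumption of this example:

```swift
import Foundation

// Sketch: a tap toggles a media item's selection state; the add-selected
// affordance commits the current selection to the subset used by the
// watch user interface.
struct SelectionModel {
    private(set) var selectedIDs: Set<UUID> = []

    // A tap flips the tapped item between selected and unselected.
    mutating func toggle(_ itemID: UUID) {
        if selectedIDs.contains(itemID) {
            selectedIDs.remove(itemID)
        } else {
            selectedIDs.insert(itemID)
        }
    }

    // Invoked in response to the input on the add-selected object.
    mutating func commit(into watchFaceSubset: inout Set<UUID>) {
        watchFaceSubset.formUnion(selectedIDs)
        selectedIDs.removeAll()
    }
}
```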
In some embodiments, upon detecting a user input (e.g., 1450h1) directed to the editing user interface (e.g., 1462b), the computer system (e.g., 1400) causes a user interface based on the media item (e.g., 1494b) (e.g., a dial) to be added for display as a wake screen user interface (e.g., a user interface displayed when the device wakes or becomes active after being in an off and/or low-power state) of a corresponding electronic device (e.g., as a dial of a watch paired with the computer system, a dial of the computer system, or a lock screen user interface of the computer system). In some embodiments, displaying the user interface as a wake screen user interface includes simultaneously displaying: a media item (e.g., a photo, video, GIF, and/or animation) that includes a background element (e.g., 1494b1) and a foreground element (e.g., 1494b2) segmented from the background element based on depth information; and system text (e.g., 1494b3). In some implementations, the media item includes depth data (e.g., data that can be used to segment a foreground element from one or more background elements, such as data indicating that the foreground element was less than a threshold distance from the one or more cameras when the media was captured while the background element was more than the threshold distance away; data sets relating to the distances of objects in the media from the camera sensor, including the relative distances between at least a first object and a second object in the field of view of the camera sensor when the media was captured; and/or multiple layers). In some embodiments, the background element and the foreground element are selected (in some embodiments, automatically) based on the depth data (e.g., in accordance with a determination that the background element is positioned behind the foreground element). In some implementations, the system text is displayed in a fifth layered arrangement (e.g., position) relative to the foreground element based on the depth information (e.g., in front of (e.g., at least partially visually overlaying) the foreground element, or behind (e.g., at least partially visually overlaid by) the foreground element). In some embodiments, the system text is updated to include content dynamically selected based on the context of the computer system (e.g., a first time and/or the current date). Automatically creating a user interface in which displaying the user interface includes simultaneously displaying a media item, comprising a background element and a foreground element segmented from the background element based on depth information, together with system text displayed in a layered arrangement relative to the foreground element and having content dynamically selected based on the context of the computer system, enables the user interface to be displayed without requiring the user to provide multiple inputs to manually segment the media item into the segmented elements and/or to select which element of the media item should be the foreground element and which should be the background element.
Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by displaying the user interface upon determining that the media item includes a background element and a foreground element segmented from the background element based on depth information), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
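A hedged SwiftUI sketch of the layered wake-screen composition described above: background, then system text, then the depth-segmented foreground (or the text in front, when the layout calls for it). The view structure and property names are illustrative assumptions, not the disclosed implementation:

```swift
import SwiftUI

// Sketch: compose the wake screen from a depth-segmented media item and
// system text whose layer depends on the chosen arrangement.
struct WakeScreenView: View {
    let background: Image         // background element of the media item
    let foreground: Image         // foreground element segmented via depth data
    let textInFrontOfForeground: Bool
    @State private var now = Date()

    var body: some View {
        ZStack {
            background
            if !textInFrontOfForeground {
                systemText        // behind the foreground element
            }
            foreground
            if textInFrontOfForeground {
                systemText        // in front of the foreground element
            }
        }
    }

    // Content dynamically selected from the current context (here, the time).
    private var systemText: some View {
        Text(now, style: .time)
            .font(.largeTitle.bold())
    }
}
```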
In some embodiments, while displaying the user interface (e.g., 1494b), the computer system (e.g., 600 and/or 1400) determines that a second set of predetermined criteria has been met (e.g., the time has changed, the date has changed, the time zone has changed, and/or an input has been received). In response to determining that the second set of predetermined criteria has been met, the computer system updates the user interface to include updated system text (e.g., 1494c3) (e.g., an updated time and/or date) and/or an updated media item (e.g., a second media item (e.g., a photo, video, GIF, and/or animation) that includes a background element and a foreground element segmented from the background element based on depth information). In some embodiments, updating the system text includes modifying the system text to include different content. In some embodiments, updating the user interface to include updated system text includes displaying the updated system text in an updated (e.g., different) layered arrangement relative to the foreground element segmented based on the depth information of the media item. In some embodiments, updating the user interface to include an updated media item includes displaying the updated media item such that the foreground element of the updated media item is displayed at an updated (e.g., different) location relative to the system text. Updating the system text and/or the media item included in the user interface in response to determining that a set of predetermined criteria has been met enables the user interface to be updated based on new inputs and/or a new context (e.g., of the computer system) without requiring the user to provide multiple inputs at the computer system to cause the user interface to be updated (e.g., to configure the user interface to include an updated time and/or date or an updated media item).
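A sketch of the update path, where a minute timer stands in for the "predetermined criteria met" trigger; in practice the trigger set described above is broader (date, time zone, or input changes), so the timer here is purely an assumption for illustration:

```swift
import Combine
import Foundation

// Sketch: refresh the system text when the context changes. A minute timer
// is an assumed stand-in for the predetermined criteria described above.
final class SystemTextModel: ObservableObject {
    @Published private(set) var text: String = ""

    private var cancellable: AnyCancellable?
    private let formatter: DateFormatter = {
        let f = DateFormatter()
        f.timeStyle = .short
        return f
    }()

    init() {
        text = formatter.string(from: Date())
        cancellable = Timer.publish(every: 60, on: .main, in: .common)
            .autoconnect()
            .sink { [weak self] date in
                guard let self = self else { return }
                self.text = self.formatter.string(from: date)
            }
    }
}
```

A SwiftUI view observing this model would redraw the system text layer whenever `text` changes.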
It is noted that the details of the process described above with respect to method 1500 (e.g., fig. 15) also apply in a similar manner to the methods described herein. For example, method 1500 optionally includes one or more of the features of the various methods described herein with reference to method 700, method 900, method 1100, and method 1300. For example, method 700 optionally includes one or more of the features of the various methods described above with reference to method 1500. For example, a user interface based on media items including depth data as described with reference to figs. 14A-14R may be the same watch user interface as described above with reference to figs. 6A-6U. As another example, method 900 optionally includes one or more of the features of the various methods described above with reference to method 1500. For example, the layout editing user interface as described with reference to figs. 14A-14R may be used to edit a watch user interface as described with reference to figs. 8A-8M. As another example, method 1100 optionally includes one or more of the features of the various methods described above with reference to method 1500. For example, the watch user interface as shown with reference to figs. 10A-10W may be added to a different computer system via the computer system as shown with reference to figs. 14A-14R. As another example, method 1300 optionally includes one or more of the features of the various methods described above with reference to method 1500. For example, in response to a press input on a rotatable and depressible input mechanism as described above with reference to figs. 12A-12W, a user interface based on media items with depth data as described above with reference to figs. 14A-14R may be added to a computer system. For the sake of brevity, these details are not repeated below.
The foregoing description has, for purposes of explanation, been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and the various embodiments, with various modifications, as are suited to the particular use contemplated.
While the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It should be understood that such variations and modifications are considered to be included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve the delivery to users of a watch user interface or any other content that may be of interest to them. The present disclosure contemplates that, in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, social network IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital sign measurements, medication information, and/or exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology can be used to the benefit of users. For example, the personal information data can be used to deliver a targeted watch user interface that is of greater interest to the user. Accordingly, the use of such personal information data enables users to have calculated control of the delivered content. Further, other uses of personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently adhere to privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data as private and secure. Such policies should be easily accessible by users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different personal data types in each country.
Notwithstanding the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of suggested watch user interface options, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can choose not to provide mood-associated data for the suggested watch user interface options. In yet another example, users can choose to limit the length of time mood-associated data is maintained, or entirely prohibit the development of a baseline mood profile. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, a user interface can be suggested to a user by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with the user, other non-personal information available to the computer system, or publicly available information.

Claims (60)

1. A computer system, comprising:
one or more processors, wherein the computer system is in communication with a display generation component and one or more input devices; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and
in response to receiving the input:
in accordance with a determination that the media item meets a first set of predetermined criteria, displaying, via the display generation component, the user interface based on the media item, wherein displaying the user interface includes simultaneously displaying:
the media item, wherein the media item comprises a background element and a foreground element segmented from the background element based on depth information; and
system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on a context of the computer system; and
in accordance with a determination that the media item does not meet the first set of predetermined criteria, displaying the user interface via the display generation component, wherein displaying the user interface includes simultaneously displaying:
the media item, wherein the media item includes the background element and the foreground element segmented from the background element based on the depth information; and
the system text, wherein the system text is displayed in front of the background element and in front of the foreground element and has content dynamically selected based on the context of the computer system.
2. The computer system of claim 1, wherein displaying the system text comprises:
in accordance with a determination that the input is received in a first context, displaying first content in the system text; and
in accordance with a determination that the input is received in a second context, displaying second content, different from the first content, in the system text.
3. The computer system of claim 1, the one or more programs further comprising instructions for:
detecting a change in the context of the computer system; and
in response to detecting the change in the context of the computer system, updating the system text based at least in part on the change in the context.
4. The computer system of claim 1, wherein the media item-based user interface is a dial.
5. The computer system of claim 1, wherein the user interface is an initial display screen of the computer system when the computer system transitions from a low power state to a higher power state.
6. The computer system of claim 1, wherein displaying the user interface comprises displaying an animation, wherein the animation comprises a change in appearance over time of one or more of the elements of the user interface based at least in part on the depth information.
7. The computer system of claim 6, wherein the animation comprises simulating a zoom effect.
8. The computer system of claim 6, wherein the animation comprises simulating a dolly zoom effect.
9. The computer system of claim 6, wherein the animation includes reducing a blur of the foreground element and/or magnifying the foreground element.
10. The computer system of claim 6, wherein the animation comprises a parallax effect.
11. The computer system of claim 1, the one or more programs further comprising instructions for:
detecting movement while the computer system is in a higher power state; and
in response to detecting the movement, displaying, via the display generating component, the user interface with a simulated parallax effect having a direction and/or magnitude determined based on a direction and/or magnitude of the movement.
12. The computer system of claim 1, the one or more programs further comprising instructions for:
displaying, via the display generation component, an editing user interface for editing a first complication of the user interface;
Receiving, via the one or more input devices, a first sequence of one or more user inputs while the editing user interface is displayed; and
in response to receiving the first sequence of one or more user inputs:
editing the first complication.
13. The computer system of claim 1, wherein the system text displayed in the user interface is displayed in a first font, and the one or more programs further comprise instructions for:
after displaying the user interface in which the system text is displayed in the first font, receiving a request to edit the user interface via the one or more input devices;
in response to receiving the request to edit the user interface, displaying, via the display generation component, an editing user interface for editing the user interface;
receiving, via the one or more input devices, a second sequence of one or more user inputs while the editing user interface is displayed;
responsive to receiving a second sequence of the one or more user inputs, selecting a second font for the system text; and
After selecting the second font for the system text, displaying the user interface, wherein the system text displayed in the user interface is displayed in a second font different from the first font.
14. The computer system of claim 1, wherein the system text displayed in the user interface is displayed in a first color, and the one or more programs further comprise instructions for:
after displaying the user interface in which the system text is displayed in a first color, receiving, via the one or more input devices, a second request to edit the user interface;
in response to receiving the second request to edit the user interface, displaying, via the display generation component, an editing user interface for editing the user interface;
receiving, via the one or more input devices, a third sequence of one or more user inputs while the editing user interface is displayed;
selecting a second color for the system text in response to receiving a third sequence of the one or more user inputs; and
after selecting the second color for the system text, displaying the user interface, wherein the system text displayed in the user interface is displayed in a second color different from the first color.
15. The computer system of claim 1, the one or more programs further comprising instructions for:
detecting whether a predetermined condition has been satisfied; and
in response to detecting that the predetermined condition has been met:
displaying the user interface, wherein the user interface is based on a second media item and not on the media item, and wherein displaying the user interface comprises simultaneously displaying:
the second media item, wherein the second media item includes a second background element and a second foreground element segmented from the second background element based on depth information; and
system text, wherein the system text is displayed in front of the second background element and behind the second foreground element and has content dynamically selected based on the context of the computer system.
16. The computer system of claim 1, the one or more programs further comprising instructions for:
displaying, via the display generating component, a media selection user interface comprising a set of media items;
receiving, via the one or more input devices, a fourth sequence of one or more user inputs corresponding to a selection of a subset of the set of media items that includes a third media item; and
in response to receiving the fourth sequence of one or more user inputs corresponding to selection of the subset of the set of media items that includes the third media item, displaying the user interface, wherein the user interface is based on the third media item.
17. The computer system of claim 1, the one or more programs further comprising instructions for:
in accordance with a determination that a plurality of media items includes at least one media item that meets a first set of predetermined criteria, adding one or more media items that meet the first set of predetermined criteria to a subset of media items selected for use with the user interface; and
after adding one or more media items meeting the first set of predetermined criteria to the subset of media items, displaying the user interface, wherein displaying the user interface comprises:
automatically selecting a fourth media item from the subset of media items selected for use with the user interface; and
displaying the fourth media item after it is selected from the subset of media items selected for use with the user interface.
18. The computer system of claim 17, wherein the determination that a media item meets the first set of predetermined criteria comprises: determining that displaying the system text behind the foreground element would not obscure more than a threshold amount of the system text.
19. The computer system of claim 1, the one or more programs further comprising instructions for:
in accordance with a determination that the media item meets the first set of predetermined criteria, displaying system text in an upper portion of the user interface; and
in accordance with a determination that the media item does not meet the first set of predetermined criteria, displaying system text in a lower portion of the user interface.
20. The computer system of claim 1, wherein displaying the user interface comprises simultaneously displaying a second complication, wherein the second complication is displayed in front of the foreground element.
21. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system, wherein the computer system is in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and
In response to receiving the input:
in accordance with a determination that the media item meets a first set of predetermined criteria, displaying, via the display generation component, the user interface based on the media item, wherein displaying the user interface includes simultaneously displaying:
the media item, wherein the media item comprises a background element and a foreground element segmented from the background element based on depth information; and
system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on a context of the computer system; and
in accordance with a determination that the media item does not meet the first set of predetermined criteria,
displaying the user interface via the display generating component, wherein displaying the user interface includes simultaneously displaying:
the media item, wherein the media item includes the background element and the foreground element segmented from the background element based on the depth information; and
the system text, wherein the system text is displayed in front of the background element and in front of the foreground element and has content dynamically selected based on the context of the computer system.
22. A method, comprising:
at a computer system in communication with a display generation component and one or more input devices:
receiving, via the one or more input devices, input corresponding to a request to display a media item-based user interface; and
in response to receiving the input:
in accordance with a determination that the media item meets a first set of predetermined criteria, displaying, via the display generation component, the user interface based on the media item, wherein displaying the user interface includes simultaneously displaying:
the media item, wherein the media item comprises a background element and a foreground element segmented from the background element based on depth information; and
system text, wherein the system text is displayed in front of the background element and behind the foreground element and has content dynamically selected based on a context of the computer system; and
in accordance with a determination that the media item does not meet the first set of predetermined criteria, displaying the user interface via the display generation component, wherein displaying the user interface includes simultaneously displaying:
the media item, wherein the media item includes the background element and the foreground element segmented from the background element based on the depth information; and
The system text, wherein the system text is displayed in front of the background element and in front of the foreground element and has content dynamically selected based on the context of the computer system.
23. The non-transitory computer-readable storage medium of claim 21, wherein displaying the system text comprises:
in accordance with a determination that the input is received in a first context, displaying first content in the system text; and
in accordance with a determination that the input is received in a second context, displaying second content, different from the first content, in the system text.
24. The non-transitory computer readable storage medium of claim 21, the one or more programs further comprising instructions for:
detecting a change in the context of the computer system; and
in response to detecting the change in the context of the computer system, updating the system text based at least in part on the change in the context.
25. The non-transitory computer-readable storage medium of claim 21, wherein the media item-based user interface is a dial.
26. The non-transitory computer readable storage medium of claim 21, wherein the user interface is an initial display screen of the computer system when the computer system transitions from a low power state to a higher power state.
27. The non-transitory computer-readable storage medium of claim 21, wherein displaying the user interface comprises displaying an animation, wherein the animation comprises a change in appearance over time of one or more of the elements of the user interface based at least in part on the depth information.
28. The non-transitory computer-readable storage medium of claim 27, wherein the animation comprises simulating a zoom effect.
29. The non-transitory computer-readable storage medium of claim 27, wherein the animation comprises simulating a dolly zoom effect.
30. The non-transitory computer-readable storage medium of claim 27, wherein the animation includes reducing a blur of the foreground element and/or magnifying the foreground element.
31. The non-transitory computer-readable storage medium of claim 27, wherein the animation comprises a parallax effect.
32. The non-transitory computer readable storage medium of claim 21, the one or more programs further comprising instructions for:
detecting movement while the computer system is in a higher power state; and
in response to detecting the movement, displaying, via the display generating component, the user interface with a simulated parallax effect having a direction and/or magnitude determined based on a direction and/or magnitude of the movement.
33. The non-transitory computer readable storage medium of claim 21, the one or more programs further comprising instructions for:
displaying, via the display generation component, an editing user interface for editing a first complication of the user interface;
receiving, via the one or more input devices, a first sequence of one or more user inputs while the editing user interface is displayed; and
in response to receiving the first sequence of one or more user inputs:
editing the first complication.
34. The non-transitory computer-readable storage medium of claim 21, wherein the system text displayed in the user interface is displayed in a first font, and the one or more programs further comprise instructions for:
After displaying the user interface in which the system text is displayed in the first font, receiving a request to edit the user interface via the one or more input devices;
in response to receiving the request to edit the user interface, displaying, via the display generation component, an editing user interface for editing the user interface;
receiving, via the one or more input devices, a second sequence of one or more user inputs while the editing user interface is displayed;
responsive to receiving a second sequence of the one or more user inputs, selecting a second font for the system text; and
after selecting the second font for the system text, displaying the user interface, wherein the system text displayed in the user interface is displayed in a second font different from the first font.
35. The non-transitory computer-readable storage medium of claim 21, wherein the system text displayed in the user interface is displayed in a first color, and the one or more programs further comprise instructions for:
after displaying the user interface in which the system text is displayed in a first color, receiving, via the one or more input devices, a second request to edit the user interface;
in response to receiving the second request to edit the user interface, displaying, via the display generation component, an editing user interface for editing the user interface;
receiving, via the one or more input devices, a third sequence of one or more user inputs while the editing user interface is displayed;
selecting a second color for the system text in response to receiving a third sequence of the one or more user inputs; and
after selecting the second color for the system text, displaying the user interface, wherein the system text displayed in the user interface is displayed in a second color different from the first color.
36. The non-transitory computer readable storage medium of claim 21, the one or more programs further comprising instructions for:
detecting whether a predetermined condition has been satisfied; and
in response to detecting that the predetermined condition has been met:
displaying the user interface, wherein the user interface is based on a second media item and not on the media item, and wherein displaying the user interface comprises simultaneously displaying:
the second media item, wherein the second media item includes a second background element and a second foreground element segmented from the second background element based on depth information; and
system text, wherein the system text is displayed in front of the second background element and behind the second foreground element and has content dynamically selected based on the context of the computer system.
37. The non-transitory computer readable storage medium of claim 21, the one or more programs further comprising instructions for:
displaying, via the display generating component, a media selection user interface comprising a set of media items;
receiving, via the one or more input devices, a fourth sequence of one or more user inputs corresponding to a selection of a subset of the set of media items that includes a third media item; and
in response to receiving the fourth sequence of one or more user inputs corresponding to selection of the subset of the set of media items that includes the third media item, displaying the user interface, wherein the user interface is based on the third media item.
38. The non-transitory computer readable storage medium of claim 21, the one or more programs further comprising instructions for:
In accordance with a determination that a plurality of media items includes at least one media item that meets a first set of predetermined criteria, adding one or more media items that meet the first set of predetermined criteria to a subset of media items selected for use with the user interface; and
after adding one or more media items meeting the first set of predetermined criteria to the subset of media items, displaying the user interface, wherein displaying the user interface comprises:
automatically selecting a fourth media item from the subset of media items selected for use with the user interface; and
displaying the fourth media item after it is selected from the subset of media items selected for use with the user interface.
39. The non-transitory computer-readable storage medium of claim 38, wherein the determination that a media item meets the first set of predetermined criteria comprises: determining that displaying the system text behind the foreground element would not obscure more than a threshold amount of the system text.
40. The non-transitory computer readable storage medium of claim 21, the one or more programs further comprising instructions for:
In accordance with a determination that the media item meets the first set of predetermined criteria, displaying system text in an upper portion of the user interface; and
in accordance with a determination that the media item does not meet the first set of predetermined criteria, displaying system text in a lower portion of the user interface.
41. The non-transitory computer-readable storage medium of claim 21, wherein displaying the user interface comprises simultaneously displaying a second complication, wherein the second complication is displayed in front of the foreground element.
42. The method of claim 22, wherein displaying the system text comprises:
in accordance with a determination that the input is received in a first context, displaying first content in the system text; and
in accordance with a determination that the input is received in a second context, displaying second content, different from the first content, in the system text.
43. The method of claim 22, further comprising:
detecting a change in the context of the computer system; and
in response to detecting the change in the context of the computer system, updating the system text based at least in part on the change in the context.
44. The method of claim 22, wherein the media item-based user interface is a dial.
45. The method of claim 22, wherein the user interface is an initial display screen of the computer system when the computer system transitions from a low power state to a higher power state.
46. The method of claim 22, wherein displaying the user interface comprises displaying an animation, wherein the animation comprises a change in appearance over time of one or more of the elements of the user interface based at least in part on the depth information.
47. The method of claim 46, wherein the animation comprises simulating a zoom effect.
48. The method of claim 46, wherein the animation comprises simulating a dolly zoom effect.
49. The method of claim 46, wherein the animation includes reducing a blur of the foreground element and/or magnifying the foreground element.
50. The method of claim 46, wherein the animation comprises a parallax effect.
51. The method of claim 22, further comprising:
detecting movement while the computer system is in a higher power state; and
In response to detecting the movement, displaying, via the display generating component, the user interface with a simulated parallax effect having a direction and/or magnitude determined based on a direction and/or magnitude of the movement.
52. The method of claim 22, further comprising:
displaying, via the display generation component, an editing user interface for editing a first complication of the user interface;
receiving, via the one or more input devices, a first sequence of one or more user inputs while the editing user interface is displayed; and
in response to receiving the first sequence of one or more user inputs:
editing the first complication.
53. The method of claim 22, wherein the system text displayed in the user interface is displayed in a first font, the method further comprising:
after displaying the user interface in which the system text is displayed in the first font, receiving a request to edit the user interface via the one or more input devices;
in response to receiving the request to edit the user interface, displaying, via the display generation component, an editing user interface for editing the user interface;
Receiving, via the one or more input devices, a second sequence of one or more user inputs while the editing user interface is displayed;
responsive to receiving a second sequence of the one or more user inputs, selecting a second font for the system text; and
after selecting the second font for the system text, displaying the user interface, wherein the system text displayed in the user interface is displayed in a second font different from the first font.
54. The method of claim 22, wherein the system text displayed in the user interface is displayed in a first color, the method further comprising:
after displaying the user interface in which the system text is displayed in a first color, receiving, via the one or more input devices, a second request to edit the user interface;
in response to receiving the second request to edit the user interface, displaying, via the display generation component, an editing user interface for editing the user interface;
receiving, via the one or more input devices, a third sequence of one or more user inputs while the editing user interface is displayed;
Selecting a second color for the system text in response to receiving a third sequence of the one or more user inputs; and
after selecting the second color for the system text, displaying the user interface, wherein the system text displayed in the user interface is displayed in a second color different from the first color.
55. The method of claim 22, further comprising:
detecting whether a predetermined condition has been satisfied; and
in response to detecting that the predetermined condition has been met:
displaying the user interface, wherein the user interface is based on a second media item and not on the media item, and wherein displaying the user interface comprises simultaneously displaying:
the second media item, wherein the second media item includes a second background element and a second foreground element segmented from the second background element based on depth information; and
system text, wherein the system text is displayed in front of the second background element and behind the second foreground element and has content dynamically selected based on the context of the computer system.
56. The method of claim 22, further comprising:
Displaying, via the display generating component, a media selection user interface comprising a set of media items;
receiving, via the one or more input devices, a fourth sequence of one or more user inputs corresponding to a selection of a subset of the set of media items that includes a third media item; and
in response to receiving the fourth sequence of one or more user inputs corresponding to selection of the subset of the set of media items that includes the third media item, displaying the user interface, wherein the user interface is based on the third media item.
57. The method of claim 22, further comprising:
in accordance with a determination that a plurality of media items includes at least one media item that meets a first set of predetermined criteria, adding one or more media items that meet the first set of predetermined criteria to a subset of media items selected for use with the user interface; and
after adding one or more media items meeting the first set of predetermined criteria to the subset of media items, displaying the user interface, wherein displaying the user interface comprises:
automatically selecting a fourth media item from the subset of media items selected for use with the user interface; and
displaying the fourth media item after it is selected from the subset of media items selected for use with the user interface.
58. The method of claim 57, wherein the determination that a media item meets the first set of predetermined criteria comprises: determining that displaying the system text behind the foreground element would not obscure more than a threshold amount of the system text.
59. The method of claim 22, further comprising:
in accordance with a determination that the media item meets the first set of predetermined criteria, displaying system text in an upper portion of the user interface; and
in accordance with a determination that the media item does not meet the first set of predetermined criteria, displaying system text in a lower portion of the user interface.
60. The method of claim 22, wherein displaying the user interface comprises simultaneously displaying a second complication, wherein the second complication is displayed in front of the foreground element.
CN202311634654.5A 2021-05-14 2022-05-13 Time-dependent user interface Pending CN117421087A (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US63/188,801 2021-05-14
US63/197,447 2021-06-06
US17/738,940 US11921992B2 (en) 2021-05-14 2022-05-06 User interfaces related to time
US17/738,940 2022-05-06
PCT/US2022/029279 WO2022241271A2 (en) 2021-05-14 2022-05-13 User interfaces related to time
CN202280026198.3A CN117242430A (en) 2021-05-14 2022-05-13 Time-dependent user interface

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202280026198.3A Division CN117242430A (en) 2021-05-14 2022-05-13 Time-dependent user interface

Publications (1)

Publication Number Publication Date
CN117421087A true CN117421087A (en) 2024-01-19

Family

ID=89086668

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202311634654.5A Pending CN117421087A (en) 2021-05-14 2022-05-13 Time-dependent user interface
CN202280026198.3A Pending CN117242430A (en) 2021-05-14 2022-05-13 Time-dependent user interface

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202280026198.3A Pending CN117242430A (en) 2021-05-14 2022-05-13 Time-dependent user interface

Country Status (1)

Country Link
CN (2) CN117421087A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059756A (en) * 2007-05-16 2007-10-24 珠海金山软件股份有限公司 Device and method for user operating sheltered area
US20110202834A1 (en) * 2010-02-12 2011-08-18 Microsoft Corporation Visual motion feedback for user interface
US20120212495A1 (en) * 2008-10-23 2012-08-23 Microsoft Corporation User Interface with Parallax Animation
CN106814886A (en) * 2015-11-30 2017-06-09 阿里巴巴集团控股有限公司 The methods of exhibiting and device of banner banner pictures
US20170357427A1 (en) * 2016-06-10 2017-12-14 Apple Inc. Context-specific user interfaces
US20180246635A1 (en) * 2017-02-24 2018-08-30 Microsoft Technology Licensing, Llc Generating user interfaces combining foreground and background of an image with user interface elements
US20180329587A1 (en) * 2017-05-12 2018-11-15 Apple Inc. Context-specific user interfaces

Also Published As

Publication number Publication date
CN117242430A (en) 2023-12-15

Similar Documents

Publication Publication Date Title
US11921992B2 (en) User interfaces related to time
AU2020267396B2 (en) Media browsing user interface with intelligently selected representative media items
JP6905618B2 (en) Context-specific user interface
US20220342514A1 (en) Techniques for managing display usage
US20220198984A1 (en) Dynamic user interface with time indicator
CN116719595A (en) User interface for media capture and management
US11921998B2 (en) Editing features of an avatar
CN115867929A (en) User interface for messages
CN117581187A (en) User interface for physical condition
CN117793524A (en) User interface for managing accessories
CN113454983B (en) User interface for managing media
KR102685525B1 (en) Time-related user interfaces
CN117421087A (en) Time-dependent user interface
WO2020227273A1 (en) Media browsing user interface with intelligently selected representative media items
WO2023235557A1 (en) User interfaces for managing accessories
KR20230150875A (en) Set Content Item User Interface
CN117099094A (en) Aggregation content item user interface
CN118283172A (en) Low bandwidth and emergency communication user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination