CN117041416A - User interface for managing shared content sessions


Info

Publication number: CN117041416A
Authority: CN (China)
Prior art keywords: computer system, session, real, shared content, user interface
Legal status: Pending
Application number: CN202310520843.3A
Other languages: Chinese (zh)
Inventors: 张宰祐, J·R·艾齐恩, J·A·福特
Current assignee: Apple Inc
Original assignee: Apple Inc
Priority claimed from US18/067,350 (published as US20230370507A1)
Application filed by Apple Inc
Priority to CN202310585927.5A (published as CN117041417A)
Publication of CN117041416A


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72463: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/1066: Session management
    • H04L 65/1069: Session establishment or de-establishment
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72469: User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons

Abstract

The present disclosure relates generally to user interfaces for managing shared content sessions. In some embodiments, if the shared content session is initiated via asynchronous communication, the shared content session is initiated with real-time communication features disabled. In some embodiments, an option is provided for joining a real-time communication session while the shared content session is active. In some embodiments, when a communication session is active, a user interface is displayed that includes a representation of an application configured to provide content that can be played as synchronized content during the communication session. In some embodiments, if a computer system detects that an external computer system is in a real-time communication session and a set of criteria is met, the computer system displays an option to transfer the real-time communication session from the external computer system to the computer system.

Description

User interface for managing shared content sessions
Technical Field
The present disclosure relates generally to computer user interfaces, and more particularly to techniques for managing shared content sessions.
Background
The computer system may include hardware and/or software for displaying interfaces for various types of communications and information sharing.
Disclosure of Invention
Some techniques for communication and information sharing using electronic devices are often cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-powered devices.
Thus, the present technology provides faster, more efficient methods and interfaces for electronic devices to manage shared content sessions. Such methods and interfaces optionally supplement or replace other methods for managing shared content sessions. Such methods and interfaces reduce the cognitive burden on the user and result in a more efficient human-machine interface. For battery-powered computing devices, such methods and interfaces conserve power and increase the time interval between battery charges.
Example methods are described herein. An exemplary method includes, at a computer system in communication with one or more display generating components and one or more input devices: while displaying, via the one or more display generating components, a user interface for initiating a shared content session, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems; and in response to receiving the first set of one or more inputs, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems are outputting the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises: in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
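To make the conditional initiation concrete, the following is a minimal Swift sketch of the logic described above. All names (InitiationContext, SharedContentSession, and so on) are hypothetical illustrations under stated assumptions, not Apple API.

```swift
// Hypothetical sketch: a session started from asynchronous communication
// begins in a first mode with real-time communication features disabled.

enum InitiationContext {
    case asynchronousMessage   // e.g., an invitation sent in a message thread
    case realTimeCall          // e.g., an ongoing audio/video call
}

struct SharedContentSession {
    var participants: [String]
    var realTimeFeaturesEnabled: Bool
}

func initiateSharedContentSession(with participants: [String],
                                  via context: InitiationContext) -> SharedContentSession {
    switch context {
    case .asynchronousMessage:
        // Initiated via asynchronous communication: first mode,
        // real-time communication features disabled.
        return SharedContentSession(participants: participants,
                                    realTimeFeaturesEnabled: false)
    case .realTimeCall:
        // Initiated from a real-time context: features remain enabled.
        return SharedContentSession(participants: participants,
                                    realTimeFeaturesEnabled: true)
    }
}
```

The branch on the initiation context mirrors the determination recited above.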
An exemplary method includes, at a computer system in communication with one or more display generating components and one or more input devices: receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while a real-time communication session is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and after receiving the invitation to join the real-time communication session: in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
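The accept-or-forgo branch can be sketched as follows in Swift; the types and function names are assumptions for illustration only.

```swift
// Hypothetical sketch of the branch above: join only if the option to
// accept the invitation was selected; otherwise forgo joining.

struct RealTimeInvitation { let sessionID: String }

func resolve(_ invitation: RealTimeInvitation, acceptOptionSelected: Bool) {
    if acceptOptionSelected {
        // The accept option was selected: join the real-time session.
        print("Joining real-time communication session \(invitation.sessionID)")
    } else {
        // The option was not selected: forgo joining; the shared content
        // session continues without real-time communication.
        print("Forgoing real-time communication session \(invitation.sessionID)")
    }
}
```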
An exemplary method includes, at a computer system in communication with one or more display generating components and one or more input devices: while the computer system is in a communication session with an external computer system: displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option; detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system, the one or more applications being configured to provide content that can be played as synchronized content during the communication session.
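One plausible reading of the first control option's behavior is a filter over installed applications; the sketch below, with hypothetical types and names, shows that filtering step.

```swift
// Hypothetical sketch: in response to selecting the first control option,
// represent only applications whose content can be played as synchronized
// content during the session.

struct AppDescriptor {
    let name: String
    let providesSynchronizedContent: Bool
}

func applicationsToRepresent(from installed: [AppDescriptor]) -> [AppDescriptor] {
    installed.filter(\.providesSynchronizedContent)
}

let installed = [
    AppDescriptor(name: "VideoApp", providesSynchronizedContent: true),
    AppDescriptor(name: "NotesApp", providesSynchronizedContent: false),
]
print(applicationsToRepresent(from: installed).map(\.name))  // ["VideoApp"]
```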
An exemplary method includes, at a computer system in communication with one or more display generating components and one or more cameras: while the computer system is associated with a respective user account, receiving first data indicating whether a first external computer system meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and after receiving the first data, and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises: in accordance with a determination that the real-time communication session includes a live video feed, displaying a representation of a field of view of the one or more cameras.
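A hedged sketch of the first set of criteria follows; the distance unit, threshold value, and state struct are assumptions for illustration, not disclosed specifics.

```swift
// Hypothetical sketch of the transfer-offer criteria described above.

struct ExternalSystemState {
    let distance: Double          // meters from the computer system (assumed unit)
    let userAccount: String
    let inRealTimeSession: Bool   // with a second external computer system
    let sessionHasLiveVideo: Bool
}

let thresholdDistance = 3.0  // hypothetical threshold, in meters

func meetsFirstSetOfCriteria(_ external: ExternalSystemState,
                             localAccount: String) -> Bool {
    external.distance <= thresholdDistance
        && external.userAccount == localAccount  // same respective user account
        && external.inRealTimeSession            // already in a real-time session
}

func updateUI(for external: ExternalSystemState, localAccount: String) {
    guard meetsFirstSetOfCriteria(external, localAccount: localAccount) else { return }
    print("Display object selectable to join the real-time communication session")
    if external.sessionHasLiveVideo {
        // The session includes a live video feed: also show a representation
        // of this device's camera field of view.
        print("Display representation of the cameras' field of view")
    }
}
```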
Example non-transitory computer-readable storage media are described herein. An exemplary non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for: while displaying, via the one or more display generating components, a user interface for initiating a shared content session, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems; and in response to receiving the first set of one or more inputs, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems are outputting the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises: in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
An exemplary non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for: receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while a real-time communication session is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and after receiving the invitation to join the real-time communication session: in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
An exemplary non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for: while the computer system is in a communication session with an external computer system: displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option; detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system, the one or more applications being configured to provide content that can be played as synchronized content during the communication session.
An exemplary non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more cameras, the one or more programs comprising instructions for: while the computer system is associated with a respective user account, receiving first data indicating whether a first external computer system meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and after receiving the first data, and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises: in accordance with a determination that the real-time communication session includes a live video feed, displaying a representation of a field of view of the one or more cameras.
Example transitory computer-readable storage media are described herein. An exemplary transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for: while displaying, via the one or more display generating components, a user interface for initiating a shared content session, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems; and in response to receiving the first set of one or more inputs, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems are outputting the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises: in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
An exemplary transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for: receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while a real-time communication session is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and after receiving the invitation to join the real-time communication session: in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
An exemplary transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for: while the computer system is in a communication session with an external computer system: displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option; detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system, the one or more applications being configured to provide content that can be played as synchronized content during the communication session.
An exemplary transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more cameras, the one or more programs comprising instructions for: while the computer system is associated with a respective user account, receiving first data indicating whether a first external computer system meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and after receiving the first data, and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises: in accordance with a determination that the real-time communication session includes a live video feed, displaying a representation of a field of view of the one or more cameras.
An exemplary computer system is described herein. An exemplary computer system is configured to communicate with one or more display generating components and one or more input devices, and includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via the one or more display generating components, a user interface for initiating a shared content session, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems; and in response to receiving the first set of one or more inputs, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems are outputting the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises: in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
An exemplary computer system is configured to communicate with one or more display generating components and one or more input devices, and includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while a real-time communication session is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and after receiving the invitation to join the real-time communication session: in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
An exemplary computer system is configured to communicate with one or more display generating components and one or more input devices, and includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while the computer system is in a communication session with an external computer system: displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option; detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system, the one or more applications being configured to provide content that can be played as synchronized content during the communication session.
An exemplary computer system is configured to communicate with one or more display generating components and one or more cameras, and includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while the computer system is associated with a respective user account, receiving first data indicating whether a first external computer system meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and after receiving the first data, and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises: in accordance with a determination that the real-time communication session includes a live video feed, displaying a representation of a field of view of the one or more cameras.
An exemplary computer system is configured to communicate with one or more display generating components and one or more input devices, and includes: means for, while displaying, via the one or more display generating components, a user interface for initiating a shared content session, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems; and means for, in response to receiving the first set of one or more inputs, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems are outputting the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises: in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
An exemplary computer system is configured to communicate with one or more display generating components and one or more input devices, and includes means for: receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while a real-time communication session is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and after receiving the invitation to join the real-time communication session: in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
An exemplary computer system is configured to communicate with one or more display generating components and one or more input devices, and includes means for, while the computer system is in a communication session with an external computer system: displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option; detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system, the one or more applications being configured to provide content that can be played as synchronized content during the communication session.
An exemplary computer system is configured to communicate with one or more display generating components and one or more cameras, and includes means for: while the computer system is associated with a respective user account, receiving first data indicating whether a first external computer system meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and after receiving the first data, and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises: in accordance with a determination that the real-time communication session includes a live video feed, displaying a representation of a field of view of the one or more cameras.
An exemplary computer program product is described herein. An exemplary computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs including instructions for: while displaying, via the one or more display generating components, a user interface for initiating a shared content session, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems; and in response to receiving the first set of one or more inputs, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems are outputting the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises: in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
An exemplary computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs including instructions for: receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while a real-time communication session is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and after receiving the invitation to join the real-time communication session: in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
An exemplary computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs including instructions for: while the computer system is in a communication session with an external computer system: displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option; detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system, the one or more applications being configured to provide content that can be played as synchronized content during the communication session.
An exemplary computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more cameras, the one or more programs including instructions for: while the computer system is associated with a respective user account, receiving first data indicating whether a first external computer system meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and after receiving the first data, and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises: in accordance with a determination that the real-time communication session includes a live video feed, displaying a representation of a field of view of the one or more cameras.
Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Thus, faster, more efficient methods and interfaces for managing shared content sessions are provided for devices, thereby improving the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace other methods for managing shared content sessions.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
Fig. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device in accordance with some embodiments.
Fig. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 5A illustrates a personal electronic device according to some embodiments.
Fig. 5B is a block diagram illustrating a personal electronic device, according to some embodiments.
Fig. 5C illustrates an exemplary diagram of a communication session between electronic devices, according to some embodiments.
Fig. 6A-6AH illustrate exemplary user interfaces for managing a shared content session, according to some embodiments.
Fig. 7 depicts a flowchart illustrating a method for initiating a shared content session using asynchronous communications, in accordance with some embodiments.
Fig. 8 depicts a flow chart illustrating a method for managing real-time communication features of a shared content session, in accordance with some embodiments.
Fig. 9 depicts a flowchart illustrating a method for managing shared content sessions, according to some embodiments.
Fig. 10A-10N illustrate exemplary user interfaces for managing transfer of a real-time communication session, according to some embodiments.
Fig. 11 depicts a flowchart illustrating a method for managing transfer of a real-time communication session, in accordance with some embodiments.
Detailed Description
The following description sets forth exemplary methods, parameters, and the like. However, it should be recognized that such description is not intended as a limitation on the scope of the present disclosure, but is instead provided as a description of exemplary embodiments.
There is a need for electronic devices that provide efficient methods and interfaces for managing shared content sessions. Such techniques can alleviate the cognitive burden on users who access content in a shared content session, thereby enhancing productivity. Further, such techniques can reduce processor and battery power that would otherwise be wasted on redundant user inputs.
Hereinafter, Fig. 1A-1B, 2, 3, 4A-4B, and 5A-5C provide a description of exemplary devices for performing techniques for managing shared content sessions. Fig. 6A-6AH illustrate exemplary user interfaces for managing a shared content session. Fig. 7 is a flow chart illustrating a method for initiating a shared content session using asynchronous communications, according to some embodiments. Fig. 8 is a flow chart illustrating a method for managing real-time communication features of a shared content session, according to some embodiments. Fig. 9 is a flow chart illustrating a method for managing a shared content session, according to some embodiments. The user interfaces in Fig. 6A-6AH are used to illustrate the processes described below, including the processes in Fig. 7-9. Fig. 10A-10N illustrate exemplary user interfaces for managing transfer of a real-time communication session. Fig. 11 is a flow chart illustrating a method for managing transfer of a real-time communication session, according to some embodiments. The user interfaces in Fig. 10A-10N are used to illustrate the processes described below, including the process in Fig. 11.
The processes described below enhance operability of a device and make user-device interfaces more efficient (e.g., by helping a user provide appropriate input and reducing user error in operating/interacting with the device) through various techniques, including by providing improved visual feedback to the user, reducing the number of inputs required to perform an operation, providing additional control options without cluttering the user interface with additional display controls, performing an operation when a set of conditions has been met without further user input, improving privacy and/or security, and/or additional techniques. These techniques also reduce power usage and extend battery life of the device by enabling a user to use the device faster and more efficiently.
Furthermore, in methods described herein in which one or more steps are contingent upon one or more conditions having been met, it should be understood that the described method can be repeated in multiple repetitions so that, over the course of the repetitions, all of the conditions upon which steps in the method are contingent have been met in different repetitions of the method. For example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, a person of ordinary skill would appreciate that the stated steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. This, however, is not required of system or computer-readable-medium claims in which the system or computer-readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions, and thus is capable of determining whether the contingency has or has not been satisfied without explicitly repeating the steps of the method until all of the conditions upon which steps in the method are contingent have been met. A person of ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer-readable storage medium can repeat the steps of a method as many times as needed to ensure that all of the contingent steps have been performed.
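As a toy Swift illustration of this repetition argument, the following snippet (with illustrative names only) runs a method with one contingent step per iteration until, across iterations, both contingent steps have been performed:

```swift
// Illustrative only: iterate a method whose steps are contingent on a
// condition until every contingent step has run at least once.

func contingentStep(conditionMet: Bool) -> String {
    conditionMet ? "first step" : "second step"
}

var performed = Set<String>()
var condition = false
while performed.count < 2 {
    performed.insert(contingentStep(conditionMet: condition))
    condition.toggle()  // the condition is met in some iterations, not in others
}
// Both contingent steps have now been performed, in no particular order.
```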
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. In some embodiments, these terms are used to distinguish one element from another element. For example, a first touch may be named a second touch and similarly a second touch may be named a first touch without departing from the scope of the various described embodiments. In some embodiments, the first touch and the second touch are two separate references to the same touch. In some implementations, both the first touch and the second touch are touches, but they are not the same touch.
The terminology used in the description of the various illustrated embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" is optionally interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting." Similarly, depending on the context, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]."
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also includes other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are optionally used. It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with a display generation component. The display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection. In some embodiments, the display generation component is integrated with the computer system. In some embodiments, the display generation component is separate from the computer system. As used herein, "displaying" content includes causing display of the content (e.g., video data rendered or decoded by display controller 156) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content.
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications such as one or more of the following: drawing applications, presentation applications, word processing applications, website creation applications, disk editing applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, fitness support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
The various applications executing on the device optionally use at least one generic physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or changed for different applications and/or within the respective applications. In this way, the common physical architecture of the devices (such as the touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and transparent to the user.
Attention is now directed to embodiments of portable devices with touch-sensitive displays. Fig. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes called a "touch screen" for convenience and is sometimes known as or called a "touch-sensitive display system." Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input control devices 116, and external ports 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in this specification and the claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). The intensity of a contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are optionally used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of contact. Similarly, a pressure-sensitive tip of a stylus is optionally used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows user access to additional device functions that may otherwise not be accessible on a reduced-size device with limited real estate for displaying affordances and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
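For instance, the weighted-average combination of force-sensor readings mentioned above could look like the following minimal sketch; the weights, normalized units, and threshold value are illustrative assumptions, not values from this disclosure.

```swift
// Hypothetical sketch: combine several force-sensor readings into an
// estimated contact intensity and test it against an intensity threshold.

func estimatedContactForce(readings: [Double], weights: [Double]) -> Double {
    precondition(readings.count == weights.count && !readings.isEmpty)
    let weightedSum = zip(readings, weights).map { $0 * $1 }.reduce(0, +)
    return weightedSum / weights.reduce(0, +)
}

let readings: [Double] = [0.6, 0.9, 0.7]  // normalized outputs of three force sensors
let weights: [Double] = [1.0, 2.0, 1.0]   // e.g., weight sensors nearest the contact more
let intensityThreshold = 0.75             // in the same normalized units

let intensity = estimatedContactForce(readings: readings, weights: weights)
let exceeded = intensity > intensityThreshold  // gates the intensity-dependent operation
```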
As used in this specification and in the claims, the term "tactile output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component of a device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., the housing), or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other part of the user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or touchpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation, such as a "down click" or "up click," even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in the smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, many sensory perceptions of touch are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., a "down click," an "up click," "roughness"), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be understood that the device 100 is merely one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
Memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripheral interface 118 may be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs, such as computer programs (e.g., including instructions), and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data. In some embodiments, peripheral interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the Internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and other devices. RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. The speaker 111 converts the electrical signal into sound waves audible to humans. The audio circuitry 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripheral interface 118. In some embodiments, the audio circuitry 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset having both an output (e.g., a headphone for one or both ears) and an input (e.g., a microphone).
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, a depth camera controller 169, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive electrical signals from/transmit electrical signals to the other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and the like. In some implementations, the input controller 160 is optionally coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2). In some embodiments, the electronic device is a computer system that communicates (e.g., via wireless communication or via wired communication) with one or more input devices. In some implementations, the one or more input devices include a touch-sensitive surface (e.g., a touchpad as part of a touch-sensitive display). In some implementations, the one or more input devices include one or more camera sensors (e.g., one or more optical sensors 164 and/or one or more depth camera sensors 175), such as for tracking gestures (e.g., hand gestures and/or air gestures) of the user as input. In some embodiments, the one or more input devices are integrated with the computer system. In some embodiments, the one or more input devices are separate from the computer system. In some embodiments, an air gesture is a gesture that is detected without the user touching an input element that is part of the device (or independently of an input element that is part of the device) and is based on detected movement of a portion of the user's body through the air, including movement of the user's body relative to an absolute reference (e.g., an angle of the user's arm relative to the ground or a distance of the user's hand relative to the ground), movement relative to another portion of the user's body (e.g., movement of the user's hand relative to the user's shoulder, movement of one of the user's hands relative to the other hand, and/or movement of one of the user's fingers relative to another finger or part of the same hand), and/or absolute movement of a portion of the user's body (e.g., a tap gesture that includes a predetermined amount and/or speed of movement of the hand in a predetermined pose, or a shake gesture that includes a predetermined speed or amount of rotation of a portion of the user's body).
A quick press of the push button optionally disengages the lock of touch screen 112 or optionally begins a process of unlocking the device using gestures on the touch screen, as described in U.S. patent application Ser. No. 11/322,549 (now U.S. Patent No. 7,657,849), entitled "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to the device 100 on or off. The functionality of one or more of the buttons is optionally user-customizable. Touch screen 112 is used to implement virtual buttons or soft buttons and one or more soft keyboards.
The touch sensitive display 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch screen 112 and/or transmits electrical signals to touch screen 112. Touch screen 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some embodiments, some or all of the visual output optionally corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that receives input from a user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or interruption of the contact) on touch screen 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a user's finger.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or interruption thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, a projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod touch® from Apple Inc. of Cupertino, California.
The touch sensitive display in some implementations of touch screen 112 is optionally similar to the multi-touch sensitive touch pad described in the following U.S. patents: 6,323,846 (Westerman et al), 6,570,557 (Westerman et al) and/or 6,677,932 (Westerman et al) and/or U.S. patent publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, while touch sensitive touchpads do not provide visual output.
Touch-sensitive displays in some implementations of touch screen 112 are described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, "Multi-Functional Hand-Held Device," filed March 3, 2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some implementations, the touch screen has a video resolution of about 160 dpi. The user optionally uses any suitable object or appendage, such as a stylus, finger, or the like, to make contact with touch screen 112. In some embodiments, the user interface is designed to work primarily through finger-based contact and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor location or command for performing the action desired by the user.
In some embodiments, the device 100 optionally includes a touch pad for activating or deactivating a particular function in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike the touch screen, does not display visual output. The touch pad is optionally a touch sensitive surface separate from the touch screen 112 or an extension of the touch sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. The power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The device 100 optionally also includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 optionally includes a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) phototransistor. The optical sensor 164 receives light from the environment, projected through one or more lenses, and converts the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, the optical sensor is located on the back of the device 100, opposite the touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still image and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user's image is optionally obtained for video conferencing while the user views the other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 164 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor 164 is used with the touch screen display for both video conferencing and still image and/or video image acquisition.
The device 100 optionally also includes one or more depth camera sensors 175. FIG. 1A shows a depth camera sensor coupled to a depth camera controller 169 in the I/O subsystem 106. The depth camera sensor 175 receives data from the environment to create a three-dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., the depth camera sensor). In some implementations, in conjunction with the imaging module 143 (also referred to as a camera module), the depth camera sensor 175 is optionally used to determine a depth map of different portions of an image captured by the imaging module 143. In some embodiments, a depth camera sensor is located on the front of the device 100 so that the user's image with depth information is optionally acquired for a video conference while the user views other video conference participants on the touch screen display, and so that self-portraits with depth map data are captured. In some embodiments, the depth camera sensor 175 is located on the back of the device, or on both the back and the front of the device 100. In some implementations, the position of the depth camera sensor 175 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that the depth camera sensor 175 is used with the touch screen display for both video conferencing and still image and/or video image acquisition.
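For illustration, the following Swift sketch shows one plausible shape for the depth-map data described above; the DepthMap type and its fields are hypothetical simplifications for exposition, not the device's internal representation.

```swift
/// Illustrative sketch only: a depth map pairs each pixel of a captured
/// image with a distance from the camera viewpoint. Hypothetical type,
/// not the device's internal representation.
struct DepthMap {
    let width: Int
    let height: Int
    /// Per-pixel distance from the viewpoint, in meters, stored row-major.
    var distances: [Float]

    /// Returns the distance for the pixel at (x, y).
    func distance(atX x: Int, y: Int) -> Float {
        precondition(x >= 0 && x < width && y >= 0 && y < height)
        return distances[y * width + x]
    }
}
```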
The device 100 optionally also includes one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). The contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some implementations, at least one contact intensity sensor is collocated with, or adjacent to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of the device 100, opposite the touch screen display 112 located on the front of the device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is optionally coupled to the input controller 160 in the I/O subsystem 106. The proximity sensor 166 optionally performs as described in the following U.S. patent applications: no.11/241,839, entitled "Proximity Detector In Handheld Device"; no.11/240,788, entitled "Proximity Detector In Handheld Device"; no.11/620,702, entitled "Using Ambient Light Sensor To Augment Proximity Sensor Output"; no.11/586,862, entitled "Automated Response To And Sensing Of User Activity In Portable Devices"; and No.11/638,251, entitled "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor is turned off and the touch screen 112 is disabled when the multifunction device is placed near the user's ear (e.g., when the user is making a telephone call).
The device 100 optionally also includes one or more tactile output generators 167. FIG. 1A shows a tactile output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. The tactile output generator 167 optionally includes one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating components (e.g., components that convert electrical signals into tactile outputs on the device). The tactile output generator 167 receives tactile feedback generation instructions from the haptic feedback module 133 and generates tactile outputs on the device 100 that can be sensed by a user of the device 100. In some embodiments, at least one tactile output generator is collocated with, or adjacent to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and optionally generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of the device 100, opposite the touch screen display 112 located on the front of the device 100.
The device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in the following U.S. patent publications: U.S. patent publication No. 20050190059, entitled "Acceleration-based Theft Detection System for Portable Electronic Devices," and U.S. patent publication No. 20060017692, entitled "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated herein by reference in their entirety. In some implementations, information is displayed in a portrait view or a landscape view on the touch screen display based on an analysis of data received from the one or more accelerometers. The device 100 optionally includes a magnetometer and a GPS (or GLONASS or other global navigation system) receiver, in addition to the accelerometer 168, for obtaining information about the location and orientation (e.g., portrait or landscape) of the device 100.
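As a concrete illustration of inferring portrait versus landscape orientation from accelerometer data, consider the following Swift sketch using Core Motion; the update interval and the dominant-axis heuristic are illustrative assumptions, not the device's actual algorithm.

```swift
import CoreMotion

// Illustrative sketch: infer portrait vs. landscape from the gravity
// component of accelerometer samples. The heuristic and interval are
// assumptions, not the device's actual algorithm.
let motionManager = CMMotionManager()

func startOrientationUpdates() {
    guard motionManager.isAccelerometerAvailable else { return }
    motionManager.accelerometerUpdateInterval = 0.1  // seconds between samples
    motionManager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let g = data?.acceleration else { return }
        // When the device is roughly still, gravity dominates the signal;
        // a larger |x| component means the long axis is horizontal.
        let isLandscape = abs(g.x) > abs(g.y)
        print(isLandscape ? "landscape view" : "portrait view")
    }
}
```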
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application program (or instruction set) 136. Furthermore, in some embodiments, memory 102 (fig. 1A) or 370 (fig. 3) stores device/global internal state 157, as shown in fig. 1A and 3. The device/global internal state 157 includes one or more of the following: an active application state indicating which applications (if any) are currently active; display status, indicating what applications, views, or other information occupy various areas of the touch screen display 112; sensor status, including information obtained from the various sensors of the device and the input control device 116; and location information relating to the device location and/or pose.
Operating system 126 (e.g., darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware components and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124 and also includes various software components for processing data received by the RF circuitry 108 and/or the external ports 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted to be coupled directly to other devices or indirectly via a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
The contact/motion module 130 optionally detects contact with the touch screen 112 (in conjunction with the display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to contact detection, such as determining whether a contact has occurred (e.g., detecting a finger-down event), determining the intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
In some implementations, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether the user has "clicked" on an icon). In some implementations, at least a subset of the intensity thresholds are determined according to software parameters (e.g., the intensity thresholds are not determined by activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of the device 100). For example, without changing the touchpad or touch screen display hardware, the mouse "click" threshold of the touchpad or touch screen may be set to any of a wide range of predefined thresholds. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
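The following Swift sketch illustrates such a software-defined intensity threshold; the view subclass and the threshold value are illustrative assumptions, not the document's implementation.

```swift
import UIKit

// Illustrative sketch: a "click" threshold defined purely in software and
// adjustable without changing the hardware. The value 0.6 is an assumption.
class PressureSensitiveView: UIView {
    /// Fraction of the maximum possible force that counts as a click.
    var clickThreshold: CGFloat = 0.6

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.maximumPossibleForce > 0 else { return }
        let normalized = touch.force / touch.maximumPossibleForce
        if normalized >= clickThreshold {
            // The contact is treated as a click once the threshold is crossed.
            print("click recognized at normalized intensity \(normalized)")
        }
    }
}
```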
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, gestures are optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger press event, and then detecting a finger lift (lift off) event at the same location (or substantially the same location) as the finger press event (e.g., at the location of the icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then detecting a finger-up (lift-off) event.
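A minimal Swift sketch of this pattern matching follows; the distance threshold is illustrative, and the logic is a simplification of the tap and swipe definitions just described.

```swift
import UIKit

// Illustrative sketch: distinguish a tap (lift-off near the press location)
// from a swipe (press, drag, lift-off) by the contact pattern. The 10-point
// threshold is an assumption.
class GestureView: UIView {
    private var startPoint: CGPoint?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        startPoint = touches.first?.location(in: self)  // finger-down event
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let start = startPoint,
              let end = touches.first?.location(in: self) else { return }
        let distance = hypot(end.x - start.x, end.y - start.y)
        if distance < 10 {
            print("tap")  // lift-off at substantially the same location
        } else if abs(end.x - start.x) > abs(end.y - start.y) {
            print(end.x > start.x ? "swipe right" : "swipe left")
        }
        startPoint = nil
    }
}
```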
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attribute) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including but not limited to text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for designating graphics to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions used by haptic output generator 167 to generate haptic output at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services, such as weather gadgets, local page gadgets, and map/navigation gadgets).
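For illustration, a Core Location-based Swift sketch of supplying location information to other consumers follows; the class and accuracy setting are assumptions, not the GPS module's actual implementation.

```swift
import CoreLocation

// Illustrative sketch: obtain the device location and hand it to interested
// consumers (photo metadata, maps, weather). Not the GPS module's actual
// implementation.
final class LocationProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
    }

    func start() {
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }
        // Forward the coordinate to location-based consumers.
        print("lat \(latest.coordinate.latitude), lon \(latest.coordinate.longitude)")
    }
}
```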
The application 136 optionally includes the following modules (or sets of instructions) or a subset or superset thereof:
contact module 137 (sometimes referred to as an address book or contact list);
a telephone module 138;
video conferencing module 139;
email client module 140;
an Instant Messaging (IM) module 141;
a fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a video player module;
a music player module;
browser module 147;
Calendar module 148;
a gadget module 149, optionally comprising one or more of: weather gadgets 149-1, stock gadgets 149-2, calculator gadget 149-3, alarm gadget 149-4, dictionary gadget 149-5, and other gadgets obtained by the user, and user-created gadgets 149-6;
a gadget creator module 150 for forming a user-created gadget 149-6;
search module 151;
a video and music player module 152 that incorporates the video player module and the music player module;
a note module 153;
map module 154; and/or
An online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is optionally used to manage an address book or contact list (e.g., in application internal state 192 of contacts module 137 stored in memory 102 or memory 370), including: adding one or more names to the address book; deleting names from the address book; associating telephone numbers, email addresses, physical addresses, or other information with a name; associating an image with a name; sorting and ordering names; providing telephone numbers or email addresses to initiate and/or facilitate communications by telephone 138, video conferencing module 139, email 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is optionally used to input a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contact module 137, modify the entered telephone number, dial the corresponding telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As described above, wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
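As one hedged illustration of dialing an entered number, the Swift sketch below hands the number to the system via the tel: URL scheme; this is an illustrative approach, not the telephone module's internal implementation.

```swift
import UIKit

// Illustrative sketch: dial a user-entered number via the system tel: URL
// scheme. This is not the telephone module's internal implementation.
func dial(_ enteredNumber: String) {
    // Keep only characters meaningful in a dial string.
    let digits = enteredNumber.filter { "0123456789+#*".contains($0) }
    guard let url = URL(string: "tel:\(digits)"),
          UIApplication.shared.canOpenURL(url) else { return }
    UIApplication.shared.open(url)
}
```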
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured by the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, instant message module 141 includes executable instructions for: inputting a character sequence corresponding to an instant message, modifying previously inputted characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for phone-based instant messages or using XMPP, SIMPLE, or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant message optionally includes graphics, photographs, audio files, video files, and/or other attachments supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions for creating a workout (e.g., with time, distance, and/or calorie burn targets); communicate with a fitness sensor (exercise device); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for exercise; and displaying, storing and transmitting the fitness data.
In conjunction with touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or videos (including video streams) and storing them in the memory 102, modifying features of still images or videos, or deleting still images or videos from the memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, tagging, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet according to user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget modules 149 are mini-applications that are optionally downloaded and used by a user (e.g., weather gadget 149-1, stock market gadget 149-2, calculator gadget 149-3, alarm clock gadget 149-4, and dictionary gadget 149-5) or created by the user (e.g., user-created gadget 149-6). In some embodiments, a gadget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a gadget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! gadgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget creator module 150 is optionally used by a user to create gadgets (e.g., to transform user-specified portions of a web page into gadgets).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuit 110, speaker 111, RF circuit 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player such as an iPod (trademark of Apple inc.).
In conjunction with the touch screen 112, the display controller 156, the contact/motion module 130, the graphics module 132, and the text input module 134, the notes module 153 includes executable instructions for creating and managing notes, to-do lists, and the like according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally configured to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to shops and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the instant messaging module 141, rather than the email client module 140, is used to send links to particular online videos. Additional descriptions of online video applications can be found in U.S. provisional patent application Ser. No. 60/936,562, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. patent application Ser. No. 11/968,067, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, the contents of both of which are hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented in a separate software program, such as a computer program (e.g., including instructions), process, or module, and thus the various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. For example, the video player module is optionally combined with the music player module into a single module (e.g., video and music player module 152 in fig. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which the operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
A predefined set of functions performed solely by the touch screen and/or touch pad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by a user, navigates the device 100 from any user interface displayed on the device 100 to a main menu, home menu, or root menu. In such implementations, a touch pad is used to implement a "menu button". In some other embodiments, the menu buttons are physical push buttons or other physical input control devices, rather than touch pads.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 102 (FIG. 1A) or memory 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and corresponding applications 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
The event classifier 170 receives the event information and determines the application view 191 of the application 136-1 and the application 136-1 to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, the application 136-1 includes an application internal state 192 that indicates one or more current application views that are displayed on the touch-sensitive display 112 when the application is active or executing. In some embodiments, the device/global internal state 157 is used by the event classifier 170 to determine which application(s) are currently active, and the application internal state 192 is used by the event classifier 170 to determine the application view 191 to which to deliver event information.
In some implementations, the application internal state 192 includes additional information, such as one or more of the following: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information being displayed or ready to be displayed by the application 136-1, a state queue for enabling the user to return to a previous state or view of the application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on the touch sensitive display 112 as part of a multi-touch gesture). The peripheral interface 118 transmits information it receives from the I/O subsystem 106 or sensors, such as a proximity sensor 166, one or more accelerometers 168, and/or microphone 113 (via audio circuitry 110). The information received by the peripheral interface 118 from the I/O subsystem 106 includes information from the touch-sensitive display 112 or touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, the peripheral interface 118 transmits event information. In other embodiments, the peripheral interface 118 transmits event information only if there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or receiving an input exceeding a predetermined duration).
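The two delivery policies just described can be sketched as follows in Swift; the interval and threshold values, and all type names, are illustrative assumptions.

```swift
import Foundation

// Illustrative sketch of the two event-delivery policies described above:
// (1) polling at a predetermined interval, and (2) forwarding only
// "significant" inputs above a noise threshold. Names and values are
// assumptions.
final class EventMonitorSketch {
    var pollingInterval: TimeInterval = 0.1
    var noiseThreshold: Double = 0.05

    /// Policy 1: request event information at predetermined intervals.
    func startPolling(fetch: @escaping () -> Double) {
        Timer.scheduledTimer(withTimeInterval: pollingInterval, repeats: true) { [weak self] _ in
            self?.deliver(fetch())
        }
    }

    /// Policy 2: transmit event information only for significant events.
    func receive(_ value: Double) {
        guard value > noiseThreshold else { return }  // drop sub-threshold noise
        deliver(value)
    }

    private func deliver(_ value: Double) {
        print("event information: \(value)")
    }
}
```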
In some implementations, the event classifier 170 also includes a hit view determination module 172 and/or an active event identifier determination module 173.
When the touch sensitive display 112 displays more than one view, the hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view is made up of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a level of programming within the application's programming or view hierarchy. For example, the lowest horizontal view in which a touch is detected is optionally referred to as a hit view, and the set of events that are recognized as correct inputs is optionally determined based at least in part on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should process sub-events. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in a sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
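The hit-view search can be illustrated with the following Swift sketch, which walks the view hierarchy and returns the lowest view containing the touch point; the helper is hypothetical, not the hit view determination module's actual code.

```swift
import UIKit

// Illustrative sketch of hit-view determination: return the lowest view in
// the hierarchy that contains the point. Hypothetical helper, not the hit
// view determination module's actual code.
func hitView(for point: CGPoint, in view: UIView) -> UIView? {
    guard view.isUserInteractionEnabled, !view.isHidden,
          view.point(inside: point, with: nil) else { return nil }
    // Check subviews front-to-back; the deepest matching view wins.
    for subview in view.subviews.reversed() {
        let converted = view.convert(point, to: subview)
        if let hit = hitView(for: converted, in: subview) {
            return hit
        }
    }
    return view  // no subview claimed the point, so this view is the hit view
}
```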
The activity event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event identifier determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, the activity event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively engaged views, and thus determines that all actively engaged views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely localized to an area associated with one particular view, the higher view in the hierarchy will remain the actively engaged view.
The event dispatcher module 174 dispatches event information to an event recognizer (e.g., event recognizer 180). In embodiments that include an active event recognizer determination module 173, the event dispatcher module 174 delivers event information to the event recognizers determined by the active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue that is retrieved by the corresponding event receiver 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, the application 136-1 includes an event classifier 170. In yet another embodiment, the event classifier 170 is a stand-alone module or part of another module stored in the memory 102, such as the contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for processing touch events that occur within a respective view of the user interface of the application. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module that is a higher level object from which methods and other properties are inherited, such as the user interface toolkit or application 136-1. In some implementations, the respective event handlers 190 include one or more of the following: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or invokes data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of application views 191 include one or more corresponding event handlers 190. Additionally, in some implementations, one or more of the data updater 176, the object updater 177, and the GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from the event classifier 170 and identifies an event based on the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about sub-events such as touches or touch movements. The event information also includes additional information, such as the location of the sub-event, according to the sub-event. When a sub-event relates to movement of a touch, the event information optionally also includes the rate and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another orientation (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about a current orientation of the device (also referred to as a device pose).
The event comparator 184 compares the event information with predefined event or sub-event definitions and determines an event or sub-event or determines or updates the state of the event or sub-event based on the comparison. In some embodiments, event comparator 184 includes event definition 186. Event definition 186 includes definitions of events (e.g., a predefined sequence of sub-events), such as event 1 (187-1), event 2 (187-2), and others. In some implementations, sub-events in an event (e.g., 187-1 and/or 187-2) include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double click on the displayed object. For example, a double click includes a first touch on the displayed object for a predetermined length of time (touch start), a first lift-off on the displayed object for a predetermined length of time (touch end), a second touch on the displayed object for a predetermined length of time (touch start), and a second lift-off on the displayed object for a predetermined length of time (touch end). In another example, the definition of event 2 (187-2) is a drag on the displayed object. For example, dragging includes touching (or contacting) on the displayed object for a predetermined period of time, movement of the touch on the touch-sensitive display 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
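The event definitions above can be illustrated as predefined sub-event sequences compared against received input; in the following Swift sketch, all type and case names are illustrative assumptions.

```swift
// Illustrative sketch: event definitions as predefined sub-event sequences
// compared against received sub-events. All names are assumptions.
enum SubEvent { case touchBegin, touchMove, touchEnd, touchCancel }

struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

// Event 1: a double tap is two touch-begin/touch-end pairs on the object.
let doubleTap = EventDefinition(
    name: "double tap",
    sequence: [.touchBegin, .touchEnd, .touchBegin, .touchEnd])

// Event 2: a drag is a touch, movement across the display, and lift-off.
let drag = EventDefinition(
    name: "drag",
    sequence: [.touchBegin, .touchMove, .touchEnd])

/// Returns whether the received sub-event sequence matches a definition.
func matches(_ received: [SubEvent], _ definition: EventDefinition) -> Bool {
    received == definition.sequence
}
```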
In some implementations, the event definitions 186 include definitions of events for respective user interface objects. In some implementations, the event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event (187) further includes a delay action that delays delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to an event type of the event recognizer.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any of the events in the event definition 186, the respective event recognizer 180 enters an event impossible, event failed, or event end state after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively engaged event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact or are able to interact with each other. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to different levels in a view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are recognized, the respective event recognizer 180 activates an event handler 190 associated with the event. In some implementations, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating the event handler 190 is distinct from sending (and deferred sending of) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some implementations, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about the sub-event without activating the event handler. Instead, the sub-event delivery instructions deliver the event information to an event handler associated with the sub-event sequence or to an actively engaged view. Event handlers associated with the sequence of sub-events or with the actively engaged views receive the event information and perform a predetermined process.
In some embodiments, the data updater 176 creates and updates data used in the application 136-1. For example, the data updater 176 updates a telephone number used in the contact module 137 or stores a video file used in the video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, the object updater 177 creates a new user interface object or updates a portion of a user interface object. GUI updater 178 updates the GUI. For example, the GUI updater 178 prepares the display information and sends the display information to the graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, the data updater 176, the object updater 177, and the GUI updater 178 are included in a single module of the respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be appreciated that the foregoing discussion regarding event handling of user touches on a touch-sensitive display also applies to other forms of user input used to operate the multifunction device 100 with input devices, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements on a touchpad, such as taps, drags, scrolls, and the like; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally used as inputs corresponding to sub-events that define an event to be recognized.
Fig. 2 illustrates a portable multifunction device 100 having a touch screen 112, in accordance with some embodiments. The touch screen optionally displays one or more graphics within a user interface (UI) 200. In this embodiment, as well as in others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward, and/or downward), and/or a rolling of a finger (from right to left, left to right, upward, and/or downward) that has made contact with the device 100. In some implementations, or in some circumstances, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In some embodiments, the device 100 includes a touch screen 112, a menu button 204, a push button 206 for powering the device on/off and locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. Push button 206 is optionally used to turn power to the device on/off by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In an alternative embodiment, the device 100 also accepts verbal input through the microphone 113 for activating or deactivating some functions. The device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch screen 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of the device 100.
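The press-and-hold versus short-press distinction can be sketched as simple interval logic; in the Swift sketch below, the two-second interval and the function names are illustrative assumptions.

```swift
import Foundation

// Illustrative sketch: distinguish press-and-hold (toggle power) from a
// short press (lock) by a predefined time interval. The 2-second value is
// an assumption.
let predefinedHoldInterval: TimeInterval = 2.0
var pressStart: Date?

func pushButtonDown() {
    pressStart = Date()
}

func pushButtonUp() {
    guard let start = pressStart else { return }
    if Date().timeIntervalSince(start) >= predefinedHoldInterval {
        print("toggle device power")   // held past the predefined interval
    } else {
        print("lock device")           // released before the interval elapsed
    }
    pressStart = nil
}
```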
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). The device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 comprising a display 340, which is typically a touch screen display. The I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 for generating tactile outputs on the device 300 (e.g., similar to the tactile output generator 167 described above with reference to fig. 1A), and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to the contact intensity sensor 165 described above with reference to fig. 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to those stored in memory 102 of portable multifunction device 100 (fig. 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (fig. 1A) optionally does not store these modules.
Each of the above-identified elements in FIG. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing the functions described above. The above-described modules or computer programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces optionally implemented on, for example, portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
Signal strength indicators 402 for wireless communications such as cellular signals and Wi-Fi signals;
time 404;
bluetooth indicator 405;
battery status indicator 406;
tray 408 with icons for commonly used applications, such as:
an icon 416 of phone module 138, labeled "phone," the icon 416 optionally including an indicator 414 of the number of missed calls or voicemail messages;
an icon 418 of email client module 140, labeled "mail," the icon 418 optionally including an indicator 410 of the number of unread emails;
an icon 420 of browser module 147, labeled "browser"; and
an icon 422 of video and music player module 152 (also referred to as iPod (trademark of Apple Inc.) module 152), labeled "iPod"; and
icons of other applications, such as:
an icon 424 of IM module 141, labeled "messages";
an icon 426 of calendar module 148, labeled "calendar";
an icon 428 of image management module 144, labeled "photos";
an icon 430 of camera module 143, labeled "camera";
an icon 432 of online video module 155, labeled "online video";
an icon 434 of stocks widget 149-2, labeled "stocks";
an icon 436 of map module 154, labeled "maps";
an icon 438 of weather widget 149-1, labeled "weather";
an icon 440 of alarm clock widget 149-4, labeled "clock";
an icon 442 of workout support module 142, labeled "workout support";
an icon 444 of notes module 153, labeled "notes"; and
an icon 446 of a settings application or module, labeled "settings," which provides access to settings for the device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, the icon 422 of video and music player module 152 is optionally labeled "music" or "music player." Other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 of fig. 3) separate from a display 450 (e.g., touch screen display 112). The device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 359) for detecting the intensity of the contact on the touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of the device 300.
While some of the examples below will be given with reference to inputs on touch screen display 112 (where the touch sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch sensitive surface separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to the primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). According to these embodiments, the device detects contact (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at a location corresponding to a respective location on the display (e.g., 460 corresponds to 468 and 462 corresponds to 470 in fig. 4B). In this way, when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separated from the display (e.g., 450 in FIG. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462 and movement thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
Additionally, while the following examples are primarily given with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, finger swipe gestures), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click while the cursor is located over the position of the tap gesture (e.g., instead of detection of the contact, followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be appreciated that multiple computer mice are optionally used simultaneously, or that a mouse and finger contacts are optionally used simultaneously.
Fig. 5A illustrates an exemplary personal electronic device 500. The device 500 includes a body 502. In some embodiments, device 500 may include some or all of the features described with respect to devices 100 and 300 (e.g., FIGS. 1A-4B). In some embodiments, the device 500 has a touch-sensitive display 504, hereinafter referred to as a touch screen 504. Alternatively, or in addition to touch screen 504, the device 500 has a display and a touch-sensitive surface. As with devices 100 and 300, in some embodiments, touch screen 504 (or the touch-sensitive surface) optionally includes one or more intensity sensors for detecting the intensity of contacts (e.g., touches) being applied. The one or more intensity sensors of touch screen 504 (or the touch-sensitive surface) may provide output data representing the intensity of touches. The user interface of the device 500 may respond to touches based on their intensity, meaning that touches of different intensities may invoke different user interface operations on the device 500.
Exemplary techniques for detecting and processing touch intensity are found, for example, in the following related applications: International Patent Application Serial No. PCT/US2013/040061, filed May 8, 2013, titled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," published as WIPO Patent Publication No. WO/2013/169849; and International Patent Application Serial No. PCT/US2013/069483, filed November 11, 2013, titled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," published as WIPO Patent Publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 500 has one or more input mechanisms 506 and 508. The input mechanisms 506 and 508 (if included) may be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, the device 500 has one or more attachment mechanisms. Such attachment mechanisms (if included) may allow the device 500 to be attached to, for example, a hat, glasses, earrings, a necklace, a shirt, a jacket, a bracelet, a watch strap, a chain, pants, a belt, a shoe, a purse, a backpack, or the like. These attachment mechanisms allow the user to wear the device 500.
Fig. 5B depicts an exemplary personal electronic device 500. In some embodiments, the device 500 may include some or all of the components described with reference to FIGS. 1A, 1B, and 3. The device 500 has a bus 512 that operatively couples an I/O section 514 with one or more computer processors 516 and memory 518. The I/O section 514 may be connected to a display 504, which may have a touch-sensitive component 522 and optionally an intensity sensor 524 (e.g., a contact intensity sensor). In addition, the I/O section 514 may be connected to a communication unit 530 for receiving application and operating system data using Wi-Fi, Bluetooth, Near Field Communication (NFC), cellular, and/or other wireless communication technologies. The device 500 may include input mechanisms 506 and/or 508. For example, the input mechanism 506 is optionally a rotatable input device, or a depressible and rotatable input device. In some examples, the input mechanism 508 is optionally a button.
In some examples, the input mechanism 508 is optionally a microphone. Personal electronic device 500 optionally includes various sensors, such as a GPS sensor 532, an accelerometer 534, an orientation sensor 540 (e.g., compass), a gyroscope 536, a motion sensor 538, and/or combinations thereof, all of which are operatively connected to I/O section 514.
The memory 518 of the personal electronic device 500 may include one or more non-transitory computer-readable storage media for storing computer-executable instructions that, when executed by the one or more computer processors 516, may, for example, cause the computer processors to perform the techniques described below, including processes 700, 800, 900, and 1100 (FIGS. 7-9 and 11). A computer-readable storage medium may be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium may include, but is not limited to, magnetic storage devices, optical storage devices, and/or semiconductor storage devices. Examples of such storage devices include magnetic disks, optical disks based on CD, DVD, or Blu-ray technologies, and persistent solid-state memories such as flash memory and solid-state drives. The personal electronic device 500 is not limited to the components and configuration of FIG. 5B, but may include other or additional components in multiple configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object that is optionally displayed on a display screen of device 100, 300, and/or 500 (fig. 1A, 3, and 5A-5B). For example, an image (e.g., an icon), a button, and text (e.g., a hyperlink) optionally each constitute an affordance.
As used herein, the term "focus selector" refers to an input element for indicating the current portion of a user interface with which a user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when the cursor detects an input (e.g., presses an input) on a touch-sensitive surface (e.g., touch pad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) above a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted according to the detected input. In some implementations including a touch screen display (e.g., touch sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, the contact detected on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by a contact) is detected on the touch screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without a corresponding movement of the cursor or movement of contact on the touch screen display (e.g., by moving the focus from one button to another using a tab key or arrow key); in these implementations, the focus selector moves according to movement of the focus between different areas of the user interface. Regardless of the particular form that the focus selector takes, the focus selector is typically controlled by the user in order to deliver a user interface element (or contact on the touch screen display) that is interactive with the user of the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touch screen), the position of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (rather than other user interface elements shown on the device display).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, or 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by the user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
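To make the preceding definition concrete, the following is a minimal sketch of how a characteristic intensity could be computed over a sampling window and compared against two thresholds. It is illustrative only: the type names, the 0.1-second window, and the threshold handling are assumptions made for this sketch, not part of the disclosure.

```swift
import Foundation

/// Hypothetical intensity sample recorded for a contact (names are illustrative).
struct IntensitySample {
    let timestamp: TimeInterval
    let intensity: Double
}

/// Characteristic intensity computed as the mean of samples collected during
/// a predefined window relative to an event (here: the preceding 0.1 seconds).
func characteristicIntensity(of samples: [IntensitySample],
                             endingAt eventTime: TimeInterval,
                             window: TimeInterval = 0.1) -> Double {
    let recent = samples.filter { $0.timestamp > eventTime - window && $0.timestamp <= eventTime }
    guard !recent.isEmpty else { return 0 }
    return recent.map(\.intensity).reduce(0, +) / Double(recent.count)
}

enum ContactOperation { case first, second, third }

/// Dispatch one of three operations by comparing the characteristic intensity
/// against two thresholds, mirroring the example in the text above.
func operation(for characteristic: Double,
               firstThreshold: Double,
               secondThreshold: Double) -> ContactOperation {
    if characteristic <= firstThreshold { return .first }
    if characteristic <= secondThreshold { return .second }
    return .third
}
```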
Fig. 5C depicts an exemplary diagram of a communication session between electronic devices 500A, 500B, and 500C. Devices 500A, 500B, and 500C are similar to electronic device 500, and each share one or more data connections 510 (such as an internet connection, a Wi-Fi connection, a cellular connection, a short-range communication connection, and/or any other such data connection or network) with each other in order to facilitate real-time communication of audio data and/or video data between the respective devices for a period of time. In some embodiments, the exemplary communication session may include a shared data session whereby data is transferred from one or more of the electronic devices to other electronic devices to enable the concurrent output of the respective content at the electronic devices. In some embodiments, the exemplary communication session may include a video conference session whereby audio data and/or video data is transferred between devices 500A, 500B, and 500C so that users of the respective devices may communicate in real-time using the electronic devices.
In fig. 5C, device 500A represents an electronic device associated with user a. Device 500A communicates (via data connection 510) with devices 500B and 500C, devices 500B and 500C being associated with user B and user C, respectively. Device 500A includes a camera 501A for capturing video data of a communication session and a display 504A (e.g., a touch screen) for displaying content associated with the communication session. Device 500A also includes other components, such as a microphone (e.g., 113) for recording audio of the communication session and a speaker (e.g., 111) for outputting audio of the communication session.
Device 500A displays, via display 504A, a communication UI 520A, which is a user interface for facilitating a communication session (e.g., a videoconference session) between device 500B and device 500C. Communication UI 520A includes video feed 525-1A and video feed 525-2A. Video feed 525-1A is a representation of video data captured at device 500B (e.g., using camera 501B) and transmitted from device 500B to devices 500A and 500C during a communication session. Video feed 525-2A is a representation of video data captured at device 500C (e.g., using camera 501C) and transmitted from device 500C to devices 500A and 500B during a communication session.
The communication UI 520A includes a camera preview 550A, which is a representation of video data captured at the device 500A via the camera 501A. Camera preview 550A represents to user A the prospective video feed of user A that is displayed at respective devices 500B and 500C.
Communication UI 520A includes one or more controls 555A for controlling one or more aspects of the communication session. For example, control 555A may include controls for muting audio of the communication session, changing a camera view of the communication session (e.g., changing a camera used to capture video of the communication session and/or adjusting a zoom value), terminating the communication session, applying a visual effect to the camera view of the communication session, and/or activating one or more modes associated with the communication session. In some embodiments, one or more controls 555A are optionally displayed in communication UI 520A. In some embodiments, one or more controls 555A are displayed separately from the camera preview 550A. In some embodiments, one or more controls 555A are displayed overlaying at least a portion of the camera preview 550A.
In fig. 5C, device 500B represents an electronic device associated with user B, who communicates (via data connection 510) with devices 500A and 500C. Device 500B includes a camera 501B for capturing video data of a communication session and a display 504B (e.g., a touch screen) for displaying content associated with the communication session. Device 500B also includes other components such as a microphone (e.g., 113) for recording audio of the communication session and a speaker (e.g., 111) for outputting audio of the communication session.
Device 500B displays a communication UI 520B similar to communication UI 520A of device 500A via touch screen 504B. Communication UI 520B includes video feed 525-1B and video feed 525-2B. Video feed 525-1B is a representation of video data captured at device 500A (e.g., using camera 501A) and transmitted from device 500A to devices 500B and 500C during a communication session. Video feed 525-2B is a representation of video data captured at device 500C (e.g., using camera 501C) and transmitted from device 500C to devices 500A and 500B during a communication session. The communication UI 520B further includes: a camera preview 550B, which is a representation of video data captured at device 500B via camera 501B; and one or more controls 555B similar to control 555A for controlling one or more aspects of the communication session. Camera preview 550B represents to user B the intended video feed of user B displayed at respective devices 500A and 500C.
In fig. 5C, device 500C represents an electronic device associated with user C, who communicates (via data connection 510) with devices 500A and 500B. Device 500C includes a camera 501C for capturing video data of a communication session and a display 504C (e.g., a touch screen) for displaying content associated with the communication session. Device 500C also includes other components such as a microphone (e.g., 113) for recording audio of the communication session and a speaker (e.g., 111) for outputting audio of the communication session.
The device 500C displays a communication UI 520C similar to the communication UI 520A of the device 500A and the communication UI 520B of the device 500B via the touch screen 504C. Communication UI 520C includes video feed 525-1C and video feed 525-2C. Video feed 525-1C is a representation of video data captured at device 500B (e.g., using camera 501B) and transmitted from device 500B to devices 500A and 500C during a communication session. Video feed 525-2C is a representation of video data captured at device 500A (e.g., using camera 501A) and transmitted from device 500A to devices 500B and 500C during a communication session. The communication UI 520C further includes: a camera preview 550C, which is a representation of video data captured at the device 500C via the camera 501C; and one or more controls 555C similar to controls 555A and 555B for controlling one or more aspects of the communication session. Camera preview 550C represents to user C the intended video feed of user C displayed at respective devices 500A and 500B.
Although the diagram depicted in fig. 5C represents a communication session between three electronic devices, the communication session can be established between two or more electronic devices, and the number of devices participating in the communication session can change as electronic devices join or leave the session. For example, if one of the electronic devices leaves the communication session, audio data and video data from the device that stopped participating are no longer represented on the participating devices. For example, if device 500B stops participating in the communication session, there is no longer a data connection 510 between devices 500A and 500B, and no data connection 510 between devices 500C and 500B. In addition, device 500A does not include video feed 525-1A and device 500C does not include video feed 525-1C. Similarly, if a device joins the communication session, a connection is established between the joining device and the existing devices, and video data and audio data are shared among all of the devices so that each device can output data transmitted from the other devices.
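The join/leave behavior described above can be summarized by deriving both the pairwise data connections and the displayed video feeds from the current set of participants. The sketch below is an illustrative model only; the type and member names are assumptions, not part of the disclosure.

```swift
/// Illustrative model of the full-mesh session of FIG. 5C: every pair of
/// participating devices shares a data connection, and each device displays
/// one video feed per remote participant.
struct CommunicationSessionModel {
    private(set) var participants: Set<String> = []

    /// All pairwise data connections implied by the current membership.
    var dataConnections: Set<Set<String>> {
        var pairs: Set<Set<String>> = []
        for a in participants {
            for b in participants where a != b {
                pairs.insert([a, b]) // unordered pair, so each connection appears once
            }
        }
        return pairs
    }

    /// Feeds displayed at a given device: one per remote participant.
    func videoFeeds(displayedAt device: String) -> Set<String> {
        participants.subtracting([device])
    }

    mutating func join(_ device: String) { participants.insert(device) }

    /// When a device leaves, its connections and feeds disappear implicitly,
    /// because both are derived from the membership set.
    mutating func leave(_ device: String) { participants.remove(device) }
}
```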
The embodiment depicted in FIG. 5C represents a diagram of a communication session between multiple electronic devices, including the exemplary communication sessions depicted in FIGS. 6A-6AH and 10A-10N. In some embodiments, the communication sessions depicted in FIGS. 6A-6AH and 10A-10N include two or more electronic devices, even when other electronic devices participating in the communication session are not depicted in the figures.
As used herein, an "installed application" refers to a software application that has been downloaded onto an electronic device (e.g., device 100, 300, and/or 500) and is ready to be started (e.g., turned on) on the device. In some embodiments, the downloaded application becomes an installed application using an installer that extracts program portions from the downloaded software package and integrates the extracted portions with the operating system of the computer system.
As used herein, the term "open application" or "executing application" refers to a software application having retention state information (e.g., as part of device/global internal state 157 and/or application internal state 192). The open or executing application is optionally any of the following types of applications:
an active application, which is currently displayed on a display screen of the device on which the application is being used;
a background application (or background process) that is not currently shown but for which one or more processes are being processed by one or more processors; and
a suspended or dormant application that is not running but has state information stored in memory (volatile and nonvolatile, respectively) and available to resume execution of the application.
As used herein, the term "closed application" refers to a software application that does not have maintained state information (e.g., the state information of the closed application is not stored in the memory of the device). Thus, closing an application includes stopping and/or removing application processes of the application and removing state information of the application from memory of the device. Generally, when in a first application, opening a second application does not close the first application. The first application becomes a background application when the second application is displayed and the first application stops being displayed.
Attention is now directed to embodiments of a user interface ("UI") and associated processes implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
FIGS. 6A-6AH illustrate exemplary user interfaces for managing a shared content session in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 7-9.
The present disclosure describes embodiments for managing a shared content session (also referred to as a sharing session), in which respective content can be output simultaneously at multiple devices participating in the shared content session. In some embodiments, the respective content is screen-shared content. For example, the participants of the shared content session share the content of a display screen of a host device (the sharing device, or the device whose screen content is being shared) so that the participants can view, in real time, the screen content of the host device at their respective devices, including any changes to the displayed screen content. In some embodiments, the respective content is synchronized content that is output concurrently at the respective devices of the participants of the shared content session. For example, the respective devices of the participants separately access the respective content (e.g., video, movies, TV programs, songs, games, browsing experiences, and/or other interactive experiences) from a remote server and/or local storage, and the devices synchronize their output of the content such that the content is output (e.g., via an application local to each respective device) at the respective devices simultaneously. In some embodiments, the respective devices exchange information (e.g., via a server) to facilitate the synchronization. For example, the respective devices may share play state and/or playback position information for the content, and indications of local commands (e.g., play, pause, stop, fast-forward, and/or rewind), so that corresponding commands are implemented for the content being output at the other devices. Sharing the play state and/or playback position information is an efficient and effective way to synchronize the content at the respective devices, because the host device does not transmit the content itself to the respective devices, but rather transmits much smaller data packets containing the play state and/or playback position information. In addition, each respective device outputs the content at a size and quality suitable for the respective device and the device's connectivity (e.g., data connection conditions, such as data transmission and/or processing speeds), thereby providing a more customized, yet synchronized, playback experience at each of the respective devices. In some embodiments, an application (or "app") is available (e.g., downloaded and/or installed) at the respective devices to enable the devices to participate in the shared content session.
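As a concrete illustration of the play-state exchange described above, the following sketch shows what a small synchronization message and its handling might look like. The message fields, names, and latency compensation are assumptions made for illustration, not the disclosed protocol.

```swift
import Foundation

/// Hypothetical play-state packet exchanged (e.g., via a server) between
/// devices in a shared content session; far smaller than the media itself.
struct PlayStateMessage: Codable {
    enum Command: String, Codable { case play, pause, stop, fastForward, rewind }
    let contentID: String        // identifies content each device fetches itself
    let command: Command
    let playbackPosition: TimeInterval
    let sentAt: Date
}

/// A device-local player; each device plays its own locally accessed copy.
protocol LocalPlayer {
    func seek(to position: TimeInterval)
    func play()
    func pause()
}

/// Applying a remote command to the local player keeps output synchronized.
func apply(_ message: PlayStateMessage, to player: LocalPlayer) {
    // Compensate for transit delay so devices line up on the same position.
    let latency = Date().timeIntervalSince(message.sentAt)
    switch message.command {
    case .play:
        player.seek(to: message.playbackPosition + latency)
        player.play()
    case .pause, .stop:
        player.seek(to: message.playbackPosition)
        player.pause()
    case .fastForward, .rewind:
        player.seek(to: message.playbackPosition + latency)
    }
}
```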
In some embodiments, the content is displayed in a window that is optionally overlaid on another user interface (e.g., a home screen and/or an application user interface) and that is movable separately from the user interface over which the content is displayed. In some embodiments, such a window is referred to herein as a picture-in-picture window or "PiP." In some embodiments, the PiP may include shared content, such as screen-shared content and/or synchronized content. In some embodiments, a PiP may include content that is independent of the shared content session, such as a video feed from a video conference (e.g., a video call) (although, in some embodiments, such a PiP may be displayed in connection with the shared content session). In some embodiments, a PiP may be moved, resized, minimized, docked, undocked, and/or expanded in response to various inputs and/or gestures.
Unless otherwise specified, as discussed herein, the term "share/sharing/shared" is generally used to refer to a situation in which content (e.g., screen-shared content and/or synchronized content) is or can be simultaneously output (e.g., viewed and/or played) at multiple devices participating in a shared content session. Unless specifically stated otherwise, these terms do not require that content being "shared" be transferred from any particular device participating in a shared content session to any other device with which the content is being shared. In some embodiments, the content being shared in the shared content session is content that each respective device individually accesses, for example, from a remote server or another source other than one of the devices participating in the shared content session. For example, in some embodiments, when media content, such as a movie, is being played at a device that is participating in a shared content session, the movie is considered to be shared with the respective participants even though the participants are accessing (e.g., from a movie application) and playing the movie separately (but simultaneously) from other participants in the shared content session. In some embodiments, content is shared with a participant sharing a content session by transmitting image data representing content displayed on a display screen of a host device from the host device to other devices participating in the shared content session.
In some embodiments, a real-time communication session can be enabled for the shared content session via one or more audio channels and/or video channels that, when active (e.g., open), enable real-time communication for one or more participants of the shared content session. For example, when one or more audio channels are active during a shared content session, a real-time communication session is considered enabled, and participants of the shared content session can speak with each other in real time (e.g., via an audio call and/or a live audio feed) while the shared content session is ongoing and, optionally, while content (e.g., screen-shared content and/or synchronized content) is being shared via the shared content session. As another example, when one or more video channels are active (e.g., via a video-conferencing application local to the respective devices), a real-time communication session is considered enabled, and participants of the shared content session can engage in live video communication (e.g., video chat, video call, video conference, and/or live video feed) while the shared content session is ongoing and, optionally, while content is being shared via the shared content session. In some embodiments, the real-time communication session may be audio-only (e.g., an audio channel is active without a video channel), video-only (e.g., a video channel is active without an audio channel), or both audio- and video-enabled (e.g., both the audio channel and the video channel are active). In some embodiments, an audio channel or a video channel can be active while the respective audio feed or video feed is temporarily disabled or muted (e.g., the microphone providing the audio feed is muted and/or the camera providing the video feed is disabled). In some embodiments, the real-time communication session may be active (e.g., enabling real-time communication) separately from a shared content session (e.g., with no active shared content session). In some embodiments, the shared content session may be active separately from a real-time communication session (e.g., with no real-time communication enabled). Various aspects of these embodiments and further details of the shared content session and the real-time communication session are discussed below with reference to the figures.
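The channel states described above lend themselves to a compact model. The struct below is an illustrative sketch under assumed names; it simply encodes the audio-only, video-only, and combined states, and the distinction between a muted feed and an inactive channel.

```swift
/// Illustrative model of the real-time communication channel states
/// described above (names are assumptions, not disclosed terminology).
struct RealTimeCommunicationState {
    var audioChannelActive = false
    var videoChannelActive = false
    /// A feed can be muted or disabled while its channel stays active.
    var microphoneMuted = false
    var cameraDisabled = false

    /// Real-time communication is considered enabled whenever any
    /// channel is active, even if its feed is temporarily muted.
    var isEnabled: Bool { audioChannelActive || videoChannelActive }
    var isAudioOnly: Bool { audioChannelActive && !videoChannelActive }
    var isVideoOnly: Bool { videoChannelActive && !audioChannelActive }
}
```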
FIGS. 6A-6AH illustrate exemplary devices for participating in a shared content session in accordance with some embodiments. Specifically, these devices include John's device 600A (e.g., a smartphone) and Jane's device 600B (e.g., a smartphone), which are shown side-by-side in some figures to illustrate the concurrent states of the respective devices, including the user interfaces and inputs at the respective devices. John's device 600A is a computer system that includes a display 600-1A, one or more cameras 600-2A, one or more microphones 600-3A (also referred to as mic 600-3A), and one or more speakers 600-4A (e.g., similar to speaker 111). Jane's device 600B is a computer system that includes a display 600-1B, one or more cameras 600-2B, one or more microphones 600-3B (also referred to as mic 600-3B), and one or more speakers 600-4B (e.g., similar to speaker 111). John's device 600A is similar to Jane's device 600B. In the following description, reference numerals may include the letter "A" to refer to elements of John's device, the letter "B" to refer to elements of Jane's device, or no letter to refer to elements of either or both devices. For example, devices 600A and 600B may be referred to using reference numeral 600; that is, reference numeral 600 may be used herein to refer to John's device 600A, Jane's device 600B, or both. Other elements sharing a common reference number may be referenced in a similar manner. For example, displays 600-1A and 600-1B, cameras 600-2A and 600-2B, microphones 600-3A and 600-3B, and speakers 600-4A and 600-4B may be referred to using reference numerals 600-1, 600-2, 600-3, and 600-4, respectively. In some embodiments, device 600 includes one or more features of devices 100, 300, and/or 500.
In the embodiments provided herein, john's device 600A may be described as performing a set of functions associated with a shared content session, and Jane's device 600B may be described as performing a different set of functions associated with a shared content session. The description is not intended to limit the functions performed by the respective devices, but is provided to illustrate various aspects and embodiments of the shared content session. Thus, unless otherwise specified, the functions described as being performed by John's device 600A can similarly be performed by Jane's device 600B and devices of other participants in the shared content session. Thus, the functions described as being performed by Jane's device 600B can similarly be performed by John's device 600A and the devices of other participants in the shared content session, unless otherwise specified.
In FIG. 6A, John's device 600A is not currently participating in a shared content session or real-time communication, but is displaying, via display 600-1A, music interface 602, which can be used to begin playback of music at John's device 600A. John's device 600A detects an input 605-1 on affordance 604 (e.g., a tap input or other selection input on display 600-1A) and, in response, displays menu 606, as shown in FIG. 6B. Menu 606 includes options that can be selected to perform various tasks, such as selecting playback options for the music, indicating preferences for the music, and sharing the music. For example, option 606-1 can be selected to display an invitation interface for inviting users to share the music in music interface 602, and option 606-2 can be selected to display a sharing interface with various options for sharing the music with other users. In some embodiments, option 606-1 can be displayed as an option in the sharing interface that is displayed in response to selecting option 606-2.
In response to detecting input 605-2 on option 606-1, john's device 600A displays an invitation interface 608, as shown in FIG. 6C. The invitation interface 608 provides an interface for selecting one or more recipients (e.g., contacts) of an invitation to share content (e.g., "track 3" indicated by identifier 608-1) in a shared content session. In some embodiments, john's device 600A suggests (e.g., automatically suggests) various contacts to be selected as recipients of the invitation. In some embodiments, the user may manually enter the recipient by typing contact information into the recipient field 608-2 using the keyboard 610, or may select the recipient indicated by the contact options 611, which may be selected to add the corresponding contact or group of contacts to the recipient field 608-2. In FIG. 6C, the "mountain climber" group is selected as the recipient of the invitation using the keyboard 610 or by selecting the contact option 611-1. Other contacts may be selected as recipients in a similar manner. In some embodiments, john's device 600A pre-populates (e.g., automatically selects) one or more contacts into the recipient field 608-2. For example, if John's device 600A is currently in a real-time communication session (e.g., video chat) with a particular contact (e.g., user) or group of contacts, john's device displays an invitation interface 608 in which the contact (e.g., other participants of the real-time communication session) is pre-populated in the recipient field 608-2.
The invitation interface 608 also includes invitation options (video call option 612 and message option 614) that can be selected to send an invitation to the selected recipients to join the shared content session. For example, selecting video call option 612 (e.g., via input 605-3 or other selection input) causes John's device 600A to send the invitation to the selected recipients using an application on John's device (e.g., a video call application) for providing synchronous communication. In the embodiments described herein, the synchronous communication is a video call provided using a video call application, but it should be understood that the synchronous communication can be an audio call or another synchronous communication option described herein. When the shared content session is initiated via synchronous communication, real-time communication is enabled for the shared content session using the audio channel and/or video channel activated by the synchronous communication (e.g., an audio call and/or a video call). Selecting message option 614 (e.g., via input 605-4 or other selection input) causes John's device 600A to send the invitation to the selected recipients using an application on John's device for providing asynchronous communication (e.g., a messaging application). In the embodiments described herein, the asynchronous communication is a text message communication provided using a messaging application, but it should be understood that the asynchronous communication can be email or another asynchronous communication option described herein. When the shared content session is initiated via asynchronous communication, real-time communication is not enabled for (or concurrently with) the shared content session. In some embodiments, real-time communication can be enabled or disabled for (or concurrently with) a respective shared content session, as described in more detail below.
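The branching behavior just described (a synchronous invitation enables real-time communication at session start; an asynchronous invitation does not) can be sketched as follows. The enum cases, type names, and function are illustrative assumptions only.

```swift
/// Illustrative dispatch of the behavior described above: the transport used
/// to send the invitation determines whether real-time communication is
/// enabled when the shared content session starts. All names are assumed.
enum InvitationTransport {
    case videoCall   // synchronous: activates audio/video channels
    case audioCall   // synchronous: activates an audio channel
    case message     // asynchronous: e.g., a link sent in a text message
    case email       // asynchronous
}

struct SharedContentSessionStart {
    let recipients: [String]
    var realTimeCommunicationEnabled: Bool
}

func startSharedContentSession(with recipients: [String],
                               via transport: InvitationTransport) -> SharedContentSessionStart {
    switch transport {
    case .videoCall, .audioCall:
        // Synchronous invitation: the session starts with live channels open.
        return SharedContentSessionStart(recipients: recipients,
                                         realTimeCommunicationEnabled: true)
    case .message, .email:
        // Asynchronous invitation: the session starts with real-time
        // communication disabled; participants can enable it later.
        return SharedContentSessionStart(recipients: recipients,
                                         realTimeCommunicationEnabled: false)
    }
}
```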
FIGS. 6D-6H depict interfaces of various embodiments in which John's device 600A sends an invitation to join a shared content session using video call option 612. In response to detecting input 605-3, John's device 600A initiates a shared content session with the mountain-climbing group by initiating a video call with the group. As mentioned previously, when a shared content session is initiated via synchronous communication, real-time communication is enabled for the shared content session through the video call. Accordingly, John's device 600A displays video feed 618 and pill 620A, as shown in FIG. 6D. Video feed 618 represents an active video feed for the video call, and pill 620A indicates that the shared content session is active at John's device 600A. Because no other user has joined the video call, the video feed shown in FIG. 6D is a self-view of John captured using camera 600-2A, and pill 620A is displayed with a de-emphasized (e.g., gray) appearance to indicate that other users have not yet joined the shared content session provided with the video call. In the embodiment depicted in FIG. 6D, John's device 600A has started the shared content session but has not automatically started playback of the selected content (e.g., track 3). In some embodiments, however, playback of the selected content begins automatically after the shared content session is started.
In FIG. 6D, Jane's device 600B displays a notification 616, which is an incoming video call notification indicating that John is inviting Jane, a member of the mountain-climbing group, to join the video call. Notification 616 is displayed overlaying Jane's home screen 622 on device 600B. In some embodiments, a notification similar to notification 616 is displayed at the devices of other users who are also members of the mountain-climbing group. Notification 616 includes reject option 616-1 and accept option 616-2. In some embodiments, notification 616 includes an indication that the video call is associated with a shared content session. In some embodiments, the notification includes an indication of the content that has been selected for sharing.
In FIG. 6E, John's device 600A displays control region 615A in response to input 605-5 selecting pill 620A. In some embodiments, John's device 600A displays control region 615A after input 605-3 selecting video call option 612 in FIG. 6C. In such embodiments, control region 615A would be displayed in FIG. 6D in place of pill 620A, and pill 620A could be displayed when control region 615A is dismissed (e.g., via a dismissal gesture or after a predetermined amount of time has elapsed). Control region 615A provides information associated with the shared content session and includes control options for controlling operations, parameters, and/or settings of the active shared content session. Control region 615A includes status region 615-1A, which includes status information associated with the shared content session and, in some embodiments, can be selected to display additional information about the shared content session. As shown in FIG. 6E, status region 615-1A currently indicates that other members of the mountain-climbing group have not joined the shared content session. Control region 615A also includes various options that can be selected to control operations, parameters, and/or settings of the shared content session. For example, in some embodiments, message option 615-2A can be selected to view a message conversation of the group of participants invited to join the shared content session. In some embodiments, speaker option 615-3A can be selected to enable or disable audio output at John's device 600A (e.g., at speaker 600-4A). In some embodiments, speaker option 615-3A can be selected to select a different audio output device (e.g., a headset and/or a wirelessly connected audio output device) for the real-time communication enabled for the shared content session. In some embodiments, mic option 615-4A can be selected to enable or disable microphone 600-3A, thereby muting or unmuting John's audio input for the real-time communication enabled for the shared content session. In some embodiments, camera option 615-5A can be selected to enable or disable camera 600-2A, thereby enabling or disabling the video feed provided by camera 600-2A for the shared content session. In some embodiments, camera option 615-5A can be selected to select a different camera as the video source for the real-time communication enabled for the shared content session. In some embodiments, sharing option 615-6A can be selected to display a menu with various options and settings associated with the shared content session. In some embodiments, leave option 615-7A can be selected to cause John's device 600A to leave the shared content session, optionally without terminating the shared content session for the other participants of the shared content session.
In some embodiments, control region 615A has a different appearance depending on whether real-time communication has been enabled at John's device 600A for the shared content session. For example, when real-time communication is enabled for the shared content session at John's device 600A, control region 615A has a dark appearance (e.g., a black or dark gray background color), as shown in FIG. 6E, and includes speaker option 615-3A, mic option 615-4A, and camera option 615-5A. When real-time communication is not enabled for the shared content session at John's device 600A, control region 615A has a light appearance (e.g., a white background color) and includes an audio call option and a video call option (and does not include speaker option 615-3A, mic option 615-4A, and camera option 615-5A), as shown in FIG. 6K and described in more detail below. In some embodiments, the appearance of control region 615A is specific to the state of John's device 600A. Thus, the devices of other participants in the shared content session may display control regions having different appearances, depending on whether real-time communication is enabled at the respective devices, without affecting the appearance of control region 615A displayed at John's device 600A.
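A minimal sketch of this per-device appearance logic follows. The appearance values and option names are assumptions used only to illustrate the mapping; the point is that each device derives its control region from its own local state.

```swift
/// Illustrative mapping from a device's local real-time-communication state
/// to the control region appearance described above (names are assumed).
enum ControlRegionAppearance { case dark, light }

struct ControlRegionConfiguration {
    let appearance: ControlRegionAppearance
    let options: [String]
}

/// Computed independently per device: other participants' control regions
/// are unaffected by this device's state.
func controlRegionConfiguration(realTimeCommunicationEnabled: Bool) -> ControlRegionConfiguration {
    if realTimeCommunicationEnabled {
        return ControlRegionConfiguration(
            appearance: .dark,
            options: ["message", "speaker", "mic", "camera", "share", "leave"])
    } else {
        return ControlRegionConfiguration(
            appearance: .light,
            options: ["message", "audio call", "video call", "share", "leave"])
    }
}
```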
In some implementations, the appearance of each option in the control area 615A is used to indicate the status of the corresponding option. For example, the speaker option 615-3A, mic option 615-4A and the camera option 615-5A may be shaded (or otherwise visually emphasized) as shown in fig. 6E to indicate that the speaker, mic, and camera are enabled. These options may be shown as unshaded (or otherwise visually de-emphasized) to indicate that the corresponding speaker, mic, and/or camera options are disabled. Additionally, the sharing option 615-6A may be shown in a shaded or unshaded appearance to indicate the status of the content of the shared content session. For example, in some embodiments, the sharing option 615-6A is shown in a shaded (or otherwise visually emphasized) state when content is being shared via a shared content session (e.g., when screen shared content or synchronized content is being output), and the sharing option is shown in an unshaded (or otherwise visually de-emphasized) state when screen shared or synchronized content is not being output during the shared content session.
In FIG. 6E, Jane's device 600B detects input 605-6 selecting accept option 616-2 and, in response, joins the shared content session with John's device 600A and the other participants of the mountain-climbing group. Because Jane's device accepted the invitation in FIG. 6E, real-time communication is enabled for the shared content session at Jane's device 600B, and Jane's device 600B displays control region 615B (similar to control region 615A), which has a dark appearance and includes enabled speaker option 615-3B, mic option 615-4B, and camera option 615-5B, as shown in FIG. 6F. Jane's device also displays a video call interface 624 that includes video tiles 626 and 628 and video feed 630. Video tiles 626 and 628 represent video feeds of participants of the shared content session, and video feed 630 represents a self-view of Jane provided by camera 600-2B. Video tile 626 represents the video feed of Emily, who joined the video call at approximately the same time as Jane, and video tile 628 represents the video feed of John.
In FIG. 6F, John's device 600A has dismissed control region 615A (e.g., in response to an input gesture dismissing the control region or because a predetermined amount of time has elapsed) and is currently displaying pill 620A, which has turned to a green appearance to indicate that a threshold number of participants have joined the shared content session. In some embodiments, pill 620A turns green after the first participant (other than John) has joined the shared content session. John's device 600A also updates video feed 618 to display Jane's video feed, as Jane has now joined the shared content session. In some embodiments, video feed 618 displays the video feed of the most active or most recently active participant of the shared content session. John's device 600A also displays a notification 632 indicating that Jane has joined the shared content session. In some embodiments, additional notifications are displayed as additional users join the shared content session. In some embodiments, a single notification is displayed indicating that multiple participants have joined the shared content session. In some embodiments, notification 632 is not displayed.
In FIG. 6G, John's device 600A displays control region 615A in response to input 605-7 on pill 620A or in response to input 605-8 on notification 632. Status region 615-1A indicates that the shared content session with the two participants of the mountain-climbing group (Emily and Jane) is active, but that no content is currently being shared with the participants of the shared content session. John's device 600A displays a banner 634 prompting John to begin playback of the content selected for sharing (e.g., track 3). Banner 634 includes an option 634-1 that can be selected to begin playback of track 3 for the participants of the shared content session. In the embodiment depicted in FIG. 6G, John's device 600A automatically displays banner 634 and option 634-1 in response to detecting that a threshold number of participants have joined the shared content session. For example, banner 634 and option 634-1 are automatically displayed when a first participant (other than John) joins the shared content session. In some embodiments, John's device 600A displays banner 634 when the shared content session is initiated (e.g., in response to input 605-3 or other input), but does not display option 634-1 (or option 634-1 is not selectable) until a threshold number of participants have joined the shared content session or until an input is received on banner 634.
John's device 600A detects input 605-9 selecting option 634-1 and, in response, begins playing track 3 for the participants of the shared content session, as indicated by audio output 636A at John's device 600A and audio output 636B at Jane's device 600B, as shown in FIG. 6H. In some embodiments, John's device 600A can play track 3 for the participants of the shared content session in response to selection of play affordance 638 or in response to another input for playing the content. Because content (e.g., track 3) is being shared via the shared content session, sharing options 615-6A and 615-6B are shown as shaded in control regions 615A and 615B, respectively. In addition, status regions 615-1A and 615-1B are updated to indicate that track 3 is being played for the members of the mountain-climbing group. Because real-time communication is enabled for the shared content session at John's device 600A and Jane's device 600B, John and Jane are able to interact with the other participants of the shared content session, for example, by viewing each other's video feeds and talking to each other over the active audio channel.
FIGS. 6I-6AC depict interfaces of various embodiments in which John's device 600A uses asynchronous communication to send invitations to join a shared content session. In response to detecting input 605-4 selecting message option 614 in FIG. 6C, John's device 600A displays a message interface 640A that includes a message conversation between members of the mountain-climbing group. Message interface 640A includes a message conversation region 642A having messages 642-1A, 642-2A, and 642-3A that have been sent to members of the mountain-climbing group as part of the message conversation. Message interface 640A also includes a message composition field 644A and a send option 646A that can be selected to send a message that has been entered in message composition field 644A. As shown in FIG. 6I, John's device 600A displays message interface 640A with invitation link 645A automatically populating message composition field 644A in response to input 605-4 on message option 614. In some embodiments, keyboard 610 can be used to type additional text into message composition field 644A for inclusion with the link. Invitation link 645A is a link that, once sent to the mountain-climbing group, can be selected to join a shared content session with the members of the mountain-climbing group, as described in more detail below.
In response to detecting input 605-10 selecting send option 646A, John's device 600A sends link 645A in message 642-4A and automatically initiates a shared content session with the mountain-climbing group. As mentioned previously, when a shared content session is initiated via asynchronous communication, the shared content session is initiated without real-time communication being enabled for the shared content session, as shown, for example, in FIGS. 6C and 6I. In some embodiments, message 642-4A includes (optionally as part of link 645A) status information 647A that indicates a status of the shared content session. For example, in FIG. 6J, status information 647A indicates that the music "track 3" of "album" has been selected for sharing in the shared content session. In some embodiments, status information 647A is automatically updated in message 642-4A as the status of the shared content session changes.
In FIG. 6J, John's device 600A displays pill 620A, indicating that the shared content session is active at John's device 600A, and Jane's device 600B displays a message notification 648 indicating that John has sent, via the messaging application, an invitation to join the shared content session. In some embodiments, John's device 600A displays control region 615A (e.g., as depicted in FIG. 6K) in place of pill 620A. Because no other user has joined the shared content session, pill 620A is displayed with a gray appearance to indicate that other users have not yet joined the shared content session provided by the link sent in message 642-4A. In the embodiment depicted in FIG. 6J, John's device 600A has started the shared content session but has not automatically started playback of the selected content (e.g., track 3). In some embodiments, playback of the selected content begins automatically after the shared content session is started.
In some embodiments, message notification 648 includes text 648-1 indicating the sender of the invitation, the recipients of the invitation, the activity to be shared in the shared content session, the content selected for sharing in the shared content session, and/or instructions or prompts for joining the shared content session. In response to detecting input 605-12 selecting message notification 648, Jane's device 600B displays message interface 640B, as shown in FIG. 6K. Message interface 640B is similar to message interface 640A and includes a message conversation region 642B that includes message 642-4B, which includes link 645B for joining the shared content session with the mountain-climbing group, as well as status information 647B that is similar to status information 647A. In FIG. 6K, status information 647B also indicates that one person (e.g., John) is active in the shared content session.
In fig. 6K, John's device 600A displays control region 615A in response to detecting input 605-11 on pill-shaped indicator 620A in fig. 6J. Because real-time communication is not enabled for the shared content session, the control area 615A has a light appearance and includes an audio call option 615-8A and a video call option 615-9A in place of the speaker option 615-3A, mic option 615-4A, and camera option 615-5A. The audio call option 615-8A and the video call option 615-9A can be selected to enable real-time communication for the shared content session at John's device 600A. In particular, the audio call option 615-8A can be selected to initiate an audio call for the shared content session (without a live video channel), and the video call option 615-9A can be selected to initiate a video call for the shared content session (optionally including a live audio channel). The status field 615-1A indicates that no user (other than John) has joined the shared content session.
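The control area's appearance and option set track whether real-time communication is enabled. A minimal, hypothetical Swift model of that mapping follows; the option names mirror the reference numerals in the figures (message, speaker, mic, camera, audio call, video call), but the types are assumptions, not Apple's implementation.

```swift
// Hypothetical sketch: which controls a control area shows, and whether it
// uses the dark or light appearance, based on real-time communication state.
enum ControlOption {
    case message, audioCall, videoCall, speaker, mic, camera, share
}

struct ControlAreaModel {
    var realTimeCommunicationEnabled: Bool

    // Dark appearance when real-time communication is enabled; light otherwise.
    var appearanceIsDark: Bool { realTimeCommunicationEnabled }

    var visibleOptions: [ControlOption] {
        if realTimeCommunicationEnabled {
            // Media controls replace the call options.
            return [.message, .speaker, .mic, .camera, .share]
        } else {
            // Call options stand in for the speaker/mic/camera controls.
            return [.message, .audioCall, .videoCall, .share]
        }
    }
}
```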
In fig. 6L, Jane's device 600B joins the shared content session in response to detecting input 605-13 selecting link 645B. Because the shared content session was initiated via asynchronous communication (e.g., by selecting link 645B sent in a text message or via other input), Jane's device 600B joins the shared content session with real-time communication disabled, similar to John's device 600A. Thus, Jane's device 600B displays a control area 615B that has a light appearance and includes an audio call option 615-8B and a video call option 615-9B in place of the speaker option 615-3B, mic option 615-4B, and camera option 615-5B. The audio call option 615-8B and the video call option 615-9B are similar to the audio call option 615-8A and the video call option 615-9A, respectively, and can be selected to enable real-time communication for the shared content session at Jane's device 600B. The status area 615-1B (and status information 647B) indicates that two people other than Jane (e.g., John and Emily) have joined the shared content session but are not currently sharing content with the participants of the shared content session.
In fig. 6L, John's device 600A updates status area 615-1A (and status information 647A) to indicate that two people other than John (e.g., Jane and Emily) have joined the shared content session but are not currently sharing content with the participants of the shared content session. John's device 600A displays a banner 650 prompting John to begin playback of the content selected for sharing (e.g., track 3). Banner 650 is similar to banner 634 and includes an option 650-1 that can be selected to begin playback of track 3 for the participants of the shared content session. In the embodiment depicted in fig. 6L, John's device 600A automatically displays banner 650 and option 650-1 in response to detecting a threshold number of participants joining the shared content session. For example, when a first participant (other than John) joins the shared content session, banner 650 and option 650-1 are automatically displayed. In some embodiments, John's device 600A displays banner 650 when initiating the shared content session (e.g., in response to input 605-10 or other input) but does not display option 650-1 (or option 650-1 is not selectable) until a threshold number of participants join the shared content session or until input is received on banner 650. In some implementations, pill-shaped indicator 620A is displayed with a green color when a first participant (or a threshold number of participants) other than John joins the shared content session.
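The banner logic above amounts to a threshold check on the number of other participants. A hypothetical sketch in Swift: the threshold of one matches the example in the text (the banner appears when the first other participant joins), and the type names are assumptions.

```swift
// Hypothetical sketch: show the "start playback" option once enough other
// participants have joined the shared content session.
struct PlaybackPromptPolicy {
    let threshold: Int   // e.g., 1, per the example above

    func shouldShowStartOption(otherParticipantCount: Int) -> Bool {
        otherParticipantCount >= threshold
    }
}

let policy = PlaybackPromptPolicy(threshold: 1)
print(policy.shouldShowStartOption(otherParticipantCount: 0)) // false
print(policy.shouldShowStartOption(otherParticipantCount: 2)) // true
```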
John's device 600A detects input 605-14 selecting option 650-1 and, in response, begins playing track 3 for the participants of the shared content session, as indicated by audio output 636A at John's device 600A and audio output 636B at Jane's device 600B, as shown in fig. 6M. In some implementations, John's device 600A can play track 3 for the participants of the shared content session in response to selection of play affordance 638 (e.g., as shown in fig. 6H) or in response to other input to play the content. Because content (e.g., track 3) is being shared via the shared content session, the sharing options 615-6A and 615-6B are shown as shaded in control areas 615A and 615B, respectively. In addition, status areas 615-1A and 615-1B and status information 647A and 647B are updated to indicate that track 3 is being played for members of the mountain climbing group. Because real-time communication is not enabled for the shared content session at John's device 600A and Jane's device 600B, John and Jane cannot interact with other participants of the shared content session via real-time communication.
In fig. 6M, John's device 600A detects input 605-15 selecting sharing option 615-6A and, in response, displays sharing menu 655, as shown in fig. 6N. The sharing menu 655 includes various options for controlling the operation, parameters, and/or settings of the shared content session. In the embodiment depicted in fig. 6N, the sharing menu 655 includes a screen sharing option 652, an automatic playback option 654, a manual playback option 656, an end option 657, and an application area 658 that includes an application download option 662 and a list of applications 660 capable of providing content that can be shared in the shared content session. Screen sharing option 652 can be selected to begin sharing content displayed on John's device 600A. When the automatic playback option 654 is selected, as shown in fig. 6N, content that can be shared in the shared content session is automatically shared with the participants of the shared content session without prompting the user to confirm playback for the participants. When the manual playback option 656 is selected, the user is prompted to confirm playback of the content for the participants of the shared content session. The end option 657 can be selected to end playback of the content being shared in the shared content session. In some implementations, the end option 657 is not displayed when content is not being shared in the shared content session. In some embodiments, the sharing menu 655 includes options selectable to display shared content session settings. In some embodiments, the sharing menu 655 includes options selectable to display shared content session settings for a corresponding application.
The application area 658 provides the user with quick access to various applications that support sharing of content (e.g., synchronized content) in the shared content session. For example, in fig. 6N, John's device 600A includes a TV application icon 660-1, a music application icon 660-2, a short video application icon 660-3, and a partially on-screen application icon 660-4. The application icons can be selected to launch corresponding applications, as discussed in more detail below. The list of applications 660 can be scrolled (e.g., in response to scroll input 605-16 or other input) to fully display application icon 660-4 as well as other application icons provided in the list. In some embodiments, the order in which the applications and/or application icons in the list of applications 660 are listed is determined based on various criteria, such as, for example, the participants of the shared content session, historical use of the applications in shared content sessions, participant rights/privileges for viewing content in the respective applications, and/or availability of the applications at John's device 600A. The applications displayed in the list of applications 660 include applications that are downloaded and/or installed at John's device 600A and, in some embodiments, applications that are not currently downloaded and/or installed but are capable of being downloaded and/or installed at John's device 600A.
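One way to realize an ordering over those criteria is a weighted comparison. The following Swift sketch is one plausible reading of the paragraph above, not the disclosed implementation; the field names, the priority order of the criteria, and the tie-breaking rules are all assumptions.

```swift
// Hypothetical sketch: order the sharing-menu application list by
// installation state, historical use in shared sessions, and whether the
// invited participants can view the application's content.
struct ShareableApp {
    let name: String
    let isInstalled: Bool
    let sharedSessionUseCount: Int
    let participantsHaveAccess: Bool
}

func orderedForSharingMenu(_ apps: [ShareableApp]) -> [ShareableApp] {
    apps.sorted { a, b in
        // Installed apps first, then apps most used in shared sessions,
        // then apps whose content all invited participants can view.
        if a.isInstalled != b.isInstalled { return a.isInstalled }
        if a.sharedSessionUseCount != b.sharedSessionUseCount {
            return a.sharedSessionUseCount > b.sharedSessionUseCount
        }
        return a.participantsHaveAccess && !b.participantsHaveAccess
    }
}
```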
In fig. 6O, John's device 600A displays an application download interface 664 in response to detecting input 605-17 selecting application download option 662 in fig. 6N. The application download interface 664 includes a list of applications 666 that can provide content (e.g., synchronized content) that can be shared in the shared content session and that are available for acquisition by purchasing, downloading, and/or installing the corresponding applications at John's device 600A. Applications 666 represent a subset of a larger set of applications that can be acquired, where the subset is filtered from the larger set based on the applications' ability to provide content for the shared content session. As shown in fig. 6O, the applications 666 are not currently downloaded and/or installed at John's device 600A, but can be downloaded and/or installed at John's device 600A, after which they are accessible from John's home screen and/or the list of applications 660 in application area 658. For example, in response to input 605-18 selecting download icon 668-1, animation application 666-1 can be downloaded/installed at John's device 600A. In response to input 605-19 selecting download icon 668-2, game application 666-2 can be downloaded/installed at John's device 600A. Similarly, in response to input 605-20 selecting download icon 668-3, movie application 666-3 can be downloaded/installed at John's device 600A. In some embodiments, some applications need to be purchased before being downloaded, installed, and/or accessed at John's device 600A. In such embodiments, after the application is purchased, the application is downloaded/installed at John's device 600A. In some embodiments, some applications require certain privileges and/or rights to access content in the respective application. In such implementations, after the privileges and/or rights are obtained (e.g., purchased and/or granted), content in the respective application can be accessed for playback and for sharing in the shared content session.
In fig. 6P, John's device 600A displays a TV application interface 670 in response to detecting input 605-21 selecting TV application icon 660-1 in fig. 6N. The TV application interface 670 provides content, such as programs and movies, that can be shared in the shared content session. In fig. 6P, an indicator 672 indicates that the football program 674 can be shared in the shared content session. In response to input 605-22 selecting football program 674, John's device 600A displays media interface 676, as shown in fig. 6Q. John's device 600A detects input 605-23 selecting play button 678, which begins playback of the football program for the mountain climbing group, as shown in fig. 6R. Because the automatic playback option 654 is selected in fig. 6N, John's device 600A automatically begins playback of the football program for the mountain climbing group without prompting John to confirm that he wants to share the content in the shared content session. However, if the manual playback option 656 were selected in fig. 6N, John's device 600A would display a prompt in response to input 605-23, and the prompt would provide John with the option to play the football program only at John's device 600A (without starting playback of the program for the mountain climbing group) and the option to play the football program for the mountain climbing group.
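The automatic/manual distinction above reduces to a small decision function. A hypothetical Swift sketch (enum and case names are illustrative, not from the patent):

```swift
// Hypothetical sketch: what happens when the user presses play while a
// shared content session is active, depending on the playback setting.
enum PlaybackSharingMode { case automatic, manual }

enum PlayAction {
    case playForGroup   // start playback for all participants, no prompt
    case promptUser     // ask: play locally only, or play for everyone?
}

func actionForPlayRequest(mode: PlaybackSharingMode) -> PlayAction {
    switch mode {
    case .automatic: return .playForGroup
    case .manual:    return .promptUser
    }
}
```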
In fig. 6R, John's device 600A has begun playback of the football program for the mountain climbing group. When playback of the football program begins, John's device 600A displays media PiP 680A and playback control 681A in media playback interface 679. Playback control 681A presents information regarding playback of the content and provides various controls that can be selected to control playback of the content displayed in media PiP 680A. For example, the playback control 681A indicates a playback state of the content (such as paused or actively playing), indicates a playback position relative to a duration of the media content (e.g., elapsed playback time), and can be selected to scrub through the media content (e.g., move the playback position of the media content commensurate with the input). The media PiP 680A displays the media content being played at John's device 600A. The media PiP 680A can have a fixed location in an expanded or full-screen view (or occupy the entire screen except a portion of the screen designated for system status information and/or system controls), as shown in media playback interface 679, or can be displayed as a PiP that may be positioned on various user interfaces as discussed herein. In fig. 6R, media PiP 680A is shown in an expanded state while John's device 600A is in a portrait orientation. However, in some implementations, if John's device 600A is rotated to a landscape orientation while media PiP 680A is in the expanded view, media PiP 680A expands to a full-screen view or an enlarged view that is larger than the view depicted in fig. 6R. For simplicity, the displayed media representation is hereinafter referred to as media PiP 680A, which may refer to the media in either the expanded view or the PiP format, depending on the context.
As shown in fig. 6R, media PiP 680A is displaying the content of the football program, and audio 682A associated with the football program is being output at John's device 600A (e.g., using speakers 600-7A). John's device 600A also displays a control region 615A on media playback interface 679, wherein status region 615-1A has been updated to indicate that the mountain climbing group is now watching the football program. Because real-time communication is not enabled for the shared content session, the control area 615A is displayed with a light appearance and includes an audio call option 615-8A and a video call option 615-9A.
Because the football program has started for the mountain climbing group, Jane's device 600B stops playing track 3 and starts playing the football program, as shown in fig. 6R. Thus, Jane's device 600B displays media PiP 680B (similar to media PiP 680A) overlaying message interface 640B and outputs football program audio 682B using speakers 600-7B. In addition, status information 647B and status field 615-1B are updated to indicate that the mountain climbing group is now watching the football program. Playback of the football program is synchronized at John's device 600A, Jane's device 600B, and the devices of the other members of the mountain climbing group. Jane's device 600B detects input 605-24 (which is a gesture that causes the device to display home screen 622) and displays home screen 622 in response to input 605-24, as shown in fig. 6S. While home screen 622 is displayed, Jane's device 600B continues to display media PiP 680B, which is shown moving to a different location on the display and overlaying home screen 622.
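Synchronized playback of this kind can be modeled as one device broadcasting a playback command that every other device applies. The Swift sketch below is an assumption-laden simplification: the patent does not disclose a protocol, the type names are invented, and real systems would also compensate for network latency, which is omitted here.

```swift
import Foundation

// Hypothetical sketch: a broadcast playback command that each receiving
// device applies so all participants stay at the same playback position.
struct PlaybackCommand: Codable {
    let contentID: String       // e.g., an identifier for the football program
    let position: TimeInterval  // shared playback offset, in seconds
    let isPlaying: Bool
}

protocol SessionPlayer {
    func load(contentID: String)
    func seek(to position: TimeInterval)
    func setPlaying(_ playing: Bool)
}

func apply(_ command: PlaybackCommand, to player: SessionPlayer) {
    // Replaces whatever was playing (e.g., track 3) with the new content
    // at the shared position, mirroring the behavior described for fig. 6R.
    player.load(contentID: command.contentID)
    player.seek(to: command.position)
    player.setPlaying(command.isPlaying)
}
```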
Figs. 6S-6AC depict interfaces corresponding to various embodiments of inputs 605-25, 605-26, 605-27, and 605-28 depicted in fig. 6R, which illustrate selection of options in control area 615A. In response to input 605-25 selecting message option 615-2A, John's device 600A displays message interface 640A overlaying media playback interface 679. When message interface 640A is displayed as an overlay, John's device 600A undocks media PiP 680A from media playback interface 679 and displays media PiP 680A on message interface 640A, as shown in fig. 6S. John's device 600A displays the media PiP 680A at a location on the display below the recipient indication 640-1A and above the most recent messages (e.g., messages 642-3A and 642-4A) in the message conversation area 642A so that John can see the person he is messaging and the most recent (and perhaps most relevant) messages in the conversation while still continuing to view the football program in media PiP 680A. In the embodiment depicted in fig. 6S, John is able to interact with the message interface 640A by sending a message using the keyboard 610 and scrolling the messages in the message conversation region 642A (e.g., via input 605-29 or other inputs). John is also able to interact with the media PiP 680A while it is displayed on the message interface 640A. For example, in fig. 6S, John uses a pinch gesture provided via concurrent inputs 605-30A and 605-30B to resize the media PiP 680A. John can also move, minimize, expand, or otherwise interact with media PiP 680A, including tapping media PiP 680A to display controls for controlling playback of the football program (e.g., similar to playback control 681A). In fig. 6T, John's device 600A is depicted after resizing the media PiP 680A, sending the message 642-5A, and scrolling the messages in the message conversation area 642A. The overlay comprising message interface 640A can be dismissed using various gestures or inputs, including, for example, a swipe gesture or selection of a done, close, or cancel affordance (e.g., via input 605-31 or input 605-32). In some embodiments, when message interface 640A is dismissed, John's device 600A displays media playback interface 679 as shown in fig. 6T or displays media PiP 680A overlaying a different user interface (e.g., TV application interface 670, media interface 676, or a home screen interface). In some embodiments, the message interface overlay completely covers the underlying application interface on certain device types and does not cover, or only partially covers, the underlying application interface on other device types. For example, in some embodiments, the message interface overlay completely covers the underlying application interface on a device having a smaller screen size (e.g., a phone or a wearable device) and does not cover (or only partially covers) the underlying application interface on a device having a larger screen size (e.g., a tablet or a laptop).
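The device-dependent overlay behavior at the end of that paragraph maps cleanly to a small function. A hypothetical Swift sketch, assuming a coarse device classification (the categories and their grouping into "smaller" and "larger" follow the examples in the text; the type names are invented):

```swift
// Hypothetical sketch: full-screen message overlay on smaller screens,
// partial overlay on larger ones.
enum DeviceClass { case phone, wearable, tablet, laptop }

enum OverlayStyle { case fullCover, partialCover }

func messageOverlayStyle(for device: DeviceClass) -> OverlayStyle {
    switch device {
    case .phone, .wearable: return .fullCover     // smaller screens
    case .tablet, .laptop:  return .partialCover  // larger screens
    }
}
```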
Figs. 6U-6W depict interfaces of various embodiments in which John initiates an audio call by selecting audio call option 615-8A via input 605-26 in fig. 6R. In response to detecting the input 605-26 selecting the audio call option 615-8A, John's device 600A enables real-time communication for the shared content session and initiates an audio call with the participants of the mountain climbing group. Accordingly, John's device 600A changes the appearance of control area 615A to have a dark appearance, stops displaying audio call option 615-8A and video call option 615-9A, and displays speaker option 615-3A, mic option 615-4A, and camera option 615-5A. Because the audio call includes an active audio channel and no active video channel, the speaker option 615-3A and the mic option 615-4A are enabled (e.g., as indicated by the shading) and the camera option 615-5A is disabled (e.g., as indicated by the lack of shading).
Jane's device 600B receives an incoming audio call initiated by John's selection of audio call option 615-8A. As shown in fig. 6U, Jane's device 600B displays an incoming call notification 683 indicating that John has initiated an audio call for the shared content session. Incoming call notification 683 includes reject option 683-1 and accept option 683-2. The incoming call can be accepted by selecting accept option 683-2 (e.g., via input 605-34 or other selection input). The incoming call can be rejected by selecting reject option 683-1 (e.g., via input 605-33 or other selection input) or, in some embodiments, by failing to select accept option 683-2 within a predetermined amount of time. If Jane rejects the audio call, Jane's device 600B remains in the shared content session with real-time communication disabled. If Jane accepts the audio call invitation, Jane's device 600B remains in the shared content session with real-time communication enabled through an active audio channel.
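The accept/reject/timeout behavior can be summarized in a few lines. The following Swift sketch is hypothetical; in particular, the patent says only "a predetermined amount of time," so the 30-second response window here is an assumption, as are the type names.

```swift
import Foundation

// Hypothetical sketch: resolving an incoming call for the shared content
// session. Declining or timing out leaves the device in the session with
// real-time communication still disabled.
enum CallResponse { case accepted, declined, timedOut }

struct SessionState {
    var inSharedContentSession: Bool
    var realTimeCommunicationEnabled: Bool
}

struct IncomingSessionCall {
    let responseWindow: TimeInterval = 30  // assumed; "predetermined" in text

    func resolve(_ response: CallResponse, state: inout SessionState) {
        switch response {
        case .accepted:
            state.realTimeCommunicationEnabled = true
        case .declined, .timedOut:
            // Stay in the shared content session; real-time stays off.
            state.realTimeCommunicationEnabled = false
        }
        state.inSharedContentSession = true
    }
}
```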
Fig. 6V depicts an embodiment in which Jane's device 600B rejects the incoming call in response to detecting input 605-33 on reject option 683-1. Accordingly, Jane's device 600B remains in the shared content session (and continues playing the football program) with real-time communication disabled, as indicated at least by the appearance of control area 615B in fig. 6V. In some implementations, Jane can join the audio feed for the shared content session by selecting the audio call option 615-8B (e.g., via input 605-35 or other selection input). Although real-time audio is enabled at John's device 600A (as indicated at least by the appearance of control region 615A) and John's speaker and mic are enabled, Jane's device 600B has not enabled real-time communication, and therefore real-time audio is not provided from John's device 600A to Jane's device 600B. Thus, when John speaks, as indicated by speech 633-1, audio from John speaking is not output at Jane's device 600B. However, it should be appreciated that John's audio can be output at other devices that have enabled real-time communication for the shared content session. For example, when John selects audio call option 615-8A, an incoming call notification similar to notification 683 is displayed at Emily's device. If Emily accepts the incoming call, the audio channel is enabled for Emily's device, and audio from John speaking will be output at Emily's device (assuming the speaker is enabled at Emily's device).
Fig. 6W depicts an embodiment in which Jane's device 600B has enabled real-time communication in response to detecting input 605-34 selecting the accept option 683-2 in fig. 6U or in response to detecting input 605-35 selecting the audio call option 615-8B in fig. 6V. In fig. 6W, Jane's device 600B has enabled real-time audio, as indicated by the dark appearance of control area 615B and the display of speaker option 615-3B, mic option 615-4B, and camera option 615-5B. Because the audio channel is active and the video channel is inactive, the speaker option 615-3B and the mic option 615-4B are enabled and the camera option 615-5B is disabled. Because real-time audio is enabled at John's device 600A and Jane's device 600B, when John speaks (e.g., as indicated by speech 633-2), audio from John speaking is output at Jane's device, as indicated by output audio 635-1.
Figs. 6X and 6Y depict interfaces of various embodiments in which John initiates a video call by selecting video call option 615-9A via input 605-27 in fig. 6R. In response to detecting the input 605-27 selecting the video call option 615-9A, John's device 600A enables real-time communication for the shared content session and initiates a video call with the participants of the mountain climbing group. Accordingly, John's device 600A changes the appearance of control area 615A to have a dark appearance, stops displaying audio call option 615-8A and video call option 615-9A, and displays speaker option 615-3A, mic option 615-4A, and camera option 615-5A. Because the video call includes an active audio channel and an active video channel, speaker option 615-3A, mic option 615-4A, and camera option 615-5A are enabled (e.g., as indicated by the shading). John's device 600A also displays a video feed 686 that currently shows John's self-view because no other participant has joined the real-time communication session. In some implementations, John's device 600A selectively places video feed 686 so as to avoid overlapping playback control 681A and media PiP 680A. In some implementations, the video feed 686 is displayed within the media PiP 680A, for example, when the media PiP 680A is undocked from the media playback interface 679.
Jane's device 600B receives an incoming video call initiated by John's selection of video call option 615-9A. As shown in fig. 6X, Jane's device 600B displays an incoming call notification 684 indicating that John has initiated a video call for the shared content session. Incoming call notification 684 is similar to notification 616 and includes reject option 684-1 and accept option 684-2. The incoming video call can be accepted by selecting accept option 684-2 (e.g., via input 605-36 or other selection input). The incoming video call can be rejected by selecting reject option 684-1 (e.g., via input 605-37 or other selection input) or, in some implementations, by failing to select accept option 684-2 within a predetermined amount of time. If Jane rejects the video call, Jane's device 600B remains in the shared content session with real-time communication disabled. If Jane accepts the video call invitation, Jane's device 600B remains in the shared content session with real-time communication enabled through an active video channel and, optionally, an active audio channel.
Fig. 6Y depicts an embodiment in which Jane's device 600B has enabled real-time communication in response to detecting input 605-36 selecting accept option 684-2 in fig. 6X. In fig. 6Y, Jane's device 600B has enabled real-time audio and real-time video, as indicated by the dark appearance of control area 615B, the display of speaker option 615-3B, mic option 615-4B, and camera option 615-5B, and the display of video feed 685. Because the audio channel and the video channel are active, the speaker option 615-3B, mic option 615-4B, and camera option 615-5B are enabled. Because real-time audio is enabled at John's device 600A and at Jane's device 600B, when John speaks (e.g., as indicated by speech 633-3), audio from John speaking is output at Jane's device, as indicated by output audio 635-2. Because real-time video is enabled at both devices (and both device cameras are enabled), Jane's device 600B displays video feed 685 showing John's video feed, and John's device 600A displays Jane's video feed in video feed 686. As the football program continues to play with real-time communication enabled, both devices continue to display media PiP 680, allowing participants of the shared content session to interact with each other as they watch the football program.
Figs. 6Z-6AC depict interfaces of various embodiments in which John shares his screen with the participants of the shared content session. In response to detecting input 605-28 selecting the sharing option 615-6A in fig. 6R, John's device 600A displays sharing menu 655, as shown in fig. 6Z. John's device 600A detects input 605-38 selecting screen sharing option 652 and, in response, enables real-time communication for the shared content session and initiates an audio call with the participants of the mountain climbing group. Accordingly, John's device 600A changes the appearance of control area 615A to have a dark appearance, stops displaying audio call option 615-8A and video call option 615-9A, and displays speaker option 615-3A, mic option 615-4A, and camera option 615-5A. Because the audio call includes an active audio channel and no active video channel, the speaker option 615-3A and the mic option 615-4A are enabled (e.g., as indicated by the shading) and the camera option 615-5A is disabled (as indicated by the lack of shading). John's device 600A also temporarily displays a countdown 687 in place of the sharing option 615-6A, which provides a countdown of the amount of time remaining until John's device 600A begins to share the content of display 600-1A (also referred to herein as screen-sharing content) in the shared content session.
The screen-sharing content replaces playback of other content currently being shared in the shared content session. Thus, if a participant of the shared content session accepts the invitation to view John's screen-sharing content, the screen-sharing content replaces playback of the football program at the participant's device. If the participant declines the invitation to view John's screen-sharing content, the participant's device remains in the shared content session but no longer outputs the content shared in the shared content session, which is now the screen-sharing content.
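That replacement rule can be sketched in a few lines of Swift. This is a hypothetical model only; the enum, field names, and accept/decline plumbing are assumptions chosen to mirror the two outcomes in the paragraph above.

```swift
// Hypothetical sketch: screen sharing replaces the currently shared item
// for participants who accept; decliners stay in the session with no
// shared output.
enum SharedItem: Equatable {
    case media(String)            // e.g., .media("football-program")
    case screenShare(host: String)
}

struct ParticipantView {
    var current: SharedItem?      // nil = in session, nothing playing

    mutating func handleScreenShare(from host: String, accepted: Bool) {
        if accepted {
            current = .screenShare(host: host)  // replaces the media item
        } else {
            current = nil  // remains in the session without shared output
        }
    }
}

var jane = ParticipantView(current: .media("football-program"))
jane.handleScreenShare(from: "John", accepted: false)
print(jane.current == nil)  // true: in session, nothing playing
```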
Jane's device 600B receives an incoming audio call initiated by John's selection of screen sharing option 652. The audio call is provided in connection with John's sharing of his screen and serves as an invitation to view John's screen-sharing content. As shown in fig. 6AA, Jane's device 600B displays an incoming call notification 688 (which is similar to notification 683) indicating that John is inviting the mountain climbing group to view his shared screen. Incoming call notification 688 includes reject option 688-1 and accept option 688-2. The invitation to view John's screen can be accepted by selecting accept option 688-2 (e.g., via input 605-40 or other selection input) and can be rejected by selecting reject option 688-1 (e.g., via input 605-39 or other selection input) or, in some embodiments, by failing to select accept option 688-2 within a predetermined amount of time. If Jane rejects the invitation, Jane's device 600B remains in the shared content session with real-time communication disabled and stops displaying media PiP 680B. If Jane accepts the invitation, Jane's device 600B remains in the shared content session, enables real-time communication through an active audio channel, and displays a PiP with the screen-sharing content from John's device 600A.
In fig. 6AB, John's device 600A displays a home screen 621 in response to detecting input 605-41, which is a gesture that causes device 600A to display home screen 621. After expiration of countdown 687, John's device 600A begins sharing the content of display 600-1A, which is currently home screen 621. In some embodiments, the control region 615A is not included as part of the screen-sharing content. John's device 600A updates the status field 615-1A to indicate that John is sharing the content of his screen with the mountain climbing group.
In fig. 6AB, Jane's device 600B declines the invitation to view John's screen-sharing content in response to detecting input 605-39 on reject option 688-1 in fig. 6AA. Jane's device 600B displays a notification 689 informing Jane that John is sharing his screen with the participants of the shared content session and prompting Jane to join to view the screen-sharing content. Because Jane declined the invitation to view John's screen-sharing content, Jane's device 600B does not enable real-time communication and remains in the shared content session without displaying John's screen-sharing content or media PiP 680B (which was previously used to display the football program). Further, because Jane's device 600B remains in the shared content session, Jane can replace the shared content (e.g., currently John's screen-sharing content) with other content. For example, Jane can resume playback of the football program (or select other content for playback), which replaces John's screen-sharing content with the football program, thereby switching the participants of the shared content session from viewing John's screen-sharing content to viewing the football program.
In fig. 6AC, Jane's device 600B joins viewing John's screen-sharing content in response to input 605-40 selecting accept option 688-2 or in response to input 605-42 selecting notification 689. In some embodiments, Jane's device 600B can select audio call option 615-8B (or video call option 615-9B) to enable real-time communication and view John's screen-sharing content. In the embodiment depicted in fig. 6AC, Jane's device 600B enables real-time communication through an active audio channel and displays screen share PiP 690 with the screen-sharing content shared from John's device 600A, including representation 621' of John's home screen 621. Jane's device 600B updates control area 615B based on enabling the real-time communication session, including updating status area 615-1B to indicate that she is viewing John's screen in the shared content session.
In the above embodiments, screen-sharing content is provided in connection with an audio call. However, it should be understood that, in some implementations, screen-sharing content can be provided in connection with a video call. In such implementations, the real-time communication session can include a video feed that is shared with the participants of the shared content session while John's device shares the content of display 600-1A.
Figs. 6AD-6AH depict interfaces of various embodiments in which John ends playback of the content of the shared content session and Jane's device displays information about the shared content session. In fig. 6AD, the mountain climbing group is watching the football program in the shared content session. John's device 600A views the program in the shared content session with real-time communication enabled, and Jane's device 600B views the program in the shared content session with real-time communication disabled. John's device detects input 605-43 selecting the sharing option 615-6A, and Jane's device detects input 605-44 selecting the status area 615-1B of the control area 615B.
In fig. 6AE, John's device 600A displays sharing menu 655 and detects input 605-45 selecting end option 657, which can be selected to end playback of the content being shared in the shared content session. In response to detecting input 605-44, Jane's device 600B displays a group card interface 691 that provides information about the shared content session. As shown in fig. 6AE, group card interface 691 includes a list 692 of invited participants and status information for the respective participants. The status information includes an indication of the participant's status with respect to the shared content session and an indication of whether the user has enabled real-time communication for the shared content session. For example, list 692 includes John 692-1 with status information 693-1 showing that John is viewing content in the shared content session and an indicator 694-1 indicating that John has enabled real-time communication. In some implementations, the indicator 694-1 indicates that John joined the shared content session via synchronous communication (e.g., a video call). The list 692 also includes Jane 692-2 with status information 693-2 showing that Jane is viewing content in the shared content session and an indicator 694-2 indicating that Jane has not enabled real-time communication. In some embodiments, indicator 694-2 indicates that Jane joined the shared content session via asynchronous communication (e.g., a message invitation). List 692 also includes a third participant 692-3 with status information 693-3 showing that this participant has been invited to join the shared content session and has not yet accepted the invitation. The group card interface 691 also includes content information 695 that indicates, for example, content that is currently being shared, has previously been shared, or is suggested for sharing in a shared content session with the mountain climbing group. In some implementations, the content information 695 includes options, such as option 695-1, that can be selected to begin playback of content in the shared content session. In some embodiments, the group card interface 691 has a different appearance depending on whether the device displaying the group card interface 691 has enabled real-time communication. For example, in fig. 6AE, group card interface 691 is shown with a light appearance because Jane's device 600B has not enabled real-time communication; however, if Jane's device 600B enables real-time communication, the group card interface 691 is displayed with a dark appearance. In some implementations, the group card interface 691 includes options (e.g., similar to the audio call option 615-8 and/or the video call option 615-9) that can be selected to enable or disable real-time communication for the shared content session.
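The per-participant status shown in the group card can be modeled as a small value type. A hypothetical Swift sketch (field names are assumptions; the two flags correspond to the status text 693-x and the real-time indicators 694-x in the figures):

```swift
// Hypothetical sketch: one row of the group card's participant list.
struct ParticipantStatus {
    enum Activity { case invited, joined, viewingContent }

    let name: String
    let activity: Activity       // status information 693-x
    let realTimeEnabled: Bool    // indicator 694-x
}

let groupCard: [ParticipantStatus] = [
    .init(name: "John", activity: .viewingContent, realTimeEnabled: true),
    .init(name: "Jane", activity: .viewingContent, realTimeEnabled: false),
]
```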
In response to detecting the input 605-45 selecting the end option 657, John's device 600A terminates the real-time communication session at John's device 600A while remaining in the shared content session and continuing to play the shared content (e.g., the football program). John's device 600A also displays a continuation notification 696 and, optionally, stops displaying control area 615A, as shown in fig. 6AF. Continuation notification 696 asks John whether he wants to continue playing the shared content. Continuation notification 696 includes continue option 696-1 and end session option 696-2. The continue option 696-1 can be selected to remain in the shared content session and continue the shared activity (without enabling real-time communication). The end session option 696-2 can be selected to end playback of the shared content. In some implementations, the option to remain in the shared content session with real-time communication disabled (e.g., via the end option 657 and/or the continuation notification 696) is available only if John initiated the shared content session via asynchronous communication. When John's device 600A terminates its real-time communication session, Jane's device 600B updates John's status in the group card interface 691 to indicate that John has not enabled real-time communication, as shown by indicator 694-1. In some implementations, John can re-enable real-time communication for the shared content session by selecting the link 645A in the message 642-4A or by selecting the audio call option 615-8A or the video call option 615-9A.
In fig. 6AG, John's device 600A displays a confirmation interface 697 in response to detecting the input 605-47 selecting the end session option 696-2 in fig. 6AF. Confirmation interface 697 includes option 697-1 and option 697-2. Option 697-1 can be selected to end playback of the shared content (e.g., the football program) for John's device 600A without ending playback of the content for the other participants of the shared content session. Option 697-2 can be selected to end playback of the shared content for John's device 600A and for the other participants of the shared content session.
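The two confirmation choices reduce to a scope decision. A hypothetical Swift sketch (enum and function names are illustrative):

```swift
// Hypothetical sketch: which devices stop playback when ending shared
// content, per options 697-1 ("for me") and 697-2 ("for everyone").
enum EndPlaybackScope { case thisDeviceOnly, allParticipants }

func devicesToStop(scope: EndPlaybackScope,
                   localDevice: String,
                   allDevices: [String]) -> [String] {
    switch scope {
    case .thisDeviceOnly:  return [localDevice]   // option 697-1
    case .allParticipants: return allDevices      // option 697-2
    }
}
```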
In fig. 6AH, John's device 600A continues playing the football program in media PiP 680A with real-time communication disabled, in response to detecting input 605-46 selecting continue option 696-1 in fig. 6AF. Because real-time communication is disabled, John's device 600A displays control area 615A having a light appearance and including audio call option 615-8A and video call option 615-9A.
Fig. 7 is a flow chart illustrating a method for initiating a shared content session using asynchronous communication at a computer system, in accordance with some embodiments. The method 700 is performed at a computer system (e.g., 100, 300, 500, and/or 600) (e.g., a smart phone, a tablet, a desktop computer, a laptop computer, and/or a head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) that is in communication with (e.g., includes and/or is connected to) one or more display generating components (e.g., 600-1) (e.g., a display controller, a touch-sensitive display system, a speaker, a bone-conduction audio output device, a haptic output generator, a projector, a holographic display, and/or a head-mounted display system) and one or more input devices (e.g., 600-1, 600-2, and/or 600-3) (e.g., a touch-sensitive surface, a keyboard, a mouse, a touch pad, a microphone, one or more optical sensors for detecting gestures, one or more capacitive sensors for detecting hover inputs, and/or an accelerometer/gyroscope/inertial measurement unit). Some operations in method 700 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 700 provides an intuitive way for initiating a shared content session using asynchronous communication. The method reduces the cognitive burden on a user for managing shared content sessions, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to initiate shared content sessions using asynchronous communication faster and more efficiently conserves power and increases the time between battery charges.
In method 700, while displaying, via the one or more display generating components (e.g., 600-1) (e.g., a display controller, a touch-sensitive display system, a speaker, a bone-conduction audio output device, a haptic output generator, a projector, a holographic display, and/or a head-mounted display system), a user interface (e.g., 606, 608, and/or 640) for initiating a shared content session with one or more external computer systems (e.g., 600B) (e.g., computer systems associated with remote users (e.g., operated by and/or logged into user accounts associated with the remote users)) (e.g., a user interface that includes one or more options for selecting participants to join the shared content session and/or for sending an invitation to join the shared content session via synchronous or asynchronous communication, and/or an invitation user interface having a menu of one or more options for initiating the shared content session), the computer system (e.g., 600A) (e.g., a smart phone, tablet, desktop computer, laptop computer, and/or head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) receives (702), via the one or more input devices (e.g., 600-1, 600-2, and/or 600-3) (e.g., a touch-sensitive surface, a keyboard, a mouse, a touch pad, a microphone, one or more optical sensors for detecting gestures, one or more capacitive sensors for detecting hover inputs, and/or an accelerometer/gyroscope/inertial measurement unit), a first set of one or more inputs (e.g., 605-1, 605-2, 605-3, 605-4, and/or 605-10) corresponding to a request to initiate a shared content session with the one or more external computer systems (e.g., 600B).
In response to receiving the first set of one or more inputs corresponding to the request to initiate a shared content session with the one or more external computer systems, the computer system (e.g., 600A) initiates (704) a shared content session (e.g., creates a shared content session, activates a shared content session, and/or generates a link for a shared content session) with the one or more external computer systems (e.g., 600B), wherein the shared content session, when active, enables the computer system to output respective content (e.g., synchronized content (e.g., audio and/or video data synchronously output at the computer system and the external computer system) and/or screen-sharing content (e.g., image data generated by a device (e.g., the computer system or the external computer system) that provides a real-time representation of image or video content currently displayed at the device)) while the respective content is being output by the one or more external computer systems. In some implementations, during the shared content session, the respective content is output simultaneously at both the computer system (e.g., 600A) and the external computer system (e.g., 600B). In some embodiments, the respective content is screen-sharing content (e.g., 690) from the computer system (e.g., content displayed on a display of the computer system) that is transmitted to the external computer system such that both computer systems simultaneously output the screen-sharing content from the computer system. In some embodiments, the respective content is screen-sharing content from the external computer system (e.g., content displayed on a display of the external computer system) that is transmitted to the computer system such that both computer systems simultaneously output the screen-sharing content from the external computer system. In some embodiments, the respective content is synchronized content (e.g., 636 and/or 680) output at the computer system and the external computer system. In some embodiments, the computer system and the external computer system each separately access the respective content (e.g., a video, movie, TV program, or song) from a remote server and are synchronized in their respective outputs of the respective content such that the content is output at both computer systems (e.g., via applications local to the respective computer systems) while each computer system separately accesses the respective content from the remote server. In some embodiments, the computer system and the external computer system access the respective content (e.g., synchronized content) in response to a selection, received at the computer system or at the external computer system, requesting output of the respective content.
Initiating the shared content session with the one or more external computer systems (e.g., 600B) by the computer system (e.g., 600A) includes initiating (706) the shared content session in a first mode (e.g., as shown in figs. 6J, 6K, and/or 6L) in accordance with a determination that the shared content session is initiated via asynchronous communication (e.g., 642-4A and/or 642-4B) (e.g., a message sent to the external computer system using a messaging application operating at the computer system and/or an email sent to the external computer system using an email application operating at the computer system), wherein a set of real-time communication features (e.g., live camera feeds and/or live audio feeds between the computer system and the external computer system) is disabled (e.g., turned off, initially disabled, and/or temporarily disabled) for the shared content session. In accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features is disabled for the shared content session enables the computer system to initiate the shared content session with real-time communication disabled based on a user-selected communication style or preference, without requiring the user to provide additional input to indicate the preference and disable the real-time communication features. In some implementations, initiating the shared content session includes sending an invitation to join the shared content session (e.g., 645B) to the external computer system via asynchronous communication (e.g., 642-4B).
In some embodiments, the computer system (e.g., 600A) initiating a shared content session with one or more external computer systems (e.g., 600B) includes initiating the shared content session in a second mode in which a set of real-time communication features is enabled for the shared content session in accordance with a determination that the shared content session is initiated via (e.g., using and/or in conjunction with) synchronous communication (e.g., as shown in fig. 6C and/or fig. 6D) (e.g., the shared content session is initiated via real-time communication such as a video call, video chat, and/or audio call, and/or a real-time communication application such as a video call application, video chat application, and/or audio call application). In accordance with a determination that the shared content session is initiated via synchronous communication, initiating the shared content session in a second mode in which a set of real-time communication features are enabled for the shared content session enables the computer system to initiate the shared content session with real-time communication enabled based on a communication style or preference selected by the user without requiring the user to provide additional input to indicate the preference and enable the real-time communication control. In some embodiments, initiating the shared content session includes sending an invitation (e.g., 616, 683, 684, and/or 688) to the external computer system to join the shared content session via the synchronous communication.
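The branch at (706), together with the second-mode behavior just described, is a single decision on the initiating communication type. A hypothetical Swift sketch of that branch (mode and type names are invented labels for the "first mode" and "second mode" of the claims):

```swift
// Hypothetical sketch of the mode selection in method 700: asynchronous
// initiation yields the first mode (real-time features disabled);
// synchronous initiation yields the second mode (enabled).
enum CommunicationType { case asynchronous, synchronous }

enum SessionMode {
    case firstModeRealTimeDisabled
    case secondModeRealTimeEnabled
}

func initiateSharedContentSession(via type: CommunicationType) -> SessionMode {
    switch type {
    case .asynchronous: return .firstModeRealTimeDisabled
    case .synchronous:  return .secondModeRealTimeEnabled
    }
}
```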
In some embodiments, displaying the user interface (e.g., 608) for initiating the shared content session with the external computer system (e.g., 600B) includes displaying (e.g., concurrently displaying) a first option (e.g., 614 and/or 646) (e.g., a message option and/or an email option) selectable (e.g., via one or more inputs (e.g., 605-4 and/or 605-10)) to initiate the shared content session via asynchronous communication, and a second option (e.g., 612) (e.g., a video call option and/or an audio call option) selectable to initiate the shared content session via synchronous communication (e.g., a real-time communication session and/or a communication session in which a live audio feed and/or a live video feed is communicated with the external computer system). Displaying a user interface for initiating a shared content session (including a first option selectable to initiate the shared content session via asynchronous communication and a second option selectable to initiate the shared content session via a real-time communication session) reduces the number of inputs required to initiate the shared content session with the real-time communication features enabled or disabled by providing controls for enabling or disabling real-time communication when initiating the shared content session via a particular communication style.
In some embodiments (e.g., as part of initiating the shared content session with the one or more external computer systems (e.g., 600B)), in response to receiving the first set of one or more inputs corresponding to the request to initiate a shared content session with the one or more external computer systems, and in accordance with a determination that the first set of one or more inputs includes selection (e.g., 605-4) of the first option (e.g., 614) (e.g., a message option and/or an email option), the computer system (e.g., 600A) displays a message composition user interface (e.g., 640A and/or 644A) (e.g., a messaging application interface) via the one or more display generating components (e.g., 600-1A). Displaying a message composition user interface in accordance with a determination that the first set of one or more inputs includes selection of the first option causes the device to automatically display a user interface that enables the user to communicate an invitation to join the shared content session, and to provide additional communications to the one or more external computer systems, without displaying additional controls that require additional inputs to display the message composition user interface. In some embodiments, the message composition user interface is displayed as part of initiating the shared content session via asynchronous communication. In some embodiments, the message composition user interface includes a user interface (e.g., 640A) for a messaging application operating on the computer system. In some embodiments, the messaging application interface includes a message composition area (e.g., 644A) that includes a displayed invitation (e.g., 645A) that can be sent via the messaging application to one or more recipients (e.g., 600B) to join the shared content session via asynchronous communication (e.g., an invitation sent via the messaging application). In some embodiments, initiating the shared content session with the one or more external computer systems includes initiating the shared content session via a real-time communication session (e.g., using synchronous communication) in accordance with a determination that the first set of one or more inputs includes selection (e.g., 605-3) of the second option (e.g., 612).
In some embodiments, the computer system (e.g., 600A) initiating the shared content session with the one or more external computer systems (e.g., 600B) includes initiating playback of respective content (e.g., 636, 680, and/or 682) in the shared content session (e.g., outputting the respective content at the computer system and the external computer systems that have joined the shared content session) in accordance with a determination that a number of the one or more external computer systems that have joined the shared content session meets (e.g., is equal to or greater than) a threshold number (e.g., a non-zero number of external computer systems (such as one, two, three, five, or ten) and/or a non-zero percentage of the invited external computer systems (such as 5%, 10%, 25%, 50%, 75%, 90%, 95%, or 100%)). In some embodiments, the computer system initiating the shared content session with the one or more external computer systems includes, in accordance with a determination that the number of the one or more external computer systems that have joined the shared content session does not meet (e.g., is less than) the threshold number, forgoing initiating playback of the respective content (e.g., forgoing outputting the respective content at the computer system and the external computer systems until the threshold number of external computer systems have joined the shared content session). Initiating playback of the respective content in the shared content session in accordance with a determination that at least a threshold number of the one or more external computer systems have joined the shared content session, and forgoing initiating playback of the respective content in accordance with a determination that fewer than the threshold number have joined, improves the user experience of the shared content session by delaying the start of the respective content until multiple participants have joined the shared content session.
In some embodiments, after initiating the shared content session (e.g., while the shared content session is active) and while displaying a graphical object (e.g., 620A) (e.g., an icon, affordance, and/or graphical element) having a first display state (e.g., as shown in fig. 6D) (e.g., a first color, such as gray or purple, and/or a de-emphasized or obscured appearance represented by hatching or shading of the graphical object), in accordance with a determination (or in response to a determination) that a threshold number (e.g., a non-zero number of external computer systems (such as one, two, three, five, or ten) and/or a non-zero percentage of the invited external computer systems (such as 5%, 10%, 25%, 50%, 75%, 90%, 95%, or 100%)) of the one or more external computer systems have joined the shared content session, the computer system displays the graphical object (e.g., 620A) having a second display state (e.g., a second color, such as green or blue, and/or an emphasized or unobscured appearance represented by the lack of hatching or shading of the graphical object, as shown in fig. 6F). Displaying the graphical object having a second display state that is different from the first display state in accordance with a determination that a threshold number of the one or more external computer systems have joined the shared content session provides feedback regarding the state of the computer system (e.g., the state in which multiple participants have been detected as having joined the shared content session). In some embodiments, the computer system changes the graphical object from the first display state to the second display state when the first user joins the shared content session.
In some embodiments, after initiating the shared content session (and before any external computer system has joined the shared content session), the computer system (e.g., 600A) detects a first external computer system (e.g., 600B) joining the shared content session (before any other external computer system has joined the shared content session). In some embodiments, in response to detecting the first external computer system joining the shared content session, the computer system displays, via the one or more display generating components (e.g., 600-1A), a notification (e.g., 634-1, 650, and/or 650-1) (e.g., a banner, text, and/or graphical alert) selectable (e.g., via one or more inputs (e.g., 605-9 or 605-14)) to enable playback of content (e.g., the respective content) for the shared content session. Displaying, in response to detecting that the first external computer system has joined the shared content session, a notification that can be selected to enable playback of content for the shared content session reduces the number of inputs required to begin playback of content for the participants of the shared content session by automatically displaying a control for starting playback of content when a participant has joined the shared content session. In some implementations, the computer system begins playback of content for the shared content session in response to (e.g., immediately in response to) detecting selection of the notification. In some embodiments, the computer system displays an option (e.g., 634-1 or 650-1) that can be selected to begin playback of the content in response to detecting selection of the notification (e.g., 634 or 650).
In some embodiments, the request to initiate a shared content session with the one or more external computer systems (e.g., 600B) is associated with first content (e.g., "track 3") (e.g., the respective content and/or content selected for the shared content session), and the notification (e.g., 634 or 650) includes a first option (e.g., 634-1 or 650-1) (e.g., a "start" affordance or an "open" affordance) selectable (e.g., via one or more inputs) to initiate playback of the first content for the shared content session. Displaying a first option that can be selected to initiate playback of the first content for the shared content session reduces the number of inputs required to begin playback of the first content for the participants of the shared content session by automatically displaying a control for starting playback of the first content when a participant has joined the shared content session.
In some embodiments, a request to initiate a shared content session with one or more external computer systems is associated with first content (e.g., "track 3") (e.g., the respective content and/or content selected for the shared content session). In some embodiments, upon displaying a notification (e.g., 634 or 650) that is selectable (e.g., via one or more inputs) to enable playback of content for the shared content session, the computer system (e.g., 600A) receives an input (e.g., 605-9 or 605-14) directed to the notification. In some embodiments, in response to receiving the input directed to the notification, the computer system displays, via the one or more display generating components, a second option (e.g., 634-1 or 650-1) (e.g., a "start" affordance and/or an "open" affordance) that is selectable (e.g., via one or more inputs) to initiate playback of the first content for the shared content session (e.g., the respective content and/or content initially selected for the shared content session and/or shared with the external computer system). Displaying a second option selectable to initiate playback of the first content of the shared content session in response to receiving input directed to the notification causes the computer system to automatically provide a control for starting playback of the first content for a participant of the shared content session when the participant has joined the shared content session.
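As a hedged illustration of the notification flow in the preceding paragraphs, the Swift sketch below models both described variants: selecting the notification either begins playback immediately or surfaces a "start"/"open" option. Every name here is an assumption, not an actual API:

```swift
// Hypothetical notification flow: when the first external system joins,
// a notification is shown automatically; selecting it either starts
// playback right away or reveals a start option, per the embodiment.

struct JoinNotification {
    let contentTitle: String      // content the session was initiated with
    var showsStartOption = false  // e.g., option 634-1 or 650-1
}

func firstParticipantJoined(content: String) -> JoinNotification {
    // Display the notification without user input, reducing the inputs
    // needed to start playback for the newly joined participant.
    return JoinNotification(contentTitle: content)
}

func notificationSelected(_ notification: inout JoinNotification,
                          playsImmediately: Bool,
                          startPlayback: (String) -> Void) {
    if playsImmediately {
        startPlayback(notification.contentTitle) // play right away
    } else {
        notification.showsStartOption = true     // surface the start option
    }
}
```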
In some embodiments, after initiating the shared content session, the computer system (e.g., 600A) displays a first control user interface (e.g., 615A) (e.g., graphical objects and/or menus) via one or more display generating components (e.g., 600-1A). In some embodiments, displaying the first control user interface via the one or more display generating components includes displaying the first control user interface with a first set of one or more control options (e.g., 615-8A and/or 615-9A) (e.g., selectable graphical elements, icons, text, and/or affordances) (e.g., message options, audio call options, video call options, and/or sharing options) in accordance with determining that the shared content session is provided via an asynchronous communication session (e.g., 642-4) (e.g., the shared content session is in a first mode, the shared content session is in a first state, and/or the communication session associated with the shared content session is currently an asynchronous communication session or a communication session in which real-time communication features (such as live video feeds and/or live audio feeds) are disabled). In some embodiments, displaying the first control user interface via the one or more display generating components includes displaying a first control user interface having a second set of one or more control options (e.g., 615-3, 615-4, and/or 615-5) that is different from the first set of one or more control options (e.g., message options, audio routing options, mic on/off options, speaker on/off options, camera on/off options, and/or sharing options) in accordance with a determination that the shared content session is provided via the real-time communication session (e.g., the shared content session is in a mode other than the first mode, the shared content session is in the second state, and/or the communication session associated with the shared content session is currently a synchronous communication session or a communication session in which real-time communication features such as a live video feed and/or a live audio feed are enabled). In accordance with a determination that the shared content session is provided via the asynchronous communication session, displaying a first control user interface having a first set of one or more control options, and in accordance with a determination that the shared content session is provided via the real-time communication session, displaying the first control user interface having a second set of one or more control options enables the computer system to automatically provide control for the shared content session based on a communication type of the shared content session. In some embodiments, the control user interface includes information associated with the shared content session (e.g., 615-1), information associated with the communication session (e.g., 615-1), one or more selectable communication session function options that, when selected, cause the computer system to perform respective functions associated with the communication session (e.g., 615-2, 615-3, 615-4, 615-5, 615-6, 615-7, 615-8, and/or 615-9), and/or one or more selectable shared content session function options that, when selected, cause the computer system to perform respective functions associated with the shared content session (e.g., 615-2, 615-3, 615-4, 615-5, 615-6, 615-7, 615-8, and/or 615-9).
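The determination described above — showing one option set for an asynchronous communication session and a different set for a real-time communication session — can be sketched as a simple branch. This is an illustrative Swift sketch with assumed names, not the patent's implementation:

```swift
// One control option set when the shared content session runs over an
// asynchronous communication session; a different set once a real-time
// communication session is active.

enum CommunicationType { case asynchronous, realTime }

enum ControlOption {
    case message, audioCall, videoCall, share
    case audioRouting, micToggle, speakerToggle, cameraToggle
}

func controlOptions(for type: CommunicationType) -> [ControlOption] {
    switch type {
    case .asynchronous:
        // First set: options to escalate to a call, plus message/share.
        return [.message, .audioCall, .videoCall, .share]
    case .realTime:
        // Second set: in-call toggles replace the call options.
        return [.message, .audioRouting, .micToggle, .speakerToggle,
                .cameraToggle, .share]
    }
}
```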
In some embodiments, after initiating the shared content session, the computer system (e.g., 600A) displays a second control user interface (e.g., 615A) (e.g., graphical objects and/or menus) having a first appearance (in some embodiments, the second control user interface is the first control user interface) via one or more display generating components (e.g., 600-1A) while the shared content session is in a first mode (e.g., 615A, as shown in fig. 6R) (e.g., a mode in which the shared content session is provided via an asynchronous communication session and/or a mode in which real-time communication features are disabled for the shared content session). In some embodiments, the computer system detects a change in the shared content session from the first mode to a third mode (e.g., as shown in fig. 6U, 6X, and/or 6 AA) that is different from the first mode (e.g., a mode in which the shared content session is provided via a synchronous communication session and/or a mode in which a real-time communication feature is enabled for the shared content session). In some implementations, in response to detecting a change in the shared content session from the first mode to the third mode, the computer system displays a second control user interface (e.g., 615A in fig. 6U, 6X, and/or 6 AA) having a second appearance different from the first appearance (e.g., indicates a change in the shared content session from the first mode to the third mode) (e.g., changes the appearance of the second control user interface from the first appearance to the second appearance). Displaying a second control user interface having a second appearance different from the first appearance in response to detecting a change in the shared content session from the first mode to the third mode enables the computer system to automatically update the second control user interface and provide feedback regarding a state of the computer system (e.g., a state in which the real-time communication feature is enabled for the shared content session) based on the mode of the shared content session.
In some embodiments, the computer system (e.g., 600A) displaying a second control user interface (e.g., 615A in fig. 6R) having a first appearance includes displaying (e.g., in the second control user interface) a first set of one or more selectable control options (e.g., 615-8A and/or 615-9A) (e.g., selectable graphical elements, icons, text, and/or affordances), and the computer system displaying a second control user interface (e.g., 615A in fig. 6U, 6X, and/or 6 AA) having a second appearance includes displaying (e.g., in the second control user interface) a second set of one or more selectable control options that is different from the first set of one or more selectable control options (e.g., 615-3, 615-4, and/or 615-5). Displaying the first set of one or more selectable control options when the second control user interface has a first appearance and displaying the second set of one or more selectable control options when the second control user interface has a second appearance enables the computer system to automatically update control options provided in the second control user interface based on a mode of the shared content session. In some embodiments, when the second control user interface has the first appearance, the control user interface includes an audio call option (e.g., 615-8) and a video call option (e.g., 615-9), and does not include a mic on/off option (e.g., 615-4), a speaker on/off option (e.g., 615-3), or a camera on/off option (e.g., 615-5). In some embodiments, when the second control user interface has a second appearance, the control user interface includes a mic on/off option (e.g., 615-4), a speaker on/off option (e.g., 615-3), and a camera on/off option (e.g., 615-5), and does not include an audio call option (e.g., 615-8) or a video call option (e.g., 615-9).
In some embodiments, the computer system (e.g., 600A) displaying the second control user interface (e.g., 615A in fig. 6R) having the first appearance includes displaying (e.g., in the second control user interface) a background of the second control user interface having a first state (e.g., a first background color (e.g., a light color such as white or yellow) and/or a first shading state (e.g., an unshaded or lightly shaded state)), and the computer system displaying the second control user interface having the second appearance (e.g., 615A in fig. 6U, 6X, and/or 6AA) includes displaying a background of the second control user interface having a second state (e.g., a second background color (e.g., a dark color such as gray or black) and/or a second shading state (e.g., a shaded or darkly shaded state)) that is different from the first state. Displaying the background of the second control user interface having the first state when the second control user interface has the first appearance and displaying the background of the second control user interface having the second state when the second control user interface has the second appearance provides feedback regarding the state of the computer system (e.g., the state of the shared content session at the computer system). In some embodiments, the control user interface is displayed with a different background color depending on the mode of the shared content session. For example, the control user interface background has a light color when the shared content session is provided via an asynchronous communication session, and a dark color when the shared content session is provided via a synchronous (e.g., real-time) communication session. In some implementations, the computer system displays a change in the background of the second control user interface from the first state to the second state, and vice versa, in response to detecting a change in the shared content session from the first mode to the third mode.
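A minimal sketch of the background mapping just described, with assumed type and property names (the light/dark mapping follows the example given above; it is illustrative, not normative):

```swift
// The control user interface's background tracks the session mode: light
// when the session is carried over asynchronous communication, dark once
// a synchronous (real-time) session is active.

enum SessionMode { case asynchronous, synchronous }

struct ControlBackgroundStyle {
    let isDark: Bool   // second state (e.g., gray/black) when true
}

func backgroundStyle(for mode: SessionMode) -> ControlBackgroundStyle {
    ControlBackgroundStyle(isDark: mode == .synchronous)
}
```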
In some implementations, the third mode is a mode in which one or more real-time communication features (e.g., a live camera feed and/or a live audio feed between the computer system and the external computer system) are enabled for the shared content session, and when the shared content session changes from the first mode (e.g., in fig. 6R) to the third mode (e.g., in fig. 6U, 6X, and/or 6 AA), the appearance of the second control user interface changes (e.g., and vice versa) based on the state change of the computer system (e.g., 600A) (e.g., and not based on the state of the shared content session of the external computer system participating in the shared content session). Changing the appearance of the second control user interface based on the change in the state of the computer system when the shared content session changes from the first mode to the third mode provides feedback regarding the state of the computer system (e.g., the state of the shared content session at the computer system).
In some embodiments, the request to initiate the shared content session includes sending (e.g., sharing and/or transmitting) a link (e.g., 645) to one or more external computer systems (e.g., 600B) for joining the shared content session (e.g., displayed in a message UI (e.g., 640)). Sending a link to join the shared content session to the one or more external computer systems reduces the number of inputs required to join the shared content session by providing a convenient and easily accessible link for joining the shared content session at the one or more external computer systems.
In some implementations, the link (e.g., displayed in the message (e.g., 642-4)) includes a join option (e.g., 645) (e.g., a "join" or "open" option). In some embodiments, the computer system detects one or more inputs (e.g., 605-13) corresponding to selection of a joining option (e.g., 645A or 645B) via one or more input devices. In some embodiments, in response to detecting one or more inputs corresponding to selection of a join option, the computer system (e.g., 600A and/or 600B) initiates a process for joining the shared content session (e.g., joining the ongoing shared content session, rejoining the ongoing shared content session, and/or initiating a new shared content session). Initiating a process for joining the shared content session in response to detecting one or more inputs corresponding to selection of the joining option enables the computer system to join the shared content session without displaying additional controls, which provides additional control options without cluttering the user interface. In some implementations, the link (e.g., 645A) is displayed at the computer system (e.g., 600A). In some embodiments, the link (e.g., 645B) is displayed at one or more external computer systems (e.g., 600B). In some implementations, content (e.g., 645 and/or 647) in a message (e.g., 642-4) (e.g., at a computer system and/or one or more external computer systems) changes over time to indicate a status of a shared content session. For example, the message may include an indication (e.g., 647) that the respective content was output during the shared content session, and that the indication changes over time as the respective content changes. As another example, the selectable option (e.g., 645) displayed in the message may change from "join" to "leave" when the computer system is participating in the shared content session. As another example, the message may include a status of the shared content session, such as, for example, an indication of an activity being performed (e.g., watching a movie or listening to a song), which may be updated as the activity changes.
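The link behavior above — a join option whose label updates over time as the session state changes — can be sketched as follows. This is a hypothetical illustration; the SessionLink type and the "Join"/"Leave" labels are assumptions drawn from the examples in the paragraph above:

```swift
// The option shown in the message toggles between "Join" and "Leave"
// depending on whether this system is already participating; selecting it
// starts the join (or leave) process.

struct SessionLink {
    var isParticipating = false
    var optionLabel: String { isParticipating ? "Leave" : "Join" }
}

func selectJoinOption(_ link: inout SessionLink,
                      join: () -> Void,
                      leave: () -> Void) {
    if link.isParticipating {
        leave()   // the option read "Leave" while participating
    } else {
        join()    // the option read "Join" otherwise
    }
    link.isParticipating.toggle()  // message content updates over time
}
```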
In some embodiments, after initiating a shared content session with one or more external computer systems (e.g., 600), the computer system (e.g., 600) displays, via one or more display generating components (e.g., 600-1), a status user interface (e.g., 691) (e.g., a user interface having status information about the shared content session (e.g., a group card), such as, for example, user statuses and/or content associated with the shared content session) that includes a call option (e.g., 615-8 and/or 615-9) (e.g., a video call option and/or an audio call option). In some embodiments, when the status user interface that includes the call option is displayed, the computer system detects, via one or more input devices (e.g., 600-1), one or more inputs corresponding to selection of the call option. In some implementations, in response to detecting the one or more inputs corresponding to selection of the call option, the computer system initiates a process for enabling real-time communication for the shared content session (e.g., initiating an audio call to establish an audio feed for the shared content session and/or initiating a video call to establish a video feed and optionally an audio feed for the shared content session). Initiating a process for enabling real-time communication for the shared content session in response to detecting one or more inputs corresponding to selection of the call option enables the computer system to enable real-time communication for the shared content session without displaying additional controls, which provides additional control options without cluttering the user interface.
In some embodiments, after initiating a shared content session with one or more external computer systems, the computer system (e.g., 600) displays, via one or more display generating components (e.g., 600-1), a user status interface (e.g., a user interface having status information regarding the shared content session (e.g., a group card), such as, for example, users associated with the shared content session, user statuses, and/or content associated with the shared content session) that includes an indication (e.g., 694-1) (e.g., text, a graphical indicator, an icon, and/or an affordance) of participants (e.g., one or more users and/or user accounts associated with external computer systems) that are participating in the shared content session with real-time communication enabled (e.g., with one or more real-time communication features, such as a live audio feed and/or a live video feed, enabled) and an indication (e.g., 694-1 in fig. 6AE) (e.g., text, a graphical indicator, an icon, and/or an affordance) of participants that are participating in the shared content session with real-time communication disabled (e.g., with no real-time communication features enabled for the shared content session). Displaying a user status interface that includes an indication of participants of the shared content session for whom real-time communication is enabled and an indication of participants of the shared content session for whom real-time communication is disabled provides feedback regarding the status of the participants of the shared content session.
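The grouping described above — participants partitioned by whether real-time communication is enabled — is sketched below in Swift. Names are illustrative assumptions, not an actual API:

```swift
// The status user interface ("group card") shows one indication for
// participants with real-time communication enabled and another for
// participants with it disabled.

struct Participant {
    let name: String
    let realTimeCommunicationEnabled: Bool
}

func statusSections(for participants: [Participant])
    -> (realTimeEnabled: [String], realTimeDisabled: [String]) {
    let enabled = participants.filter { $0.realTimeCommunicationEnabled }
                              .map(\.name)
    let disabled = participants.filter { !$0.realTimeCommunicationEnabled }
                               .map(\.name)
    return (enabled, disabled)
}
```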
In some embodiments, while the shared content session is in the first mode at the computer system (e.g., 600) (e.g., real-time communication features are disabled at the computer system for the shared content session), and in response to a second external computer system (e.g., 600) enabling real-time communication for the shared content session at the second external computer system, the computer system displays an incoming communication user interface (e.g., 616, 683, 684, and/or 688) (e.g., a banner or alert indicating an incoming invitation (e.g., an audio call and/or a video call) to enable real-time communication for the shared content session at the computer system) that includes an accept option (e.g., a "join" option and/or an "accept" option). In some embodiments, when the incoming communication user interface including the accept option is displayed, the computer system detects, via one or more input devices (e.g., 600-1), one or more inputs (e.g., 605-6, 605-34, 605-36, and/or 605-40) corresponding to selection of the accept option. In some embodiments, in response to detecting the one or more inputs corresponding to selection of the accept option, the computer system initiates a process for enabling real-time communication at the computer system for the shared content session (e.g., establishing a real-time communication session with the second external computer system and, in some embodiments, with other participants of the shared content session). Initiating a process for enabling real-time communication at the computer system for the shared content session in response to detecting one or more inputs corresponding to selection of the accept option enables the computer system to enable real-time communication for the shared content session at the computer system without displaying additional controls, which provides additional control options without cluttering the user interface.
In some embodiments, while the shared content session is in a fourth mode (e.g., 600A in fig. 6 AE) in which a set of real-time communication features is enabled for the shared content session, the computer system (e.g., 600A) receives a request (e.g., 605-43, 605-45, 605-47, selection of option 697-1, and/or selection of option 697-2) to transition the shared content session from the fourth mode via one or more input devices (e.g., 600-1A) (e.g., a set of one or more inputs corresponding to a request to end the real-time communication of the shared content session, a set of one or more inputs corresponding to a request to end the shared content session, and/or a set of one or more inputs corresponding to a request to transition the shared content session from the fourth mode to the first mode). In some embodiments, while the shared content session is in a fourth mode in which a set of real-time communication features are enabled for the shared content session, and in response to receiving a request to transition the shared content session from the fourth mode, the computer system displays a session options user interface (e.g., 696 and/or 697) via one or more display generating components (e.g., 600-1). In some implementations, the session options user interface includes termination options (e.g., 696-2, 697-1, and/or 697-2) (e.g., a "leave" option, an "end session" option, an icon, an affordance, and/or a graphical element) that are selectable (e.g., via one or more inputs) to terminate a shared content session at the computer system. In some implementations, the session options user interface includes a continuation option (e.g., 696-1) (e.g., a "continue" option, a "cancel" option (e.g., in confirmation interface 697), an icon, an affordance, and/or a graphical element) selectable (e.g., via one or more inputs) to continue sharing the content session at the computer system (e.g., with the real-time communication feature disabled or enabled). In response to receiving a request to transition the shared content session from the fourth mode, displaying a session option user interface including a termination option selectable to terminate the shared content session at the computer system and a continuation option selectable to continue the shared content session at the computer system causes the computer system to automatically display an option to continue or terminate the shared content session upon receiving the request to transition the shared content session from the fourth mode. In some embodiments, the computer system detects selection (e.g., 605-47) of a termination option (e.g., 696-2) via one or more input devices. In some implementations, when the termination option (e.g., 696-2 and/or 697-1) is selected, the shared content session is terminated at the computer system (e.g., 600A) while the shared content session continues for other participants (e.g., 600B) of the shared content session. In some implementations, when a termination option (e.g., 696-2 and/or 697-2) is selected, the shared content session is terminated for all participants of the shared content session. In some implementations, when a termination option (e.g., 696-2) is selected, the computer system displays an option (e.g., 697-1, and/or 697-2) to terminate the shared content session for all participants of the shared content session or for the computer system alone.
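The session options flow above — a continuation option plus termination options that apply either to this system alone or to all participants — is sketched below. This is a hypothetical Swift illustration; the case and closure names are assumptions:

```swift
// A request to leave the fourth mode surfaces a continuation option and
// termination options; termination can apply locally or to everyone.

enum SessionOption {
    case continueSession   // e.g., 696-1
    case endForMe          // e.g., 697-1: session continues for others
    case endForEveryone    // e.g., 697-2: session ends for all participants
}

func handle(_ option: SessionOption,
            continueLocally: () -> Void,
            endLocally: () -> Void,
            endForAll: () -> Void) {
    switch option {
    case .continueSession: continueLocally()
    case .endForMe:        endLocally()
    case .endForEveryone:  endForAll()
    }
}
```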
In some implementations, in response to receiving a request to transition the shared content session from the fourth mode (e.g., 605-46), the computer system (e.g., 600A) stops displaying the third control user interface (e.g., 615A is no longer displayed in fig. 6 AF) for the shared content session (e.g., graphical objects and/or menus) (e.g., replaces the third control user interface with a session option user interface). In some implementations, the computer system displays the session option user interface (e.g., 696) at a location that at least partially overlaps with a location where the third control user interface (e.g., 615A) was previously displayed (e.g., in fig. 6 AE). In some embodiments, the third control user interface includes information associated with the shared content session, information associated with the communication session, one or more selectable communication session function options that when selected cause the computer system to perform respective functions associated with the communication session, and/or one or more selectable shared content session function options that when selected cause the computer system to perform respective functions associated with the shared content session.
In some embodiments, a computer system (e.g., 600A) receives a selection (e.g., 605-46) of a continuation option (e.g., 696-1) via one or more input devices (e.g., 600-1) (e.g., a set of one or more inputs directed to the continuation option). In some embodiments, in response to receiving a selection of the continue option, the computer system transitions the shared content session from the fourth mode to the first mode (e.g., as shown in fig. 6 AH) (e.g., continues the shared content session with the set of real-time communication features disabled). Transitioning the shared content session from the fourth mode to the first mode in response to receiving the selection of the continue option causes the computer system to automatically continue the shared content session in the first mode.
In some embodiments, the computer system (e.g., 600A) displaying the session options user interface (e.g., 696) includes, in accordance with a determination that the shared content session was initiated via asynchronous communication, displaying a continuation option (e.g., 696 and/or 696-1) that can be selected to continue the shared content session at the computer system. In some embodiments, the computer system displaying the session options user interface includes, in accordance with a determination that the shared content session was not initiated via asynchronous communication, forgoing display of the continuation option selectable to continue the shared content session at the computer system. Displaying the continuation option in accordance with a determination that the shared content session was initiated via asynchronous communication, and forgoing display of the continuation option in accordance with a determination that the shared content session was not initiated via asynchronous communication, causes the computer system to automatically display the option to continue the shared content session in the first mode if the shared content session was initiated via an asynchronous communication session. In some implementations, the option to continue the shared content session with the real-time communication features disabled is only available when the shared content session was initiated via asynchronous communication. In some implementations, when the shared content session was not initiated via asynchronous communication, the computer system continues the shared content session in the fourth mode (e.g., with the real-time communication features enabled) in response to selection of a continue option. In some embodiments, the computer system does not display the option to continue the shared content session when the shared content session was not initiated via asynchronous communication.
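An illustrative gate for this behavior: the continuation option is included only when the session was initiated via asynchronous communication. The labels below are assumptions, not the patent's strings:

```swift
// The continuation option is offered only when the shared content session
// was initiated via asynchronous communication; otherwise it is omitted.

func sessionOptionLabels(initiatedViaAsynchronousCommunication: Bool) -> [String] {
    var labels = ["End for me", "End for everyone"]  // termination options
    if initiatedViaAsynchronousCommunication {
        labels.insert("Continue", at: 0)             // continuation option
    }
    return labels
}

print(sessionOptionLabels(initiatedViaAsynchronousCommunication: true))
// ["Continue", "End for me", "End for everyone"]
```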
In some embodiments, after transitioning the shared content session from the fourth mode to the first mode (e.g., as shown in fig. 6 AH) (e.g., after disabling real-time communication at the computer system for the shared content session), the computer system (e.g., 600A) receives a set of one or more inputs (e.g., 605-25) corresponding to a request to display a message user interface (e.g., 640A) (e.g., a user interface of a message application at the computer system) via one or more input devices (e.g., 600-1A). In some embodiments, in response to receiving a set of one or more inputs corresponding to a request to display a message user interface, the computer system displays the message user interface via one or more display generating components (e.g., 640A). In some implementations, displaying the message user interface via the one or more display generating components includes displaying a rejoin option (e.g., 645A) (e.g., a graphical element, icon, and/or affordance) in the message user interface (e.g., in a message in the message user interface) selectable (e.g., via one or more inputs) to transition the shared content session at the computer system from a first mode to a fourth mode (e.g., to re-enable real-time communication features for the shared content session). Displaying a rejoin option that can be selected to transition the shared content session from the first mode to the fourth mode enables the device to provide the user with an option to switch modes of the shared content session without having to display additional controls for the shared content session.
In some implementations, the asynchronous communication includes text-based messaging (e.g., 642-4) (e.g., instant messaging, text messaging, SMS, MMS, and/or email).
In some implementations, while first content (e.g., a first movie, program, and/or music) is being output as respective content of a shared content session (e.g., while "track 3" is being played in fig. 6Q), a computer system (e.g., 600A) receives a request (e.g., 605-23) to initiate playback of second content (e.g., a "football program") (e.g., a second movie, program, and/or music) at the computer system via one or more input devices (e.g., 600-1A), wherein the second content is different from the first content. In some embodiments, in response to receiving a request to initiate playback of the second content at the computer system, the computer system outputs the second content (e.g., 680) as the respective content of the shared content session (e.g., and stops outputting the first content as the respective content of the shared content session). Outputting the second content as the respective content of the shared content session in response to receiving a request to initiate playback of the second content at the computer system causes the computer system to automatically update the respective content of the shared content session based on the content selected for playback at the computer system.
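The content-switching behavior above amounts to replacing the session's respective content when new playback begins locally. A minimal Swift sketch with assumed names:

```swift
// Starting playback of new content at the local system replaces what the
// shared content session outputs for its participants.

final class SharedContentPlayback {
    private(set) var currentContent: String?

    func play(_ content: String) {
        // Stop outputting the previous content and output the new content
        // as the respective content of the shared content session.
        currentContent = content
    }
}

let playback = SharedContentPlayback()
playback.play("track 3")
playback.play("football program")        // replaces "track 3"
print(playback.currentContent ?? "none") // "football program"
```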
In some embodiments, when the shared content session is active and a first user interface (e.g., 679 in fig. 6R) is displayed, the computer system (e.g., 600A) receives, via one or more input devices (e.g., 600-1), a request (e.g., 605-25) to display a message user interface (e.g., a selection of a message option (e.g., 615-2A) in the control user interface (e.g., 615A)). In some implementations, in response to receiving the request to display the message user interface (e.g., a user interface for a messaging application at the computer system), the computer system displays the message user interface (e.g., 640A in fig. 6S) on at least a portion of the first user interface (e.g., partially overlapping or fully overlapping the first user interface). Displaying the message user interface on at least a portion of the first user interface in response to receiving the request to display the message user interface causes the computer system to automatically display a message user interface for communicating with other users while the shared content session is active.
In some embodiments, upon displaying (e.g., in fig. 6S) a message user interface (e.g., 640A) on at least a portion of a first user interface (e.g., 679), the computer system (e.g., 600A) receives, via one or more input devices, a set of one or more inputs (e.g., 605-31 and/or 605-32) corresponding to a request to dismiss the message user interface (e.g., a tap input, a swipe gesture from a respective location or region of a display generating component, and/or selection of a graphical element such as a completion affordance, icon, and/or text). In some embodiments, in response to receiving the set of one or more inputs corresponding to the request to dismiss the message user interface (and/or optionally, in accordance with a determination that the set of one or more inputs meets a set of criteria (e.g., a set of criteria for dismissing the message user interface)), the computer system ceases to display the message user interface (e.g., 640A) and displays the first user interface (e.g., 679) (e.g., continues to display at least a portion of the first user interface and/or redisplays at least a portion of the first user interface). Ceasing to display the message user interface and displaying the first user interface in response to receiving a set of one or more inputs corresponding to a request to dismiss the message user interface causes the computer system to automatically dismiss the message user interface and display the first user interface.
In some implementations, the first user interface (e.g., 679) includes a representation (e.g., a window or PiP displaying output content) of media (e.g., 680A) output as the respective content of the shared content session. In some embodiments, the computer system (e.g., 600A) displaying the message user interface (e.g., 640A) on at least a portion of the first user interface (e.g., 679) includes displaying the representation of the media (e.g., 680A) at a location that at least partially overlaps the message user interface region but does not overlap a recipient region (e.g., 640-1A) of the message user interface (e.g., a region of the message user interface indicating one or more recipients of messages in the message user interface) and does not overlap at least a portion of a message display region of the message user interface (e.g., a region of the message user interface including one or more messages (e.g., 642-4 and/or 642-5) transmitted as part of the message session). Displaying the representation of the media at a location that at least partially overlaps the message user interface region but does not overlap the recipient region and does not overlap at least a portion of the message display region enhances the user experience of the shared content session by allowing the user to continue viewing the respective content while also being able to communicate with other users via the message user interface, including being able to view the recipients of the messages.
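A minimal layout sketch of this constraint, assuming names and an arbitrary 8-point margin (this is an illustration of the rule, not the patent's layout code):

```swift
// The media PiP may overlap the message list but must not cover the
// recipient region, so a proposed frame intersecting that region is
// nudged below it.

import Foundation

func pipFrame(proposed: CGRect, recipientRegion: CGRect) -> CGRect {
    var frame = proposed
    if frame.intersects(recipientRegion) {
        // Keep the recipients visible while the content keeps playing.
        frame.origin.y = recipientRegion.maxY + 8
    }
    return frame
}
```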
In some embodiments, the computer system (e.g., 600A) displaying the message user interface (e.g., 640A) on at least a portion of the first user interface (e.g., 679) includes displaying the message user interface concurrently with at least a portion of the first user interface (e.g., the message user interface partially overlays the first user interface) in accordance with a determination that the computer system is a first type of device (e.g., a tablet, laptop, and/or desktop computer). In some embodiments, the computer system displaying the message user interface on at least a portion of the first user interface includes displaying the message user interface without displaying at least a portion of the first user interface (e.g., the message user interface completely overlays the first user interface) in accordance with a determination that the computer system is a second type of device (e.g., a smart phone and/or a wearable device).
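The device-type branch above can be sketched as a single predicate. The device classes below are assumptions standing in for the patent's "first type" and "second type" of device:

```swift
// Larger devices show the message user interface as a partial overlay so
// the first user interface remains visible; compact devices cover it
// entirely.

enum DeviceClass { case tablet, laptop, desktop, phone, wearable }

func messageUICoversFirstUI(on device: DeviceClass) -> Bool {
    switch device {
    case .tablet, .laptop, .desktop: return false  // partial overlay
    case .phone, .wearable:          return true   // full cover
    }
}
```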
In some embodiments, while the shared content session is in the first mode (e.g., as shown in fig. 6R and/or 6Z) (e.g., the real-time communication feature is disabled), the computer system (e.g., 600A) receives a set of one or more inputs (e.g., 605-28 and/or 605-38) corresponding to a request to select screen shared content (e.g., a screen and/or application interface being displayed by the computer system) as respective content for the shared content session. In some embodiments, in response to receiving a set of one or more inputs corresponding to a request to select screen-shared content as respective content of a shared content session, the computer system selects the screen-shared content as respective content of the shared content session (in some embodiments, including outputting the screen-shared content as respective content of the shared content session) and transitions the shared content session from a first mode to a fifth mode (e.g., as shown in fig. 6 AA) in which a real-time audio channel (e.g., an audio feed) is active for the shared content session. Selecting screen-shared content as the respective content of the shared content session and transitioning the shared content session from the first mode to the fifth mode in response to receiving a set of one or more inputs corresponding to a request to select screen-shared content as the respective content of the shared content session causes the computer system to automatically enable real-time communication for the shared content session when the respective content is screen-shared content. In some implementations, when the shared content session is in the fifth mode, the audio channel is active and the real-time video channel is not active. In some embodiments, the real-time audio channel and the real-time video channel are both active when the shared content session is in the fifth mode.
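The mode transition above — selecting screen-shared content implies activating a real-time audio channel — is sketched below. Because the text notes that some embodiments also activate video, that choice is left as a parameter; all names are assumptions:

```swift
// Selecting screen-shared content as the session's respective content
// also activates a real-time audio channel (the "fifth mode" above).

struct ScreenShareSessionState {
    var contentIsScreenShare = false
    var audioChannelActive = false
    var videoChannelActive = false
}

func selectScreenShare(state: inout ScreenShareSessionState,
                       enableVideo: Bool = false) {
    state.contentIsScreenShare = true
    state.audioChannelActive = true        // fifth mode: live audio is on
    state.videoChannelActive = enableVideo // active in some embodiments
}
```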
It should be noted that the details of the process described above with respect to method 700 (e.g., fig. 7) also apply in a similar manner to the method described below. For example, any of methods 800, 900, and 1100 optionally include one or more of the features of the various methods described above with reference to method 700. For example, any of the aspects discussed with respect to method 700 for initiating a shared content session using asynchronous communications may be applied to the shared content session described with respect to any of methods 800, 900, and/or 1100. For the sake of brevity, these details are not repeated.
Fig. 8 is a flow chart illustrating a method for managing real-time communication features for a shared content session using a computer system, in accordance with some embodiments. The method 800 is performed at a computer system (e.g., 100, 300, 500, and/or 600) (e.g., a smart phone, a tablet, a desktop computer, a laptop computer, and/or a head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) that is in communication with (e.g., includes) one or more display generating components (e.g., 600-1) (e.g., a display controller, a touch-sensitive display system, a speaker, a bone-conduction audio output device, a haptic output generator, a projector, a holographic display, and/or a head-mounted display system) and one or more input devices (e.g., 600-1, 600-2, and/or 600-3) (e.g., a touch-sensitive surface, a keyboard, a mouse, a touch pad, a microphone, one or more optical sensors for detecting gestures, one or more capacitive sensors for detecting hover inputs, and/or an accelerometer/gyroscope/inertial measurement unit). In some embodiments, the one or more display generating components and/or the one or more input devices are integrated with and/or connected to the computer system. Some operations in method 800 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 800 provides an intuitive way for managing real-time communication features for shared content sessions. The method reduces the cognitive burden on the user to manage the real-time communication characteristics of the shared content session, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling users to manage real-time communication features of a shared content session faster and more efficiently saves power and increases the time interval between battery charges.
At method 800, while the computer system (e.g., 600B) (e.g., a smart phone, tablet, desktop computer, laptop computer, and/or head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) is in a shared content session in which synchronized content is enabled for sharing with an external computer system (e.g., 600A) (e.g., one or more external computer systems (e.g., computer systems associated with a remote user (e.g., being operated by and/or logged into a user account associated with the remote user))) while the external computer system is not outputting the synchronized content (e.g., the synchronized content has been added to and/or selected for the shared content session but is not actively being output at the external computer system because the remote user has not yet started playback, has paused playback, or has stopped playback), and while real-time communication features (e.g., a live camera feed and/or a live audio feed between the computer system and the external computer system) are disabled for the shared content session (e.g., the shared content session is in a first mode) (e.g., as shown in fig. 6U, 6X, and 6AA), the computer system receives (802) an invitation (e.g., 683, 684, and/or 688) to join a real-time communication session (e.g., with the external computer system, and optionally a different computer system) (in some embodiments, the real-time communication session is provided as part of the shared content session) and displays (804) (e.g., in response to receiving the invitation to join the real-time communication session), via one or more display generating components (e.g., 600-1B) (e.g., a display controller, a touch-sensitive display system, a speaker, a bone-conduction audio output device, a haptic output generator, a projector, a holographic display, and/or a head-mounted display system), an option (e.g., 683-2, 684-2, and/or 688-2) (e.g., a selectable graphical element, icon, text, and/or affordance) to accept the invitation to join the real-time communication session. In some implementations, the shared content session, when active, enables the computer system to output the respective content (e.g., synchronized content and/or screen-shared content) while the external computer system is outputting the respective content. In some embodiments, the invitation to join the real-time communication session is an invitation to initiate and/or join a new real-time communication session. In some embodiments, the invitation to join the real-time communication session is an invitation to join an ongoing real-time communication session. In some embodiments, the computer system displays a representation of the invitation (e.g., 683, 684, and/or 688). In some embodiments, the representation of the invitation includes the option to accept the invitation to join the real-time communication session.
At step 806 of method 800, after receiving the invitation to join the real-time communication session (and in some embodiments, after displaying the invitation) and in accordance with a determination that an option to accept the invitation to join the real-time communication session (e.g., 683-2, 684-2, and/or 688-2) has been selected (e.g., in response to detecting a selection of an option to accept the invitation to join the real-time communication session (e.g., 605-34, 605-36, and/or 605-40)), the computer system (e.g., 600B) joins (808) the real-time communication session (e.g., as shown in fig. 6W, 6Y, and/or 6 AC) (in some embodiments, continues to share the content session concurrently, with synchronized content being selectable for sharing with the external computer system). In some implementations, joining the real-time communication session includes enabling a set of real-time communication features (e.g., a live camera feed and/or a live audio feed between the computer system and an external computer system) for the shared content session.
At step 806 of method 800, after receiving the invitation to join the real-time communication session and in accordance with a determination that the option to accept the invitation to join the real-time communication session (e.g., 683-2, 684-2, and/or 688-2) has not been selected (e.g., after a predetermined amount of time, the option has not been selected and/or the computer system has received a set of one or more inputs (e.g., 605-33, 605-37, and/or 605-39) including an instruction to dismiss the invitation, ignore the invitation, and/or decline the invitation), the computer system forgoes (810) joining the real-time communication session (e.g., as shown in fig. 6V and/or 6AB) (in some embodiments, while continuing the shared content session, wherein synchronized content is selected for sharing with the external computer system). Joining the real-time communication session in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, and forgoing joining the real-time communication session in accordance with a determination that the option has not been selected, enables the user to join or decline the real-time communication session while in the shared content session without having to display additional controls.
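Steps 806 through 810 reduce to a single branch, sketched below as a hedged Swift illustration (names are assumptions, not the claimed implementation):

```swift
// After the invitation is received, the system joins the real-time
// communication session only if the accept option was selected; otherwise
// it forgoes joining while the shared content session continues.

func resolveInvitation(acceptOptionSelected: Bool,
                       joinRealTimeSession: () -> Void) {
    if acceptOptionSelected {
        joinRealTimeSession()  // step 808: join, enabling live audio/video
    } else {
        // Step 810: forgo joining; the shared content session continues
        // with synchronized content still selectable for sharing.
    }
}
```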
In some embodiments, the computer system (e.g., 600) displays, via one or more display generating components (e.g., 600-1), a graphical object (e.g., 620) (e.g., an icon, affordance, and/or graphical element) that is selectable (e.g., via one or more inputs) to display a set of controls (e.g., 615) for the shared content session (e.g., a control user interface) concurrently with the option to accept the invitation to join the real-time communication session (e.g., 683-2, 684-2, and/or 688-2). Displaying the option to accept the invitation to join the real-time communication session concurrently with the graphical object that is selectable to display a set of controls for the shared content session enables the user to join the real-time communication session while in the shared content session without having to display the set of controls for the shared content session.
In some embodiments (e.g., after and/or concurrently with receiving the invitation to join the real-time communication session), the computer system (e.g., 600B) displays, via one or more display generating components (e.g., 600-1B), an option (e.g., 683-1, 684-1, and/or 688-1) (e.g., a selectable graphical element, icon, text, and/or affordance) to decline the invitation to join the real-time communication session. In some embodiments, after receiving the invitation to join the real-time communication session and in accordance with a determination that the option to decline the invitation to join the real-time communication session has been selected (e.g., 605-33, 605-37, and/or 605-39) (e.g., in response to detecting selection of the option to decline the invitation to join the real-time communication session), the computer system declines (e.g., terminates and/or stops displaying) the invitation to join the real-time communication session (e.g., does not join the real-time communication session) (in some embodiments, while continuing the shared content session, with the synchronized content selected for sharing with the external computer system). In some embodiments, after receiving the invitation to join the real-time communication session and in accordance with a determination that the option to decline the invitation to join the real-time communication session has not been selected, the computer system forgoes declining the invitation to join the real-time communication session (e.g., at least temporarily continues to display the invitation to join the real-time communication session). Declining the invitation to join the real-time communication session in accordance with a determination that the option to decline the invitation has been selected, and forgoing declining the invitation in accordance with a determination that the option to decline the invitation has not been selected, enables the user to decline the invitation to join the real-time communication session while in the shared content session without having to display additional controls.
In some embodiments, after receiving the invitation to join the real-time communication session and in accordance with a determination that the option (e.g., 683-2, 684-2, and/or 688-2) to accept the invitation to join the real-time communication session has not been selected within at least a threshold amount of time (e.g., 5 seconds, 7 seconds, or 10 seconds), the computer system (e.g., 600B) ceases to display the invitation to join the real-time communication session (e.g., 683, 684, and/or 688) (e.g., does not join the real-time communication session) (in some embodiments, while continuing the shared content session, with synchronized content selected for sharing with the external computer system). Ceasing to display the invitation to join the real-time communication session in accordance with a determination that the option to accept the invitation has not been selected within at least a threshold amount of time causes the computer system to automatically cease displaying (and in some embodiments, decline) the invitation to join the real-time communication session when the invitation has not been accepted within the threshold amount of time. In some embodiments, in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected within at least the threshold amount of time, the computer system declines the invitation to join the real-time communication session.
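A hypothetical sketch of the timed dismissal above; DispatchQueue is a standard Swift API, while the surrounding types and the 7-second value (one of the thresholds the text mentions) are assumptions:

```swift
// If the accept option is not selected within a threshold amount of time,
// the invitation stops being displayed (and is optionally declined).

import Dispatch

final class InvitationBanner {
    private(set) var isVisible = true

    func accept() { isVisible = false /* joining handled elsewhere */ }

    func scheduleAutoDismiss(after seconds: Double = 7) {
        DispatchQueue.main.asyncAfter(deadline: .now() + seconds) { [weak self] in
            guard let self = self, self.isVisible else { return }
            self.isVisible = false  // cease displaying; optionally decline
        }
    }
}
```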
In some embodiments, in response to receiving an invitation to join a real-time communication session, a computer system (e.g., 600B) displays a notification (e.g., 648, 683, 684, and/or 688) (e.g., a banner and/or graphical user interface object) corresponding to the invitation to join the real-time communication session (e.g., representation and/or including an indication thereof) via one or more display generating components (e.g., 600-1B). Displaying a notification corresponding to an invitation to join a real-time communication session provides feedback regarding the status of the computer system (e.g., the status that the invitation has been received). In some embodiments, the notification includes an option to accept the invitation to join the real-time communication session (e.g., 683-2, 684-2, and/or 688-2) (and/or a rejection (e.g., options 683-1, 684-1, and/or 688-1 of the invitation to join the real-time communication session)).
In some embodiments, upon displaying the notifications (e.g., 648, 683, 684, and/or 688), the computer system (e.g., 600B) receives a selection (e.g., 605-12) of the notification (e.g., 648) via one or more input devices (e.g., 600-1B) (e.g., a set of one or more inputs comprising the selection of the notification). In some embodiments, in response to receiving the selection of the notification, the computer system displays an asynchronous communication user interface (e.g., 640B) (e.g., a user interface for a messaging application at the computer system and/or a user interface for an email application at the computer system) via one or more display generating components (e.g., 600-1B). Displaying the asynchronous communication user interface in response to receiving the selection of the notification reduces the amount of input required to display a user interface for providing asynchronous communication (e.g., to participants of the shared content session and/or to a user account associated with an external computer system sending an invitation to join the real-time communication session). In some embodiments, the asynchronous communication user interface enables the computer system to provide asynchronous communications (e.g., text messages, SMS messages, MMS messages, and/or emails) to participants (e.g., 600A) of a shared content session and/or to user accounts associated with external computer systems that send invitations to join a real-time communication session. In some embodiments, the asynchronous communication user interface includes an indication (e.g., 640-1) of the participants of the message conversation (e.g., recipients of text messages, SMS messages, MMS messages, and/or emails), the content of messages sent between the participants (e.g., 642-1, 642-2, 642-3, and/or 642-4), and/or a message composition user interface (e.g., 644) for composing and/or sending new messages to other participants.
In some embodiments, the computer system receives an invitation (e.g., 645 and/or 648) to join the shared content session before the computer system (e.g., 600B) is in the shared content session in which synchronized content is enabled for sharing with the external computer system. In some implementations, the invitation to join the shared content session is received via an asynchronous communication (e.g., 642-4B) (e.g., an SMS message, an MMS message, or an email received at the computer system) provided using an asynchronous communication application (e.g., a messaging application or an email application operating at the computer system).
In some implementations, the asynchronous communication (e.g., 642-4B) includes an indication (e.g., 647B) of the synchronous content for the shared content session (e.g., text, icons, and/or graphical elements representing the synchronous content selected for the shared content session). The inclusion of an indication of synchronous content in the asynchronous communication provides feedback indicating which content to select for sharing via the shared content session.
In some implementations, the asynchronous communication (e.g., 642-4B) includes an option (e.g., 645B) (e.g., selectable graphical elements, icons, text, and/or affordances) selectable (e.g., via one or more inputs) to join the shared content session. Including an option that can be selected to join a shared content session with asynchronous communications reduces the amount of input required to join the shared content session by providing an option that can be easily and conveniently selected by a user of the computer system without having to navigate to another user interface.
In some embodiments, the invitation to join the real-time communication session (e.g., 683 and/or 684) is initiated (e.g., at the external computer system (e.g., 600A)) via selection (e.g., 605-26 or 605-27) of a call option (e.g., 615-8A or 615-9A) (e.g., selectable graphical element, icon, text, and/or affordance) (e.g., audio call option or video call option) in a synchronized content session user interface (e.g., 615A) (e.g., a control user interface displayed at the external computer system).
In some embodiments, the invitation to join the real-time communication session (e.g., 688) is initiated (e.g., at the external computer system (e.g., 600A)) via selection (e.g., 605-28 and/or 605-38) of screen sharing options (e.g., 615-6A and/or 652) (e.g., selectable graphical elements, icons, text and/or affordances) in a synchronized content session user interface (e.g., 615A and/or 655) at the external computer system (e.g., a control user interface displayed at the external computer system).
In some embodiments, the option to accept the invitation to join the real-time communication session (e.g., 688) includes an indication that the respective user is sharing screen content (e.g., image data generated by the device (e.g., computer system; external computer system)) from an external computer system (e.g., 600A) associated with the respective user, the image data providing a real-time representation of image or video content currently displayed at the device (e.g., as shown in FIG. 6 AA). Displaying an indication that the respective user is sharing screen content from an external computer system associated with the respective user provides feedback regarding the type of content that is being shared with the computer system via the shared content session. In some embodiments, the computer system displays a notification and/or banner for the incoming invitation, the notification and/or banner including an indication that the user is sharing their screen.
In some embodiments, after forgoing joining the real-time communication session (e.g., in response to declining to join the real-time communication session (e.g., decline input 605-39 on option 688-1) and/or in response to not selecting the option to accept the invitation to join the real-time communication session (e.g., 688-2)), the computer system (e.g., 600B) displays, via one or more display generating components (e.g., 600-1B), an indication of an activity occurring in the shared content session (e.g., sharing of screen-shared content) and a prompt to join the real-time communication session (e.g., text indicating that a user of the computer system may join the real-time communication session to view the screen-shared content). Displaying an indication of activity occurring in the shared content session and a prompt to join the real-time communication session provides feedback regarding the status of the computer system (e.g., a status in which the computer system is not participating in the screen-shared content, but may participate in the shared content by joining the real-time communication session).
In some embodiments, after forgoing joining the real-time communication session (e.g., in response to declining to join the real-time communication session (e.g., via input 605-39 on option 688-1) and/or in response to failing to select an option to accept the invitation to join the real-time communication session (e.g., 688-2)), the computer system (e.g., 600B) displays, via one or more display generating components (e.g., 600-1B), a control user interface (e.g., 615B) (e.g., a graphical object and/or menu) for the shared content session, the control user interface having a first control option (e.g., 615-8B and/or 615-9B) (e.g., a selectable graphical element, icon, text, and/or affordance) (e.g., a message option, an audio call option, a video call option, and/or a sharing option). In some embodiments, the computer system receives, via one or more input devices (e.g., 600-1B), a set of one or more inputs including a selection of the first control option (e.g., 605-35). In some embodiments, in response to receiving the set of one or more inputs including the selection of the first control option, the computer system initiates a process (e.g., as shown in fig. 6W) for joining the real-time communication session (e.g., joining or rejoining an ongoing audio call to establish an audio feed for the shared content session and/or joining or rejoining an ongoing video call to establish a video feed and optionally an audio feed for the shared content session). Initiating a process for joining the real-time communication session in response to receiving a set of one or more inputs including a selection of the first control option enables the computer system to join the real-time communication session even after having previously forgone joining it. In some implementations, after declining or failing to join the real-time communication session, the computer system can join the real-time communication session in response to selection of an option (e.g., an audio call option and/or a video call option) in a control user interface displayed for the shared content session. In some embodiments, after joining the real-time communication session, the computer system enables the user to communicate with participants of the shared content session using real-time communications such as live audio feeds and/or live video feeds.
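As an illustrative, non-limiting sketch of the behavior described above, the following Swift code models a session in which the user has declined the real-time communication invitation but can later join via a control option. All type and member names here are hypothetical and do not correspond to any Apple API:

```swift
import Foundation

// Hypothetical model of a shared content session after a declined invitation.
enum RTCMembership {
    case joined
    case declined  // user rejected or did not accept the invitation
}

struct SharedContentSessionState {
    var isActive = true
    var membership: RTCMembership = .declined

    /// While active but not joined, the UI shows an indication of session
    /// activity and a prompt to join (e.g., to view screen-share content).
    var bannerText: String? {
        guard isActive, membership == .declined else { return nil }
        return "A participant is sharing their screen. Join the call to view it."
    }

    /// Selecting the audio/video call option in the control user interface
    /// (re)joins the ongoing real-time communication session.
    mutating func selectCallControlOption() {
        membership = .joined  // establishes live audio and/or video feeds
    }
}
```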
It is noted that the details of the process described above with respect to method 800 (e.g., fig. 8) also apply in a similar manner to the methods described above and/or below. For example, methods 700, 900, and 1100 optionally include one or more of the features of the various methods described above with reference to method 800. For example, any of the aspects discussed with respect to method 800 for managing real-time communication features of a shared content session may be applied to a shared content session described with respect to any of methods 700, 900, and/or 1100. For the sake of brevity, these details are not repeated.
Fig. 9 is a flow chart illustrating a method for managing shared content sessions using a computer system, according to some embodiments. Method 900 is performed at a computer system (e.g., 100, 300, 500, and/or 600) (e.g., a smart phone, a tablet, a desktop computer, a laptop computer, and/or a head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) that is in communication with (e.g., includes and/or is connected to) one or more display generating components (e.g., 600-1) (e.g., a display controller, a touch-sensitive display system, speakers, a bone-conduction audio output device, a haptic output generator, a projector, a holographic display, and/or a head-mounted display system) and one or more input devices (e.g., 600-1, 600-2, and/or 600-3) (e.g., a touch-sensitive surface, a keyboard, a mouse, a touch pad, a microphone, one or more optical sensors for detecting gestures, one or more capacitive sensors for detecting hover inputs, and/or an accelerometer/gyroscope/inertial measurement unit). Some operations in method 900 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, method 900 provides an intuitive way for managing shared content sessions. The method reduces the cognitive burden on the user to manage the shared content session, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling users to manage shared content sessions faster and more efficiently saves power and increases the time interval between battery charges.
In method 900, while a computer system (e.g., 600A) is in a communication session (e.g., a real-time communication session, an asynchronous communication session, and/or a shared content session) with an external computer system (e.g., 600B) (e.g., one or more external computer systems) (e.g., a computer system associated with a remote user (e.g., operated by and/or logged into a user account associated with the remote user)), and optionally while the computer system is in a shared content session with the external computer system, the computer system (e.g., a smart phone, a tablet, a desktop computer, a laptop computer, and/or a head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) displays (902), via the one or more display generating components, a control user interface (e.g., 615A) (e.g., a control menu and/or graphical user interface) for controlling one or more settings of the communication session, the control user interface including a first control option (e.g., 615-6A) (e.g., a selectable graphical element, icon, text, and/or affordance) (e.g., a message option, an audio call option, a video call option, a sharing option, and/or an option for displaying information associated with the shared content session). In some embodiments, the control user interface for controlling the one or more settings of the communication session includes information associated with the communication session, information associated with the shared content session, one or more selectable communication session function options that, when selected, cause the computer system to perform respective functions associated with the communication session, and/or one or more selectable shared content session function options that, when selected, cause the computer system to perform respective functions associated with the shared content session.
While the computer system (e.g., 600A) is in a communication session with an external computer system (e.g., 600B), the computer system detects (904) a set of one or more inputs (e.g., 605-15, 605-28, and/or 605-43) directed to a control user interface (e.g., 615A) via one or more input devices (e.g., 600-1A) (e.g., a touch-sensitive surface, a keyboard, a mouse, a touch pad, one or more optical sensors for detecting gestures, one or more capacitive sensors for detecting hover inputs, and/or an accelerometer/gyroscope/inertial measurement unit), wherein the set of one or more inputs includes a selection of a first control option (e.g., 615-6A).
While the computer system (e.g., 600A) is in a communication session with an external computer system (e.g., 600B) and in response to detecting a selection (e.g., 605-15, 605-28, and/or 605-43) of a first control option (e.g., 615-6A), the computer system displays (906) representations (e.g., 660-1, 660-2, 660-3, and/or 660-4) (e.g., graphical elements, icons, text, and/or affordances) of one or more applications available (e.g., downloaded, stored, and/or installed) on the computer system that are configured to provide respective content (e.g., a program, movie, video, song, album, game, interactive experience, or other shareable content) that can be played (e.g., queued for playback and/or added immediately) as synchronized content (e.g., content whose output is synchronized at the computer system and/or the external computer system) during the communication session. In response to detecting selection of the first control option, displaying representations of one or more applications available on the computer system configured to provide content that can be played as synchronized content during the communication session enables the computer system to automatically provide a set of applications supporting playback of the synchronized content to the user without requiring additional input to filter the applications or to determine such applications through a trial-and-error process. In some embodiments, the computer system foregoes displaying representations of applications that are not configured to provide content that can be played as synchronized content during a communication session (e.g., in a shared content session and/or a real-time communication session). In some embodiments, among applications available on a computer system, some applications are configured to provide content that can be played in a shared content session, and some applications are not configured to provide content that can be played in a shared content session. In some embodiments, the one or more applications configured to provide content that can be played as synchronized content are a subset of applications available at the computer system, wherein the applications in the subset correspond to applications that can stream content such as synchronized content. In some embodiments, the representations of the one or more applications include representations of one or more applications associated with a user of the computer system (e.g., applications associated with a user account of the user or applications previously used by the user, even if such applications are not currently downloaded onto the computer system).
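The filtering behavior described above can be illustrated with a short, hypothetical Swift sketch; the patent does not name an API, so the types and the capability flag below are assumptions:

```swift
import Foundation

// Hypothetical description of an installed application.
struct InstalledApp {
    let name: String
    /// Whether the app declares that its content can be played as
    /// synchronized content during a communication session.
    let supportsSynchronizedPlayback: Bool
}

/// Returns only the apps whose content can be played as synchronized content,
/// forgoing representations of apps that are not so configured.
func appsForSynchronizedContent(_ installed: [InstalledApp]) -> [InstalledApp] {
    installed.filter(\.supportsSynchronizedPlayback)
}
```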
In some embodiments, in response to detecting selection (e.g., 605-15, 605-28, and/or 605-43) of the first control option (e.g., 615-6A), the computer system (e.g., 600A) displays, via one or more display generating components (e.g., 600-1A), a screen sharing option (e.g., 652) (e.g., a selectable graphical element, icon, text, and/or affordance) selectable to initiate a process for selecting screen sharing content (e.g., 690) (e.g., a screen and/or an application interface displayed by the computer system) for the communication session (e.g., the communication session is a shared content session, and the screen sharing content is selected for sharing in the shared content session). In response to detecting selection of the first control option, displaying a screen sharing option selectable to initiate a process for selecting screen sharing content for the communication session causes the computer system to automatically display a control for sharing display content of a screen of the computer system in the communication session. In some implementations, the screen sharing option is a second control option (e.g., 615-6A) in the control user interface (e.g., 615A).
In some embodiments, in response to detecting selection (e.g., 605-15, 605-28, and/or 605-43) of the first control option (e.g., 615-6A), the computer system (e.g., 600A) displays, via the one or more display generating components (e.g., 600-1A), a setting option (e.g., a selectable graphical element, icon, text, and/or affordance) for displaying setting controls for an application associated with the communication session (e.g., controls for enabling, disabling, and/or modifying one or more settings of an application configured to provide content for the shared content session and/or of an application providing or enabling one or more features of the communication session (e.g., real-time audio, real-time video, and/or messaging)). Displaying a setting option selectable to display setting controls for an application associated with the communication session in response to detecting selection of the first control option causes the computer system to automatically display controls for enabling, disabling, and/or modifying one or more settings of the communication session. In some embodiments, the setting controls are applied globally to the shared content session at the computer system. In some embodiments, the setting controls are applied to a particular application operating at the computer system and configured to provide content for the shared content session. In some embodiments, the setting options include an automatic playback option 652. In some embodiments, the setting options include a manual playback option 654. In some embodiments, the setting options include setting options other than an automatic playback option or a manual playback option.
In some embodiments, displaying representations (e.g., 660-1, 660-2, 660-3, and/or 660-4) of one or more applications available on a computer system (e.g., 600A) configured to provide content that can be played as synchronized content during a communication session includes displaying a list (e.g., 660) of applications (e.g., application icons, affordances, and/or graphical elements) arranged (e.g., organized, categorized, and/or filtered) based on usage criteria (e.g., criteria based on usage of respective applications in the application list, such as recency of use and/or frequency of use (e.g., more recently used or more frequently used applications are displayed higher or earlier in the list than less recently used or less frequently used applications)). Displaying the list of applications arranged based on the usage criteria causes the computer system to automatically organize and display relevant applications to a user of the computer system based on usage of the corresponding applications. In some embodiments, the computer system displays a first application icon (e.g., 660-1) at a first location in the list when the first application was used more recently (or more frequently) than the second application and the third application, displays a second application icon (e.g., 660-2) at a second location in the list when the second application was used more recently (or more frequently) than the third application but less recently (or less frequently) than the first application, and displays a third application icon (e.g., 660-3) at a third location in the list when the third application was used less recently (or less frequently) than the first application and the second application.
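One plausible reading of the usage criteria above, sketched in Swift (the model and the tie-breaking rule are assumptions, not the patent's definition):

```swift
import Foundation

struct AppUsage {
    let name: String
    let lastUsed: Date
    let launchCount: Int
}

/// Orders the list so that more recently used apps appear earlier,
/// breaking ties by frequency of use.
func orderedByUsage(_ apps: [AppUsage]) -> [AppUsage] {
    apps.sorted {
        if $0.lastUsed != $1.lastUsed { return $0.lastUsed > $1.lastUsed }
        return $0.launchCount > $1.launchCount
    }
}
```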
In some embodiments, upon displaying representations (e.g., 660-1, 660-2, 660-3, and/or 660-4) of one or more applications available on a computer system (e.g., 600A) configured to provide content that is capable of being played as synchronized content during a communication session, the computer system detects, via one or more input devices (e.g., 600-1A), selection (e.g., 605-21) (e.g., selection of an icon, affordance, text, and/or graphical element corresponding to the first application) of a first application (e.g., 660-1) of the one or more applications. In some embodiments, in response to detecting the selection of the first application of the one or more applications, the computer system displays a user interface (e.g., 670) for the first application via the one or more display generating components. Displaying a user interface for the first application in response to detecting the selection of the first application enables the computer system to switch to the user interface for the first application without displaying additional controls. In some embodiments, the computer system launches and/or activates the first application as part of displaying the user interface for the first application.
In some implementations, the computer system (e.g., 600A) displaying the user interface (e.g., 670) for the first application (e.g., 660-1) includes displaying the user interface for the first application without initiating playback of first content (e.g., a football program) (e.g., a song, music, program, movie, game, browsing experience, or other interactive experience) associated with the first application as synchronized content during the communication session (e.g., selecting the first application without sharing the first content). In some embodiments, the computer system detects, via one or more input devices (e.g., 600-1A), selection (e.g., 605-22 and/or 605-23) of second content (e.g., 674) (e.g., first content or content other than first content) for playback (e.g., at the computer system and/or one or more external computer systems). In some embodiments, in response to detecting selection of the second content for playback (e.g., at the external computer system or at the computer system) (in some embodiments, and in accordance with a determination that the second content is capable of playing as synchronized content during the communication session), the computer system initiates playback of the second content (e.g., football programming) as synchronized content during the communication session (e.g., as shown in fig. 6R). Initiating playback of the second content as synchronized content during the communication session in response to detecting selection of the second content for playback causes the computer system to automatically play the second content when the second content is selected for playback in the communication session and is capable of being played as synchronized content. In some implementations, an application can be selected without initiating playback of content associated with the application as synchronized content for a communication session. However, when content that can be played as synchronized content is selected for playback, the content is shared as synchronized content with participants of a communication session (e.g., a shared content session).
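The distinction above, namely that opening an app does not start sharing while selecting eligible content does, can be sketched as follows (hypothetical names; not an Apple API):

```swift
import Foundation

struct ContentItem {
    let title: String
    let canPlayAsSynchronizedContent: Bool
}

enum PlaybackTarget {
    case localOnly      // played only at this device
    case synchronized   // output synchronized across session participants
}

/// Selecting an application merely opens it; only selecting content that is
/// eligible for synchronized playback initiates synchronized output.
func playbackTarget(for item: ContentItem, inCommunicationSession: Bool) -> PlaybackTarget {
    (inCommunicationSession && item.canPlayAsSynchronizedContent) ? .synchronized : .localOnly
}
```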
In some embodiments, in response to detecting selection (e.g., 605-15, 605-28, and/or 605-43) of the first control option (e.g., 615-6A), the computer system (e.g., 600A) displays, via the one or more display generating components (e.g., 600-1A), a set of one or more playback options (e.g., 645 and/or 656) (e.g., a set of one or more selectable graphical elements, icons, text, and/or affordances) selectable (e.g., via one or more inputs) to set (e.g., enable, disable, and/or modify) an automatic playback setting (e.g., controls for enabling, disabling, and/or modifying automatic playback of synchronized content of the communication session) for the synchronized content of the communication session. In some embodiments, while displaying the set of one or more playback options, the computer system detects, via the one or more input devices, a set of one or more inputs that includes a selection of one of the playback options. In some embodiments, in response to detecting the set of one or more inputs that includes selection of one of the playback options and in accordance with a determination that the selected playback option is a first playback option (e.g., automatic playback option 652), the computer system enables a mode in which synchronized content (e.g., future synchronized content) is automatically output at the computer system during the communication session (e.g., no manual confirmation of playback is required). In some embodiments, in response to detecting the set of one or more inputs that includes selection of one of the playback options and in accordance with a determination that the selected playback option is a second playback option (e.g., 654) (e.g., an "ask next time" option), the computer system enables a mode in which the synchronized content (e.g., future synchronized content) is not automatically output at the computer system (e.g., manual confirmation is required to initiate playback of the synchronized content). Displaying a set of one or more playback options that can be selected to set an automatic playback setting for synchronized content of a communication session reduces the number of inputs required to set the automatic playback setting for the synchronized content of the communication session.
In some embodiments, in response to detecting selection (e.g., 605-15, 605-28, and/or 605-43) of the first control option (e.g., 615-6A), the computer system (e.g., 600A) displays (e.g., concurrently with the representations of one or more applications available on the computer system configured to provide content capable of being played as synchronized content during the communication session), via the one or more display generating components (e.g., 600-1A), an application store option (e.g., 662) (e.g., a selectable graphical element, icon, text, and/or affordance). In some embodiments, upon displaying the application store option, the computer system detects a set of one or more inputs corresponding to a selection (e.g., 605-17) of the application store option via one or more input devices. In some embodiments, in response to detecting the set of one or more inputs corresponding to selection of the application store option, the computer system displays, via the one or more display generating components, a user interface (e.g., 664) (e.g., an application store interface) providing the ability to obtain (e.g., download) an application (e.g., an application configured to provide content that can be played as synchronized content during a communication session). Displaying a user interface that provides the ability to obtain an application in response to detecting a selection of the application store option reduces the amount of input required to access a user interface for obtaining an application at the computer system.
In some implementations, the user interface (e.g., 664) providing the ability to obtain applications includes a list (e.g., 666) of one or more applications (e.g., 666-1, 666-2, and/or 666-3) configured to provide content that can be played as synchronized content during the communication session (e.g., applications not configured to provide content that can be played as synchronized content during the communication session are not displayed). Displaying a user interface that includes a list of one or more applications configured to provide content that can be played as synchronized content during a communication session reduces input by providing a filtered list of applications configured to provide content that can be played as synchronized content without additional user input.
In some embodiments, upon displaying representations (e.g., 660-1, 660-2, 660-3, and/or 660-4) of one or more applications available on a computer system (e.g., 600A) configured to provide content that can be played as synchronized content during a communication session, the computer system detects a selection (e.g., 605-21) of a respective application (e.g., 660-1) of the one or more applications available on the computer system (e.g., selection of an icon, affordance, text, and/or graphical element corresponding to the respective application). In some embodiments, in response to detecting a selection of a respective application, the computer system displays a user interface (e.g., 670) of the respective application via one or more display generating components (e.g., 600-1A), wherein the user interface of the respective application includes an indication (e.g., 672 and/or 674) (e.g., text, icon, affordance, highlighting, bold, color, and/or graphical element) of the respective content that is provided by the respective application as synchronized content during the communication session (e.g., the respective content is associated with an indicator that indicates that the content is capable of being played as synchronized content during the communication session). Displaying a user interface for the respective application (including an indication of the respective content provided by the respective application that is capable of being played as synchronized content during the communication session) provides feedback regarding the state of the computer system (e.g., a state in which the computer system is capable of playing the respective content as synchronized content during the communication session). In some embodiments, the computer system launches and/or activates the respective application when the user interface for the respective application is displayed. In some embodiments, the computer system does not display an indication of content that cannot be played as synchronized content during the communication session.
In some embodiments, in response to detecting selection (e.g., 605-15, 605-28, and/or 605-43) of the first control option (e.g., 615-6A), the computer system (e.g., 600A) displays (e.g., concurrently with the representations of one or more applications available on the computer system configured to provide content that can be played as synchronized content during the communication session) an indication (e.g., text, an icon, an affordance, and/or a graphical element) that the one or more applications are configured to provide content that can be played as synchronized content (e.g., a notification, icon, graphical element, and/or text indicating that the application can play synchronized content) (e.g., "Apps for Group Play" in area 658). In response to detecting selection of the first control option, displaying an indication that the one or more applications are configured to provide content that is playable as synchronized content provides feedback regarding a state of the computer system (e.g., a state in which the computer system is able to play content from the one or more applications as synchronized content).
In some embodiments, displaying, by a computer system (e.g., 600A), a representation (e.g., 660-1, 660-2, 660-3, and/or 660-4) of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during a communication session includes displaying application icons (e.g., 660-1, 660-2, 660-3, and/or 660-4) corresponding to the one or more applications (e.g., affordances and/or selectable graphical elements; a list of application icons). Displaying representations of one or more applications as application icons corresponding to the one or more applications provides feedback indicating that the respective application icons may be selected to launch the corresponding applications. In some embodiments, the computer system detects a selection of an application icon and, in response, launches an application corresponding to the selected application icon, including displaying a user interface for the application corresponding to the selected application icon.
In some embodiments, displaying representations (e.g., 660-1, 660-2, 660-3, and/or 660-4) of one or more applications available on a computer system (e.g., 600A) configured to provide content that can be played as synchronized content during a communication session includes displaying a scrollable list (e.g., 660) of the one or more applications (e.g., application icons, affordances, and/or graphical elements that move (scroll) in response to an input such as a swipe or scroll gesture). In some implementations, the computer system detects an input (e.g., 605-16) (e.g., a swipe or scroll gesture) and, in response, updates the displayed application list. For example, a computer system scrolls a list of applications to display a different application while the previously displayed application is scrolled off the screen.
In some embodiments, displaying representations of one or more applications available on a computer system (e.g., 600A) configured to provide content that can be played as synchronized content during a communication session includes displaying a list (e.g., 660 and/or 666) of one or more applications (e.g., application icons, affordances, and/or graphical elements) that visually emphasizes one or more applications selected based on participants of the communication session over other applications (e.g., based on criteria of the application previously used with the participants, permissions and/or rights of user accounts associated with the participants of the communication session). Displaying a list of one or more applications that visually emphasizes the one or more applications selected based on the participant of the communication session but not the other applications causes the computer system to automatically organize and display relevant applications to a user of the computer system based on the permissions and/or rights of the participant of the communication session. In some embodiments, visual emphasis includes highlighting or marking/giving a badge. In some implementations, the visual emphasis includes an order in which the application list is arranged (e.g., organizing, classifying, and/or filtering to increase the relative emphasis of applications that may be used while in a communication session with a participant in the communication session). In some embodiments, the computer system displays a list of one or more applications arranged in a first order in accordance with determining that the participant of the communication session is a first group of participants. In some embodiments, the computer system displays a list of one or more applications arranged in a second order different from the first order in accordance with a determination that the participant of the communication session is a second group of participants different from the first group of participants.
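The participant-based emphasis described above might be implemented as a simple relevance ordering; the scoring below is an assumption for illustration only:

```swift
import Foundation

struct Participant { let id: String }

struct AppEntry {
    let name: String
    /// Participants who previously used, or are entitled to use, this app.
    let associatedParticipantIDs: Set<String>
}

/// Sorts apps so those most relevant to the current participants are listed
/// first (one form of visual emphasis); different participant groups can
/// therefore yield different orderings, as the text above notes.
func emphasize(_ apps: [AppEntry], for participants: [Participant]) -> [AppEntry] {
    let current = Set(participants.map(\.id))
    return apps.sorted {
        current.intersection($0.associatedParticipantIDs).count >
        current.intersection($1.associatedParticipantIDs).count
    }
}
```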
It should be noted that the details of the process described above with respect to method 900 (e.g., fig. 9) also apply in a similar manner to the methods described above and/or below. For example, methods 700, 800, and 1100 optionally include one or more of the features of the various methods described above with reference to method 900. For example, any of the aspects discussed with respect to method 900 for managing a shared content session may be applied to a shared content session described with respect to any of methods 700, 800, and/or 1100. For the sake of brevity, these details are not repeated.
Fig. 10A-10N illustrate exemplary user interfaces for managing transfer of a real-time communication session, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 11.
Fig. 10A depicts an environment 1001 including Jane 1002 (also referred to herein as a user), headset 1003, Jane's device 600B (also referred to herein as Jane's phone), and computer system 1000 (also referred to herein as "Jane's tablet"). Fig. 10A depicts Jane's device 600B and computer system 1000 in environment 1001, and also depicts additional, more detailed views of Jane's device 600B and computer system 1000. Generally, environment 1001 is a schematic depiction of the environment of Jane 1002, Jane's device 600B, and computer system 1000, and is shown in the figures when useful for understanding and/or illustrating various aspects of the techniques described herein. In the embodiments provided herein, one or more tablet computers, laptops, and telephones are used to depict these techniques; however, other devices may be used, such as desktop computers and/or other computer systems or devices. For example, computer system 1000 may be a laptop computer instead of a tablet computer. As another example, Jane's device 600B may be a tablet computer instead of a phone.
In the embodiment shown in FIGS. 10A-10N, Jane's device 600B is a computer system that includes a display 600-1B, one or more cameras 600-2B, one or more microphones 600-3B, and one or more speakers 600-4B, as described above. Computer system 1000 includes a display 1000-1, one or more cameras 1000-2, one or more microphones 1000-3, and one or more speakers. For example, computer system 1000 and Jane's device 600B include one or more elements of devices 100, 300, and/or 500, such as a speaker, microphone, memory, and a processor. In some embodiments, Jane is shown wearing headset 1003, which is connected to Jane's device 600B or computer system 1000 via a wired or wireless connection. Headset 1003 includes speakers and is used to provide audio output for a real-time communication session at Jane's device 600B or computer system 1000.
In fig. 10A, Jane 1002 is using Jane's device 600B to participate in a video call with the backpack team, as shown by video call interface 1010 displayed on Jane's device 600B. Video call interface 1010 is a user interface for a video call application at Jane's device 600B. In the implementation depicted in fig. 10A, video call interface 1010 includes video tiles 1012 and 1014 and a self-view 1015. Video tile 1012 shows a video feed from Ryan's device, video tile 1014 shows a video feed from John's device, and self-view 1015 shows a video feed from Jane's device 600B captured using camera 600-2B. Jane's device 600B also displays camera pill 1020 with a green color to indicate that the video call is active at Jane's device 600B. In some embodiments, camera pill 1020 can be selected to display controls for the video call, as discussed in more detail below. Jane's headset 1003 is connected to Jane's device 600B, and audio from the video call is output using headset 1003. While Jane is on the video call, she stands in environment 1001 holding device 600B outside of boundary 1005. Boundary 1005 represents a threshold distance (e.g., based on physical distance and/or signal strength) from computer system 1000 that, when the criteria are met, triggers an option to transfer a real-time communication session (such as an active video call) between Jane's device 600B and computer system 1000. Because Jane's device 600B is outside of boundary 1005, the option to transfer the video call from Jane's device 600B to computer system 1000 is not available, computer system 1000 is in a sleep and/or locked state, and display 1000-1 is inactive, as shown in fig. 10A.
In FIG. 10B, Jane has moved closer to computer system 1000 and is standing in environment 1001 with device 600B positioned within boundary 1005. When Jane's device 600B is within boundary 1005, computer system 1000 detects the presence of device 600B and, if the criteria are met, enables the option to transfer the video call from device 600B to computer system 1000. In some embodiments, the option to transfer the video call is enabled by device 600B, computer system 1000, and/or a server. In some embodiments, the criteria include at least the following: 1) device 600B is within a threshold distance (based on physical distance and/or signal strength) from computer system 1000 (e.g., within boundary 1005), and 2) an active real-time communication session is ongoing at device 600B. In some embodiments, the criteria also include additional criteria, such as a criterion that is met when both device 600B and computer system 1000 are logged into the same user account (e.g., Jane's user account) and a criterion based on settings for enabling the transfer feature at the respective devices. If the criteria are not met, computer system 1000 does not display notification 1025. For example, if Jane's device 600B is within boundary 1005 but is not engaged in an active video call or audio call, computer system 1000 will not display notification 1025 and will optionally remain in the inactive state shown in fig. 10A.
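The criteria enumerated above can be summarized in a short Swift sketch; the structure and field names are hypothetical, and the distance input stands in for either physical distance or signal strength:

```swift
import Foundation

// Hypothetical inputs to the transfer-eligibility check.
struct TransferContext {
    let distanceToTarget: Double      // e.g., meters, or derived from signal strength
    let thresholdDistance: Double     // the radius represented by boundary 1005
    let hasActiveRealTimeSession: Bool
    let sameUserAccount: Bool         // both devices logged into the same account
    let transferFeatureEnabled: Bool  // per-device setting
}

/// Returns true when the option to transfer the call should be offered,
/// i.e., when all of the criteria described above are met.
func shouldOfferTransfer(_ ctx: TransferContext) -> Bool {
    ctx.distanceToTarget <= ctx.thresholdDistance
        && ctx.hasActiveRealTimeSession
        && ctx.sameUserAccount
        && ctx.transferFeatureEnabled
}
```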
In fig. 10B, the criteria are met because Jane's device 600B is within boundary 1005, the video call is active at device 600B, and both device 600B and computer system 1000 are logged into Jane's user account. Thus, the option to transfer the video call from Jane's device 600B to computer system 1000 is available. When the option to transfer the call becomes available, Jane's device 600B changes the appearance of camera pill 1020 (e.g., shading the camera pill to indicate that the option is available, as shown in fig. 10B) and computer system 1000 displays notification 1025. When computer system 1000 detects that device 600B is present within boundary 1005 and determines that the criteria are met, computer system 1000 activates display 1000-1 and displays lock screen interface 1024 with notification 1025. Because computer system 1000 is still in the locked state, as indicated by lock icon 1026, the computer system displays notification 1025 with content notifying Jane of the detected video call. In particular, text 1025-1 indicates that a video call is active nearby, and indicator 1025-2 represents a video call application that can be used to provide the video call. In the embodiment depicted in fig. 10B, computer system 1000 displays notification 1025 on lock screen interface 1024 because the computer system is in a locked state when the criteria are met. However, computer system 1000 can display notification 1025 in other ways, such as, for example, as a notification (e.g., similar to notifications 1048 and/or 1062, described below) or a banner displayed on an application user interface or home screen user interface when computer system 1000 is unlocked and the criteria are met.
In fig. 10C, computer system 1000 is unlocked (e.g., Jane uses a password, biometric input, and/or other means to unlock the device), as indicated by unlock icon 1028, and computer system 1000 updates notification 1025 to include more detailed information about the option to transfer the video call from Jane's device 600B to computer system 1000. For example, as shown in FIG. 10C, computer system 1000 displays text 1025-3 informing Jane that a video call with the backpack team is currently ongoing on Jane's phone and that Jane can tap notification 1025 to transfer the video call to computer system 1000. Notification 1025 also includes an indicator 1025-4 that includes a logo or avatar representing the backpack team and an icon indicating that the video call application is being used to provide the video call with the backpack team. Computer system 1000 detects input 1030-1 selecting notification 1025 and, in response, initiates transfer of the video call from Jane's device 600B to computer system 1000, as depicted in figs. 10D and 10E.
As part of initiating the transfer of the video call to computer system 1000, computer system 1000 displays video call interface 1032, as shown in fig. 10D. Video call interface 1032 is a user interface for a video call application that provides the video call at computer system 1000. In some embodiments, the video call application at computer system 1000 is the same as the video call application used at Jane's device 600B. Before the video call is transferred to computer system 1000, video call interface 1032 includes a camera preview 1034 showing the video feed from camera 1000-2 of computer system 1000. Jane is centered in the field of view of camera 1000-2, as shown by camera preview 1034, and is partially outside the field of view of phone camera 600-2B, as shown in self-view 1015 at Jane's device 600B. Video call interface 1032 also includes a video call control area 1036 that provides information associated with the video call and includes control options for controlling the operation, parameters, and/or settings of the video call. Prior to transferring the video call to computer system 1000, computer system 1000 displays control area 1036 with countdown 1038, which is automatically updated to indicate the number of seconds remaining until the video call is transferred to computer system 1000. In some embodiments, countdown 1038 can be selected to cancel the process for transferring the video call from Jane's device 600B to computer system 1000. In some implementations, countdown 1038 can be selected to immediately transfer the video call to computer system 1000 (without waiting for the countdown to complete).
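Countdown 1038's behavior, automatic transfer on expiry with optional cancellation or immediate completion, can be sketched as follows (a hypothetical class; note that Timer requires a running run loop):

```swift
import Foundation

/// Sketch of a transfer countdown: transfers automatically when it expires,
/// and can be cancelled or completed immediately, as described above.
final class TransferCountdown {
    private var remaining: Int
    private var timer: Timer?
    private let onTransfer: () -> Void

    init(seconds: Int, onTransfer: @escaping () -> Void) {
        self.remaining = seconds
        self.onTransfer = onTransfer
    }

    func start() {
        timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { [weak self] t in
            guard let self = self else { t.invalidate(); return }
            self.remaining -= 1
            if self.remaining <= 0 { self.finish() }  // transfer at zero
        }
    }

    func cancel() {           // abort the transfer process
        timer?.invalidate()
        timer = nil
    }

    func transferNow() {      // skip the remaining countdown
        finish()
    }

    private func finish() {
        timer?.invalidate()
        timer = nil
        onTransfer()
    }
}
```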
Video call control area 1036 includes a status area 1036-1 that includes status information associated with the video call and, in some embodiments, can be selected to display additional information regarding the video call. As depicted in fig. 10D, status area 1036-1 currently indicates that two members of the backpack team (other than Jane) are participating in the video call. Control region 1036 also includes various options that can be selected to control the operation, parameters, and/or settings of the video call. For example, in some embodiments, message option 1036-2 can be selected to view a message conversation between participants of the backpack team. In some implementations, speaker option 1036-3 can be selected to enable or disable audio output for the video call (e.g., audio output at a speaker of headset 1003 or computer system 1000). In some implementations, speaker option 1036-3 can be selected to select a different audio output device (e.g., a headset or a tablet speaker) for the video call. In some implementations, mic option 1036-4 can be selected to enable or disable a microphone at computer system 1000 to mute or un-mute Jane's audio input for the video call. In some implementations, camera option 1036-5 can be selected to enable or disable camera 1000-2, thereby enabling or disabling the video feed provided by camera 1000-2 for the video call. In some implementations, camera option 1036-5 can be selected to select a different camera as the video source of the video call. In some implementations, sharing option 1036-6 can be selected to access various options and settings for sharing content.
In some embodiments, video call interface 1032 includes one or more camera settings options 1040 that are selectable to enable or disable one or more camera and/or video settings using camera 1000-2 of computer system 1000. For example, camera option 1040-1 is a control for enabling or disabling camera lighting effects, camera option 1040-2 is a control for enabling or disabling image blur settings, camera option 1040-3 is a control for enabling or disabling object tracking settings, and camera option 1040-4 is a control for selecting a camera of computer system 1000 for the video feed. In some implementations, one or more of camera settings options 1040 are displayed in video call control area 1036. In some implementations, the controls provided in camera settings options 1040 are based on the capabilities of the camera (e.g., camera 1000-2) at computer system 1000. In some implementations, camera settings options 1040 include other controls, such as controls for enabling a view of a surface detected in the field of view of the camera (e.g., camera 1000-2). In some implementations, one or more of camera settings options 1040 can be selected to modify camera preview 1034 so that Jane can apply a desired camera or video effect before the video call is transferred to computer system 1000. In some implementations, camera settings options 1040 can be selected after the video call has been transferred to computer system 1000.
Fig. 10D also depicts John's computer system 1050, which represents a device of a participant of the video call. In FIG. 10D, John's computer system 1050 displays, via display 1050-1, video call interface 1042, which is a user interface for a video call application at John's computer system and includes video tiles 1043-1 and 1043-2, a self-view 1044, and a video call control region 1046. Video tile 1043-1 shows video feed 1043-1a of Ryan, and video tile 1043-2 shows video feed 1043-2a of Jane, which is the video feed provided from camera 600-2B of Jane's device 600B. Self-view 1044 shows a video feed of John provided by camera 1050-2 of John's computer system 1050. Video call interface 1042 is similar to video call interface 1032, and video call control region 1046 is similar to video call control region 1036.
At the end of countdown 1038, the video call is automatically transferred from Jane's device 600B to computer system 1000, as shown in FIG. 10E. When the video call is transferred from Jane's device 600B to computer system 1000, the headset 1003 is automatically disconnected from device 600B and connected to computer system 1000 so that the audio connection transitions seamlessly with the video call without Jane having to manually connect the headset at computer system 1000. Jane's device 600B stops displaying video call interface 1010 and displays home screen 1047 with notification 1048 informing Jane that the video call has been transferred to computer system 1000. Notification 1048 includes an option 1048-1 that may be selected to transfer the video call back to Jane's device 600B. In some embodiments, jane's device 600B automatically stops displaying notifications 1048 after a predetermined amount of time or in response to an input to dismiss the notification (e.g., a tap or swipe gesture). In some embodiments, if Jane's device 600B is moved outside boundary 1005 (thereby violating the criteria for transferring a real-time communication session), then the device stops displaying notifications 1048 (and similar notifications).
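The dismissal behavior of notification 1048, timeout, explicit dismissal, or leaving boundary 1005, can be condensed into one check (a hypothetical model, for illustration):

```swift
import Foundation

/// Sketch: the transfer-back notification is hidden after a timeout, on an
/// explicit dismissal gesture, or when the device leaves the boundary so the
/// transfer criteria no longer hold.
struct TransferBackNotification {
    private(set) var isVisible = true

    mutating func update(elapsed: TimeInterval,
                         timeout: TimeInterval,
                         userDismissed: Bool,
                         withinBoundary: Bool) {
        if elapsed >= timeout || userDismissed || !withinBoundary {
            isVisible = false
        }
    }
}
```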
When the video call is transferred to computer system 1000, computer system 1000 displays video call interface 1032 with video tiles 1033-1 and 1033-2 and a self-view 1035. Video tile 1033-1 shows video feed 1033-1a of Ryan, and video tile 1033-2 shows video feed 1033-2a of John. Self-view 1035 shows a video feed of Jane provided by camera 1000-2 of computer system 1000. Countdown 1038 is replaced with a leave option 1036-7 that, in some embodiments, can be selected to cause computer system 1000 to leave the video call, optionally without terminating the video call for the other participants of the call. In the embodiment depicted in figs. 10D and 10E, the video call remains active at Jane's device 600B until it is transferred to computer system 1000. However, in some embodiments, the video call is terminated at Jane's device 600B prior to being established at computer system 1000. In some embodiments, the video call is established at computer system 1000 while the original video call is still active at Jane's device 600B.
In the embodiment depicted in figs. 10D and 10E, the transfer of the video call is represented at the participant's device (e.g., John's computer system 1050) by replacing the video feed in Jane's video tile, without displaying an additional video tile or removing a video tile displayed prior to the transfer. For example, as shown in fig. 10E, John's computer system 1050 continues to display video call interface 1042 with Jane's video tile 1043-2. However, rather than showing video feed 1043-2a from Jane's device 600B, Jane's video tile 1043-2 now includes video feed 1043-2b from Jane's computer system 1000. In some embodiments, the transition is displayed differently at the participant's device depending on whether certain criteria are met. For example, when the transfer of the video call is authenticated (e.g., the video call is being transferred between two devices belonging to Jane, both devices being logged into Jane's account or otherwise authorized for use by Jane), the transfer is depicted by replacing the video feed in the video tile, as shown in figs. 10D and 10E. In some implementations, authentication of the transfer can be more critical in cases where a participant is on the video call but has not enabled their video feed, such that other participants are unable to visually verify whether the transferred video feed is associated with the same person. However, if the criteria are not met, the transfer of the video call is represented at the participant's device by displaying a new tile with a new video feed. Examples of such transfers are depicted in figs. 10H and 10I and described in more detail below.
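The two depictions of a transfer, replacing the feed in the existing tile versus adding a new tile, reduce to a choice keyed on whether the transfer is authenticated. A sketch follows (the authentication test itself is out of scope here and assumed):

```swift
import Foundation

enum TileTransition {
    case replaceFeedInExistingTile  // authenticated same-user transfer
    case addNewTile                 // not authenticated: show a distinct feed
}

/// Chooses how a participant's device depicts the transfer, following the
/// criteria described above.
func tileTransition(isAuthenticatedTransfer: Bool) -> TileTransition {
    isAuthenticatedTransfer ? .replaceFeedInExistingTile : .addNewTile
}
```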
Figs. 10F-10H depict user interfaces of an embodiment in which Jane transfers the video call back to Jane's device 600B. In fig. 10F, Jane selects camera pill 1020 via input 1030-2 while the video call is still being provided at computer system 1000. Camera pill 1020 is displayed in status area 1049 of Jane's device 600B, which provides status information for device 600B such as, for example, battery life, connection status information, and signal strength. In response to detecting input 1030-2, Jane's device 600B redisplays notification 1048, as shown in fig. 10G. Jane then selects option 1048-1 via input 1030-3, which initiates transfer of the video call from computer system 1000 back to Jane's device 600B.
In fig. 10H, Jane's device 600B displays video call interface 1010 with a camera preview 1054 showing the video feed from camera 600-2B of Jane's device before the video call is transferred from computer system 1000 to Jane's device 600B. The interface depicted on Jane's device 600B in fig. 10H is similar to the interface depicted on computer system 1000 in fig. 10D. For example, Jane's device 600B displays a video call control area 1056 that is similar to control area 1036 and includes a countdown 1058 (similar to countdown 1038). In some embodiments, countdown 1058 can be selected to cancel the process for transferring the video call from computer system 1000 to Jane's device 600B. In some embodiments, countdown 1058 can be selected to immediately transfer the video call to Jane's device 600B (without waiting for the countdown to complete). Jane's device 600B also displays camera settings options 1060, which are similar to camera settings options 1040 and can be selected to enable or disable one or more camera and/or video settings using camera 600-2B of Jane's device 600B. For example, camera option 1060-1 is a control for enabling or disabling camera lighting effects, camera option 1060-2 is a control for enabling or disabling image blur settings, and camera option 1060-3 is a control for selecting a camera of Jane's device 600B for the video feed. In some implementations, one or more of camera settings options 1060 are displayed in video call control area 1056. In some implementations, the controls provided in camera settings options 1060 are based on the capabilities of the camera (e.g., camera 600-2B) at Jane's device 600B. In some embodiments, camera settings options 1060 include other controls, such as controls for enabling or disabling object tracking settings and/or controls for enabling a view of a surface detected in the field of view of a camera (e.g., camera 600-2B). In some implementations, one or more of camera settings options 1060 can be selected to modify camera preview 1054 so that Jane can apply a desired camera or video effect before the video call is transferred to Jane's device 600B. In some embodiments, camera settings options 1060 can be selected after the video call has been transferred to Jane's device 600B. In some embodiments, Jane's device 600B displays control region 1056 in response to selection of camera pill 1020. In some embodiments, control region 1056 is similar to control region 615B.
At the end of countdown 1058, the video call is automatically transferred from computer system 1000 to Jane's device 600B, as shown in fig. 10I. Jane's device 600B displays video call interface 1010 with video tiles 1012 and 1014 and self-view 1015. When the video call is transferred to Jane's device 600B, headset 1003 is automatically disconnected from computer system 1000 and connected to Jane's device 600B so that the audio connection transitions seamlessly with the video call without Jane having to manually connect the headset at Jane's device 600B. Computer system 1000 stops displaying video call interface 1032 and displays notification 1062 (similar to notification 1048) notifying Jane that the video call has been transferred to Jane's device 600B. Notification 1062 includes an option 1062-1 that can be selected to transfer the video call back to computer system 1000 (similar to the selection of option 1048-1). In some implementations, computer system 1000 automatically stops displaying notification 1062 after a predetermined amount of time or in response to an input to dismiss the notification (e.g., a tap or swipe gesture). After ceasing to display video call interface 1032, the computer system displays home screen 1066 with video call application icon 1068 displayed in application dock area 1070. Computer system 1000 displays an indicator 1068-1 on video call application icon 1068 to indicate that the video call was recently provided using the video call application. Computer system 1000 also displays camera pill 1064 in status area 1065 to indicate that the video call can be transferred to computer system 1000. In some embodiments, computer system 1000 displays camera pill 1064 after notification 1062 is dismissed. In some embodiments, camera pill 1064 can be selected to display notification 1062 with option 1062-1. Status area 1065 is an area that includes status information (e.g., current time, connection status information, signal strength, and battery life) of computer system 1000.
In the implementation depicted in figs. 10H and 10I, the transfer of the video call is represented at the participant's device (e.g., John's computer system 1050) by adding a new tile with a new video feed from the device to which the video call is being transferred. For example, in fig. 10H, before the video call is transferred from computer system 1000 to Jane's device 600B, John's computer system 1050 displays Jane's video tile 1043-2, which includes video feed 1043-2b (the video feed from camera 1000-2 of computer system 1000). When the video call is transferred to Jane's device 600B in fig. 10I, John's computer system 1050 displays a new video tile 1043-3, which includes video feed 1043-3a, the video feed from camera 600-2B of Jane's device 600B. John's computer system 1050 displays the new video tile 1043-3 simultaneously with Jane's previously displayed video tile 1043-2. In some implementations, after the transfer of the video call is completed, John's computer system 1050 stops displaying Jane's previous video tile 1043-2, as shown in fig. 10J. Although the transfer of the video call is shown on John's computer system 1050 by displaying additional video tile 1043-3, the transfer could instead be shown by replacing the video feed in video tile 1043-2 (without displaying a new video tile 1043-3), similar to what is shown in figs. 10D and 10E.
Referring again to computer system 1000 in fig. 10I, the video call can be transferred back to computer system 1000 by selecting option 1062-1 or by using video call interface 1032. For example, in response to detecting input 1030-4 on video call application icon 1068, computer system 1000 displays video call interface 1032, as shown in fig. 10J. Video call interface 1032 is similar to the configuration depicted in fig. 10D, except that rather than displaying countdown 1038, the computer system displays a join option 1072 that can be selected to transfer the video call from Jane's device 600B to computer system 1000 in a manner similar to that described above with respect to fig. 10D.
Fig. 10A-10J depict an exemplary user interface for various embodiments in which the real-time communication session being transferred between Jane's device 600B and computer system 1000 is a video call. However, it should be understood that the real-time communication session may be an audio call. Fig. 10K-10N depict exemplary user interfaces of various embodiments in which an audio call is transferred between Jane's device 600B and computer system 1000. The implementations depicted in fig. 10K-10N are similar to those depicted in fig. 10A-10J, but where the real-time communication session is an audio call rather than a video call. Details are not repeated for the sake of brevity. Thus, unless explicitly indicated otherwise, aspects of audio call forwarding are similar to corresponding aspects of video call forwarding.
In fig. 10K, Jane 1002 stands in environment 1001, similar to the environment shown in fig. 10A. Jane 1002 is using Jane's device 600B to participate in an audio call with the backpack team, as shown by audio call interface 1074 displayed on Jane's device 600B. Audio call interface 1074 is a user interface for an audio call application (e.g., a telephone application) at Jane's device 600B. In the embodiment depicted in fig. 10K, audio call interface 1074 includes audio controls 1073 and information 1077 about the call, such as the names of the participants and the current duration of the call. Audio controls 1073 provide options for controlling various aspects of the audio call, such as muting the call, displaying a keypad, enabling a speaker, selecting a different audio output source, adding additional participants to the call, enabling a video channel, viewing a contact interface, and/or ending the call. Similar to the embodiment in fig. 10A, Jane's device 600B is not within boundary 1005, and therefore computer system 1000 is inactive, as shown in fig. 10K.
In FIG. 10L, jane moves Jane's device 600B to a location near computer system 1000, similar to that discussed above with respect to FIG. 10B. Because the criteria for transferring the audio call to computer system 1000 are met, computer system 1000 displays notification 1075 on lock screen interface 1024. The notification 1075 is similar to the notification 1025 and includes text 1075-1 and an indicator 1075-2. Text 1075-1 indicates that an audio call is detected in the vicinity of computer system 1000 and indicator 1075-2 represents an audio call application that may be used to provide an audio call.
In FIG. 10M, Jane unlocks computer system 1000, and computer system 1000 updates notification 1075 to include additional details regarding the audio call. For example, computer system 1000 displays text 1075-3 indicating that an audio call with the backpack team is in progress at Jane's phone and that Jane can click notification 1075 to transfer the audio call to computer system 1000. Notification 1075 also includes an indicator 1075-4 that includes a logo or avatar representing the backpack team and an icon indicating that the audio call application is being used to provide the audio call with the backpack team. Computer system 1000 detects input 1030-5 selecting notification 1075 and, in response, initiates the transfer of the audio call from Jane's device 600B to computer system 1000, as depicted in FIG. 10N.
When the audio call is transferred to computer system 1000, computer system 1000 displays audio call interface 1078, as shown in FIG. 10N. Audio call interface 1078 is a user interface for an audio call application that provides the audio call at computer system 1000. In some embodiments, the audio call application at computer system 1000 is the same as the audio call application used at Jane's device 600B. Audio call interface 1078 includes audio controls 1080 (similar to audio controls 1073) for controlling various aspects of the audio call, and information 1082 about the call (such as the name of the participant and the current duration of the call).
When notification 1075 is selected, the audio call is transferred automatically, with no countdown or other intentional delay. This is in contrast to the transfer of video calls, which includes an intentional delay so that the user can prepare the camera and video settings of the device to which the call is being transferred (as discussed above with respect to FIGS. 10D and 10H). Because no video feed is included in the audio call, the user does not need to prepare the video settings of computer system 1000, and the call is therefore transferred without intentional delay and without displaying the camera preview and related controls described above with respect to FIG. 10D.
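One way to summarize the difference between the two transfer flows is that only video transfers interpose a preview-and-countdown step. The following Swift sketch is a loose illustration of that branching; the names and the three-second figure are assumptions for the example, not details taken from this disclosure.

```swift
import Foundation

enum CallKind {
    case video
    case audioOnly
}

// Hypothetical transfer flow: video calls show a camera preview and wait out a
// countdown so the user can prepare video settings; audio calls transfer at once.
func beginTransfer(of kind: CallKind,
                   showCameraPreview: () -> Void,
                   complete: @escaping () -> Void) {
    switch kind {
    case .video:
        showCameraPreview()              // camera preview and related controls
        let countdown: TimeInterval = 3  // assumed intentional delay
        DispatchQueue.main.asyncAfter(deadline: .now() + countdown, execute: complete)
    case .audioOnly:
        complete()                       // no preview and no intentional delay
    }
}
```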
When the call is transferred to computer system 1000, Jane's device 600B displays notification 1076, which is similar to notification 1048. The notification 1076 includes an option 1076-1 that can be selected to transfer the audio call back to Jane's device 600B. Also, because the audio call does not include a video component, the audio call is transferred to Jane's device 600B without intentional delay and without displaying the camera preview and related controls described above with respect to FIG. 10H.
It should be appreciated that an audio call may be transferred between Jane's device 600B and computer system 1000 in a manner similar to that described above with respect to FIGS. 10E-10J, but without including an intentional delay or displaying an interface for the video component. For example, option 1076-1 may be selected to transfer the audio call from computer system 1000 to Jane's device 600B. In some embodiments, when the call is transferred from computer system 1000, the computer system displays a notification including an option similar to option 1076-1 for transferring the audio call back to computer system 1000. In addition, when an audio call is transferred in the manner described above, the connection of headset 1003 is transferred between the devices.
FIG. 11 is a flow chart illustrating a method for managing the transfer of a real-time communication session using a computer system, in accordance with some embodiments. Method 1100 is performed at a computer system (e.g., 100, 300, 500, 600, 1000, and/or 1050) (e.g., a smart phone, a tablet, a desktop, a laptop, and/or a head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) that is in communication with (e.g., includes and/or is connected to) one or more display generating components (e.g., 600-1, 1000-1, and/or 1050-1) (e.g., a display controller, a touch-sensitive display system, speakers, a bone conduction audio output device, a haptic output generator, a projector, a holographic display, and/or a head-mounted display system) and one or more cameras (e.g., 600-2, 1000-2, and/or 1050-2) (e.g., an infrared camera, a depth camera, a visible light camera, and/or one or more optical sensors). Some operations in method 1100 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
As described below, method 1100 provides an intuitive way for managing the transfer of a real-time communication session. The method reduces the cognitive burden on a user when managing the transfer of a real-time communication session, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to manage the transfer of a real-time communication session faster and more efficiently conserves power and increases the time between battery charges.
At method 1100, while a computer system (e.g., 1000 or 600B) (e.g., a smartphone, tablet, desktop, laptop, and/or head-mounted device (e.g., a head-mounted augmented reality and/or extended reality device)) is associated with a respective user account (e.g., the computer system is logged into and/or is being operated by a user associated with the respective user account), the computer system receives (1102) first data indicating whether a first external computer system (e.g., 600B or 1000) (e.g., different from the computer system) meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is located within a threshold distance (e.g., 1005) of the computer system (e.g., a physical or measurable distance such as 1 foot, 2 feet, 3 feet, 5 feet, 10 feet, 15 feet, or 20 feet; and/or a distance based on the strength of a wireless connection between the computer system and the first external computer system), is associated with the respective user account (e.g., the first external computer system is logged into and/or is being operated by a user associated with the respective user account), and is in a real-time communication session (e.g., a communication session in which a live audio feed and/or a live video feed is communicated) with a second external computer system (e.g., a computer system that is different from the first external computer system and the computer system; and/or a computer system that is remote, operated by a remote user, and/or logged into a user account associated with the remote user).
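Expressed procedurally, the first set of criteria is a conjunction of three checks: proximity, account match, and an active real-time session. The Swift sketch below is purely illustrative; the type names, the 20-foot default, and the idea of deriving distance from signal strength are assumptions for the example, not details from this disclosure.

```swift
import Foundation

struct ExternalDeviceState {
    var userAccountID: String
    var distanceToComputerSystem: Measurement<UnitLength>  // e.g., estimated from wireless signal strength
    var realTimeSessionPeerID: String?                     // second external computer system, if any
}

// Hypothetical model of the "first set of criteria" evaluated for the first data (1102).
func meetsFirstSetOfCriteria(
    external: ExternalDeviceState,
    respectiveUserAccountID: String,
    threshold: Measurement<UnitLength> = Measurement(value: 20, unit: .feet)
) -> Bool {
    let withinThresholdDistance =
        external.distanceToComputerSystem.converted(to: .meters).value
            <= threshold.converted(to: .meters).value
    let associatedWithSameAccount = external.userAccountID == respectiveUserAccountID
    let inRealTimeSession = external.realTimeSessionPeerID != nil
    return withinThresholdDistance && associatedWithSameAccount && inRealTimeSession
}
```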
After receiving the first data (or, optionally, in response to receiving the first data) and in accordance with a determination that the first data indicates that the first external computer system (e.g., 600B or 1000) meets the first set of criteria, the computer system (e.g., 1000 or 600B) displays (1104), via the one or more display generating components (e.g., 1000-1 or 600-1B) (e.g., a display controller, a touch-sensitive display system, speakers, a bone conduction audio output device, a tactile output generator, a projector, a holographic display, and/or a head-mounted display system), a respective user interface (e.g., 1010, 1024, 1032, 1047, or 1066) (e.g., an existing user interface, a new user interface, a graphical user interface object, a banner, a notification, a camera preview, and/or a self-view) that includes a user interface object (e.g., 1020, 1025, 1038, 1048-1, 1058, 1062-1, 1064, 1068-1, 1072, 1075, 1076, or 1076-1) (e.g., a selectable user interface object such as a graphical element, icon, text, banner, affordance, and/or notification) that is selectable (e.g., in response to a set of one or more inputs directed to the respective user interface and/or the user interface object) to initiate a process for joining (e.g., causing the computer system to join) the real-time communication session with the second external computer system (e.g., 1050) (e.g., transferring the real-time communication session to the computer system, optionally while disconnecting the first external computer system from the real-time communication session).
In some embodiments, the process for joining a real-time communication session with a second external computer system (e.g., 1050) includes launching an application at the computer system (e.g., 1000 or 600B) for joining the real-time communication session. In some embodiments, the computer system joins the real-time communication session by switching the real-time communication session from the first external computer system (e.g., 600B or 1000) to the computer system. In some embodiments, the computer system joins the real-time communication session using a new connection at the computer system (e.g., created by a server). In some embodiments, after the computer system joins the real-time communication session, the previous connection at the first external computer system is severed (e.g., by the server, the computer system, and/or the first external computer system).
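The handoff sequence described in this paragraph (new connection first, old connection severed afterward) can be sketched as follows. The `SessionServer` type and its method names are invented for illustration and are not part of the disclosure.

```swift
// Hypothetical server-mediated handoff: attach the joining device's connection
// before severing the first external computer system's, so the session never drops.
struct Connection {
    let deviceID: String
}

final class SessionServer {
    private(set) var connections: [Connection] = []

    func attach(deviceID: String) {
        connections.append(Connection(deviceID: deviceID))
    }

    func sever(deviceID: String) {
        connections.removeAll { $0.deviceID == deviceID }
    }
}

func handOff(session: SessionServer, from oldDeviceID: String, to newDeviceID: String) {
    session.attach(deviceID: newDeviceID)  // computer system joins via a new connection
    session.sever(deviceID: oldDeviceID)   // previous connection is then severed
}
```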
Displaying the respective user interface includes, in accordance with a determination that the real-time communication session includes a live video feed (e.g., 1012, 1014, and/or 1015) (e.g., the live video feed is enabled and/or the real-time communication session includes a video call), displaying (1106) a representation (e.g., 1034 or 1054) of a field of view (e.g., a self-view and/or camera preview) of the one or more cameras (e.g., 1000-2 or 600-2B) (e.g., cameras of the computer system, cameras integrated into a monitor or display generation component of the computer system, and/or cameras connected to the computer system via a wired connection) (e.g., without displaying a representation (e.g., 1015) of a field of view of a camera (e.g., 600-2B) of the first external computer system and/or without displaying representations (e.g., 1033-1 or 1033-2) of a field of view of a camera (e.g., 1050-2) of the second external computer system). Displaying a representation of the field of view of the one or more cameras in accordance with a determination that the real-time communication session includes a live video feed provides feedback regarding the status of the computer system (e.g., that a video feed will be provided using the one or more cameras), which provides improved visual feedback and improved security and/or privacy by informing the user of the transmission of video information to the remote computer system. In some embodiments, the representation of the field of view of the one or more cameras is displayed while the real-time communication session is active at the first external computer system (e.g., and before the real-time communication session is transferred to, or active at, the computer system).
In some embodiments, in response to receiving the first data and in accordance with a determination that the first data indicates that the first external computer system (e.g., 600B or 1000) is not in a real-time communication session with the second external computer system (e.g., 1050) (and thus does not meet the first set of criteria), the computer system (e.g., 1000 or 600B) forgoes displaying the respective user interface (e.g., 1010, 1024, 1032, 1047, or 1066) that includes the user interface object (e.g., 1020, 1025, 1038, 1048-1, 1058, 1062-1, 1064, 1068-1, 1072, 1075, 1076, or 1076-1) that is selectable to initiate the process for joining the real-time communication session with the second external computer system. In response to receiving the first data and in accordance with a determination that the first data indicates that the first external computer system is not in a real-time communication session with the second external computer system, forgoing display of the respective user interface that includes the selectable user interface object reduces the computational workload of the computer system by forgoing the display of content, which performs an operation when a set of conditions has been met without requiring further user input. In some embodiments, the computer system displays the respective user interface and forgoes displaying the user interface object that is selectable to initiate the process for joining the real-time communication session with the second external computer system.
In some embodiments, displaying a respective user interface including a user interface object selectable to initiate a process for joining a real-time communication session with a second external computer system (e.g., 1050) includes displaying the user interface object as a first user interface object (e.g., 1020, 1025, 1038, 1048-1, 1058, 1062-1, 1064, 1068-1, and/or 1072) (e.g., a user interface object associated with a video call application) in accordance with a determination that the real-time communication session is provided using the first application (e.g., the video call application). In some embodiments, displaying a respective user interface including a user interface object selectable to initiate a process for joining a real-time communication session with a second external computer system includes displaying the user interface object as a second user interface object (e.g., 1075, 1076, and/or 1076-1) different from the first user interface object (e.g., a user interface object associated with an audio call application) in accordance with a determination that the real-time communication session is provided using a second application different from the first application (e.g., the audio call application). In accordance with a determination that the real-time communication session is provided using the first application or the second application, displaying the user interface object as the first user interface object or the second user interface object causes the computer system to automatically display the user interface object corresponding to the application for providing the real-time communication session without user input, which performs the operation without further user input when a set of conditions has been met.
In some embodiments, displaying the respective user interface including the user interface object selectable to initiate the process for joining the real-time communication session with the second external computer system (e.g., 1050) includes, in accordance with a determination that the one or more display generating components (e.g., 1000-1) are in a first state (e.g., a low power, passive, dormant, dimmed, locked, and/or display-disabled state) (e.g., when the first data is received), displaying the user interface object as a notification (e.g., 1025 or 1075) of an event at the computer system (e.g., a banner, alert, and/or selectable graphical element notifying a user of the computer system that the first external computer system (e.g., 600B or 1000) meets the first set of criteria). Upon receiving the first data, in accordance with a determination that the one or more display generating components are in the first state, displaying the user interface object as a notification of an event at the computer system provides feedback regarding the state of the computer system (e.g., that the computer system has received first data indicating whether the first external computer system meets the first set of criteria), which provides improved visual feedback, and enables the computer system to initiate a process for joining a real-time communication session without displaying additional controls, which provides additional control options without cluttering the user interface. In some embodiments, if the computer system and/or the display generating component is in a locked state, the computer system displays the user interface object as a notification. In some embodiments, the display generating component transitions from the first state to a second state (e.g., as shown in FIG. 10B or FIG. 10L) (e.g., an active, full power, awake, unlocked, always-on, and/or display-enabled state) after receiving the first data (e.g., if the first data indicates that the first external computer system meets the first set of criteria).
In some embodiments, displaying the user interface object as a notification (e.g., 1025 or 1075) of an event at the computer system (e.g., 1000 or 600B) includes, in accordance with a determination that the first state is an unlocked state (e.g., as shown in FIG. 10C and/or FIG. 10M) (e.g., the computer system and/or the one or more display generating components are unlocked and/or enabled), displaying the notification with first information (e.g., 1025-3, 1025-4, 1075-3, and/or 1075-4) about the real-time communication session (e.g., an indication of an application used for the real-time communication session, an indication of one or more participants of the real-time communication session, an indication of instructions for joining the real-time communication session, and/or an indication of a real-time communication type). In some embodiments, displaying the user interface object as a notification of an event at the computer system (e.g., 1000 or 600B) includes, in accordance with a determination that the first state is a locked state (e.g., as shown in FIG. 10B and/or FIG. 10L) (e.g., the computer system and/or the one or more display generating components are locked and/or disabled), displaying the notification without the first information about the real-time communication session (e.g., displaying the notification with different information (e.g., 1025-1, 1025-2, 1075-1, and/or 1075-2), with a subset of the first information, and/or without the first information). Displaying the notification with first information about the real-time communication session when the computer system is in an unlocked state, and displaying the notification without the first information when the computer system is in a locked state, provides improved privacy and/or security by omitting the first information unless the computer system is unlocked. In some embodiments, after displaying the notification without the first information about the real-time communication session, the computer system transitions from the locked state to the unlocked state (e.g., the second state) and updates the notification to include the first information about the real-time communication session (e.g., as shown in FIG. 10C and/or FIG. 10M).
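This lock-state behavior amounts to a simple redaction rule: withhold the first information until the display is unlocked. A minimal sketch, with invented type and function names:

```swift
enum DisplayState {
    case locked
    case unlocked
}

struct SessionInfo {
    var participants: [String]  // e.g., ["Backpack team"]
    var appName: String         // the application providing the real-time communication session
}

// Hypothetical redaction rule: full details (e.g., 1075-3 and 1075-4) only when
// unlocked; a generic banner (e.g., 1075-1 and 1075-2) while locked.
func notificationText(for state: DisplayState, info: SessionInfo) -> String {
    switch state {
    case .unlocked:
        let names = info.participants.joined(separator: ", ")
        return "\(info.appName) call with \(names) nearby. Select to transfer it to this device."
    case .locked:
        return "A call was detected on a nearby device."
    }
}
```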
In some embodiments, the computer system (e.g., 1000 or 600B) displaying the respective user interface (e.g., 1010, 1024, or 1032) includes, in accordance with a determination that the respective user interface is a first type of user interface (e.g., 1047 or 1066) (e.g., a home screen, desktop, and/or application user interface), displaying the user interface object (e.g., 1020 or 1064) in a region (e.g., 1049 or 1065) (e.g., a status region) of the respective user interface that provides status information for the computer system (e.g., a region including graphical elements indicating status information such as battery life, signal strength, connectivity information, and/or the current time). In accordance with a determination that the respective user interface is of the first type, displaying the user interface object in a region of the respective user interface that provides status information for the computer system improves the user experience by moving the user interface object to a position that does not interfere with the user's view of the respective user interface when the respective user interface is of the first type, which provides additional control options without cluttering the user interface. In some implementations, the home screen and/or desktop is a displayed user interface (e.g., user interface 400) that includes user interface objects corresponding to respective applications. When a user interface object is activated, the computer system displays the respective application corresponding to the activated user interface object. In some embodiments, the application user interface is a user interface for a respective application operating at the computer system. In some implementations, the user interface object has a first appearance (e.g., a pill-shaped camera indicator) when displayed in the status region of the respective user interface. In some implementations, when the user interface object is displayed at a location other than the status region of the respective user interface, the user interface object has a second appearance (e.g., a notification) that is different from the first appearance. In some implementations, when the respective user interface is not a first type of user interface (e.g., it is a lock screen, a notification screen, a wake screen, and/or an unlock screen), the user interface object is displayed at a location different from the status region. In some embodiments, when the respective user interface is of the first type and the computer system is a first type of device (e.g., a smart phone or tablet), the user interface object is displayed in the status region. In some embodiments, when the respective user interface is of the first type and the computer system is a second type of device (e.g., a laptop or tablet), the user interface object is not displayed in the status region.
In some embodiments, the computer system (e.g., 1000) displaying the respective user interface includes displaying the user interface object (e.g., 1068 and/or 1068-1) in an area (e.g., 1070) (e.g., a visually distinct area) (e.g., an application dock area) of the respective user interface including a plurality of application icons (e.g., 1068) for launching the respective application in accordance with a determination that the respective user interface is a second type of user interface (e.g., 1066) (e.g., home screen, desktop, and/or application user interface). In accordance with a determination that the respective user interface is of the second type, displaying the user interface object in an area of the respective user interface that includes a plurality of application icons for launching the respective application improves the user experience by moving the user interface object to a position that does not interfere with a user's view of the respective user interface when the respective user interface is of the second type, which provides additional control options without cluttering the user interface. In some implementations, the user interface object has a first appearance (e.g., an indicator of an application icon) when displayed in the application dock area. In some implementations, when the user interface object is displayed at a location of the application dock area that is different from the corresponding user interface, the user interface object has a second appearance (e.g., notification) that is different from the first appearance. In some implementations, when the respective user interface is not a second type of user interface (e.g., a lock screen, a notification screen, a wake screen, and/or an unlock screen), the user interface object is displayed at a location different from the application dock area. In some embodiments, when the respective user interface is of the second type and the computer system is a device of the first type (e.g., a laptop or tablet), the user interface object is displayed in the application dock area. In some embodiments, when the respective user interface is of the second type and the computer system is a device of the second type (e.g., a smart phone), the user interface object is not displayed in the application dock area.
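Taken together, the two preceding paragraphs describe a placement decision keyed off the interface type and the device type. The enumeration below is an assumed simplification of that mapping (the actual behavior can vary per device, as the paragraphs above note), with all names invented:

```swift
enum InterfaceKind { case homeScreen, lockScreen }
enum DeviceKind { case phone, tablet, laptop }
enum Placement { case statusRegion, applicationDock, notification }

// Hypothetical placement rule: a notification on lock/wake screens, a dock
// indicator on laptop-style devices, and a status-region pill otherwise.
func placement(for interface: InterfaceKind, on device: DeviceKind) -> Placement {
    switch (interface, device) {
    case (.lockScreen, _):
        return .notification
    case (.homeScreen, .laptop):
        return .applicationDock
    case (.homeScreen, .phone), (.homeScreen, .tablet):
        return .statusRegion
    }
}
```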
In some implementations, in response to receiving a selection (e.g., 1030-1, 1030-3, or 1030-4) of a user interface object (e.g., 1025, 1038, 1048-1, 1062-1, 1068, and/or 1068-1) (e.g., a set of one or more inputs including a selection of the user interface object), a representation (e.g., 1034 or 1054) of a field of view of one or more cameras is displayed. Displaying a representation of the field of view of the one or more cameras in response to receiving a selection of the user interface object provides improved privacy and/or security by hiding the representation of the field of view of the one or more cameras until a user initiates a process for joining a real-time communication session. In some embodiments, the user interface object is a notification, and the computer system (e.g., 1000 or 600B) displays a representation of the field of view of the one or more cameras in response to selection of the notification.
In some embodiments, the computer system (e.g., 1000 or 600B) displaying the respective user interface (e.g., 1010 or 1032) includes displaying a set of one or more video setting options (e.g., 1040 or 1060) associated with video settings (e.g., image blur settings (e.g., background blur and/or foreground blur), image lighting settings, object tracking settings (e.g., adjusting a displayed camera field of view to maintain a display of a person, object, and/or surface), camera selection settings, and/or camera enable/disable settings) (e.g., selectable options, buttons, icons, affordances, and/or graphical elements). Displaying the set of one or more video setting options associated with the video setting for the real-time communication session reduces the number of inputs required to modify the video setting for the real-time communication session, which reduces the number of inputs required to perform the operation. In some embodiments, the video setting options may be selected via one or more inputs to display, modify, edit, enable, and/or disable one or more video settings for the real-time communication session.
In some embodiments, the representation of the field of view of the one or more cameras includes a view (e.g., 1034 or 1054) (e.g., camera preview and/or self-view) of the user of the computer system (e.g., 1000 or 600B) before the computer system joins the real-time communication session with the second external computer system (e.g., 1050). Displaying a view of a user of a computer system prior to the computer system joining a real-time communication session with a second external computer system provides feedback regarding the status of the computer system (e.g., the status that video feeds will be provided using one or more cameras), which provides improved visual feedback and improved security and/or privacy by informing the user of the transmission of video information to the remote computer system. In some implementations, the representation of the field of view of the one or more cameras is a preview of the video feed captured by the one or more cameras of the computer system that is displayed (e.g., displayed in the application user interface) before the computer system has joined the real-time communication session with the second external computer system. In some embodiments, the representation of the field of view of the one or more cameras is a video feed captured by the one or more cameras of the computer system, the video feed being displayed while the computer system is participating in a real-time communication session with a second external computer system.
In some embodiments, the process for joining the real-time communication session with the second external computer system (e.g., 1050) includes, in accordance with a determination that a predetermined amount of time (e.g., three seconds, five seconds, or seven seconds) has elapsed (e.g., after displaying the representation of the field of view of the one or more cameras and/or after the user interface object is selected), automatically (e.g., without user input) joining the real-time communication session with the second external computer system (e.g., as shown in FIG. 10E and/or FIG. 10I). In some embodiments, the process for joining the real-time communication session with the second external computer system includes, in accordance with a determination that the predetermined amount of time has not elapsed, forgoing joining the real-time communication session with the second external computer system (e.g., as shown in FIG. 10D and/or FIG. 10H) (e.g., while continuing to display the representation of the field of view of the one or more cameras). Automatically joining the real-time communication session with the second external computer system in accordance with a determination that the predetermined amount of time has elapsed, and forgoing joining the real-time communication session in accordance with a determination that the predetermined amount of time has not elapsed, provides improved security and/or privacy by providing the user of the computer system with an opportunity to prepare to transition to conducting the real-time communication session with the second external computer system using the computer system.
In some implementations, the computer system (e.g., 1000 or 600B) displaying the respective user interface (e.g., 1010 or 1032) includes displaying a countdown (e.g., 1038 or 1058) of the amount of time remaining until the predetermined amount of time will elapse. Displaying the countdown of the amount of time remaining until the predetermined amount of time will elapse provides feedback regarding the status of the computer system (e.g., the status that video feeds will be provided using one or more cameras), which provides improved visual feedback, and provides improved security and/or privacy by providing the user of the computer system with the opportunity to prepare (or cancel) to transition to a real-time communication session with a second external computer system (e.g., 1050) using the computer system. In some embodiments, the process for joining the real-time communication session with the second external computer system can be canceled during the countdown. For example, the computer system may detect an input (e.g., selection of a displayed countdown, selection of a cancel option, closing the computer system, and/or closing an application providing a real-time communication session and/or a shared content session) before the period of time elapses, and in response, terminate a process for joining the real-time communication session with the second external computer system.
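The cancellable countdown described here maps naturally onto a deferred, revocable work item. The sketch below assumes a Dispatch-based timer; the class and method names are invented for illustration:

```swift
import Foundation

// Hypothetical cancellable auto-join: the pending work item stands in for the
// displayed countdown (e.g., 1038 or 1058), and cancel() models dismissing it
// (e.g., selecting a cancel option or closing the application) before it fires.
final class AutoJoinCountdown {
    private var pendingJoin: DispatchWorkItem?

    func start(after seconds: TimeInterval, join: @escaping () -> Void) {
        let item = DispatchWorkItem(block: join)
        pendingJoin = item
        DispatchQueue.main.asyncAfter(deadline: .now() + seconds, execute: item)
    }

    func cancel() {
        pendingJoin?.cancel()  // terminates the process for joining before the time elapses
        pendingJoin = nil
    }
}
```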
In some implementations, the representation (e.g., 1034 or 1054) of the field of view of the one or more cameras (e.g., 1000-2 or 600-2B) is displayed concurrently with the user interface object (e.g., 1038 or 1058) that is selectable to initiate a process for joining the real-time communication session with the second external computer system (e.g., 1050). Displaying a representation of the field of view of one or more cameras concurrently with a user interface object selectable to initiate a process for joining a real-time communication session provides feedback regarding the status of the computer system (e.g., the status that the computer system has not joined but may be instructed to join a real-time communication session with a second external computer system), which provides improved visual feedback. In some embodiments, in response to detecting a selection of a user interface object displayed concurrently with a representation of a field of view of one or more cameras, a computer system (e.g., 1000) initiates a process for joining a real-time communication session with a second external computer system.
In some implementations, a computer system (e.g., 1000 or 600B) communicates with one or more input devices (e.g., 1000-1 or 600-1B) (e.g., a touch-sensitive surface, a keyboard, a mouse, a touch pad, one or more optical sensors for detecting gestures, one or more capacitive sensors for detecting hover inputs, and/or an accelerometer/gyroscope/inertial measurement unit). In some implementations, a computer system receives, via one or more input devices, a set of one or more inputs including a selection (e.g., 1030-1, 1030-2, 1030-3, 1030-4, and/or 1030-5) of a user interface object (e.g., 1020, 1025, 1038, 1048-1, 1058, 1062-1, 1064, 1068-1, 1072, 1075, 1076, or 1076-1). In some embodiments, in response to receiving a set of one or more inputs comprising a selection of a user interface object, the computer system joins a real-time communication session (in some embodiments, a representation of the field of view of the one or more cameras is not displayed) with a second external computer system (e.g., 1050). Joining the real-time communication session with the second external computer system in response to receiving a set of one or more inputs including a selection of the user interface object causes the computer system to automatically join the real-time communication session with the second external computer system, which performs an operation without further user input when a set of conditions has been met.
In some embodiments, in accordance with a determination that the real-time communication session does not include video (e.g., the real-time communication session is an audio call), displaying the respective user interface includes forgoing display of a representation (e.g., 1034 or 1054) of the field of view of the one or more cameras (and, in some embodiments, automatically joining the real-time communication session in response to selection of the user interface object). Forgoing display of the representation of the field of view of the one or more cameras in accordance with a determination that the real-time communication session does not include video conserves power at the computer system and provides improved privacy and/or security by omitting the display of video information when the video information is not used for communication purposes. In some implementations, when the real-time communication session is an audio call that does not include video (e.g., a live video feed), the representation of the field of view of the one or more cameras is not displayed. For example, the respective user interface (e.g., 1024) (e.g., a lock screen, notification screen, and/or unlock screen) includes a user interface object (e.g., 1075) (e.g., a notification) that is displayed without a representation of the field of view of the one or more cameras, and in response to selection of the user interface object, the computer system (e.g., 1000 or 600B) joins the real-time communication session (e.g., the audio call) without displaying a representation of the field of view of the one or more cameras (e.g., as shown in FIG. 10N).
In some embodiments, the process for joining the real-time communication session with the second external computer system (e.g., 1050) includes joining the real-time communication session with the second external computer system at the computer system (e.g., 1000 or 600B) (e.g., starting the real-time communication session at the computer system and/or transferring the real-time communication session from the first external computer system (e.g., 600B or 1000) to the computer system). In some embodiments, the process for joining the real-time communication session with the second external computer system includes causing the first external computer system (e.g., 600B or 1000) to terminate the real-time communication session with the second external computer system (e.g., 1050) (e.g., transmitting data to cause the first external computer system to end the real-time communication session at the first external computer system). Joining the real-time communication session at the computer system and causing the first external computer system to terminate the real-time communication session with the second external computer system saves power and provides improved privacy and/or security by terminating the real-time communication session at a device after the session has been transferred from that device to another. In some embodiments, the process for causing the first external computer system to terminate the real-time communication session with the second external computer system is initiated by a different device. For example, a server may terminate the real-time communication session at the first external computer system, or the server may instruct the first external computer system to terminate the real-time communication session at the first external computer system. As another example, the first external computer system may terminate the real-time communication session at the first external computer system in response to detecting the transfer of the real-time communication session from the first external computer system to the computer system and/or in response to a request from the server to terminate the real-time communication session at the first external computer system.
In some embodiments, after a computer system (e.g., 1000 or 600B) causes a first external computer system (e.g., 600B or 1000) to terminate a real-time communication session with a second external computer system (e.g., 1050), the first external computer system displays a first option (e.g., 1020, 1048-1, 1058, 1076, or 1076-1) (e.g., 1038, 1062-1, 1064, 1068-1, or 1072) (e.g., selectable graphical elements, icons, text, and/or affordances) selectable (e.g., via one or more inputs) to initiate a process for joining (e.g., re-joining) a real-time communication session with the second external computer system at the first external computer system (e.g., switching the real-time communication session back to the first external computer system). After having the first external computer system terminate the real-time communication session with the second external computer system, displaying at the first external computer system a first option that can be selected to initiate a process for joining the real-time communication session with the second external computer system reduces the number of inputs required to transfer the real-time communication session back to the first external computer system after the real-time communication session has been transferred to the computer system, which reduces the number of inputs required to perform the operation. In some embodiments, the computer system terminates the real-time communication session with the second external computer system at the computer system when the first external computer system rejoins the real-time communication session with the second external computer system. In some embodiments, when the real-time communication session is terminated at the computer system, the computer system displays a second option (e.g., an option to switch the real-time communication session back to the computer system) that can be selected to initiate a process for re-joining the real-time communication session with the second external computer system.
In some embodiments, the first option, selectable to initiate a process for joining the real-time communication session with the second external computer system (e.g., 1050) at the first external computer system (e.g., 600B or 1000), is displayed in response to selection (e.g., 1030-2) of a selectable graphical user interface object (e.g., 1020 or 1064) (e.g., a selectable graphical element, icon, text, and/or affordance) (e.g., a gray, pill-shaped camera indicator) displayed at the first external computer system. Displaying, in response to selection of a selectable graphical user interface object displayed at the first external computer system, a first option selectable to initiate a process for joining the real-time communication session with the second external computer system at the first external computer system provides additional control options without cluttering the user interface. In some embodiments, the first external computer system displays a pill-shaped camera indicator (e.g., 1020 or 1064) that can be selected to display an option to transfer the real-time communication session from the computer system (e.g., 1000 or 600B) back to the first external computer system.
In some embodiments, the first option is displayed as a first notification (e.g., 1048, 1062, 1068, or 1076) at the first external computer system (e.g., notifying a user of the first external computer system (e.g., 600B or 1000) of an event at the first external computer system (such as a transfer of a real-time communication session from the first external computer system to the computer system (e.g., 1000 or 600B)) and/or a banner, alert, and/or selectable graphical element of an option that can be selected to transfer the real-time communication session back to the first external computer system, the first notification including the first option (e.g., 1048-1, 1062-1, 1068-1, or 1076-1) that can be selected to initiate a process for joining the real-time communication session with the second external computer system (e.g., 1050) at the first external computer system. Displaying the first option at the first external computer system as a first notification including the first option selectable to initiate a process for joining a real-time communication session with the second external computer system at the first external computer system provides additional control options without cluttering the user interface. In some embodiments, the first external computer system displays a notification including an option that can be selected to transfer the real-time communication session from the computer system back to the first external computer system.
In some implementations, the first external computer system (e.g., 600B or 1000) stops (e.g., automatically and/or without further user input) displaying the first notification (e.g., 1048, 1062, 1068, or 1076) after the first notification has been displayed for a threshold amount of time (e.g., three seconds, five seconds, or seven seconds). Ceasing to display the first notification after it has been displayed for a threshold amount of time performs an operation (e.g., dismissing the notification) when a set of conditions has been met, without requiring further user input. In some embodiments, the first external computer system causes the notification to disappear after the threshold amount of time has elapsed.
In some implementations, before the computer system (e.g., 1000) has joined the real-time communication session with the second external computer system, the second external computer system (e.g., 1050) displays a first user interface (e.g., 1042 and/or 1043-2) (e.g., an application user interface and/or a window or video tile including a video feed) with a first video feed (e.g., 1043-2a) (e.g., a video feed from the first external computer system (e.g., 600B)) for the real-time communication session. In some implementations, after the computer system has joined the real-time communication session, the second external computer system displays the first user interface (e.g., 1042 and/or 1043-2) with a second video feed (e.g., 1043-2b) for the real-time communication session that is different from the first video feed (e.g., a video feed using the one or more cameras of the computer system). Displaying, at the second external computer system, the first user interface with a first video feed for the real-time communication session before the computer system has joined the real-time communication session, and displaying the first user interface with a second video feed different from the first video feed after the computer system has joined, provides improved privacy and/or security by indicating to a user of the second external computer system whether the video feeds are from the same user (or user account), which provides improved visual feedback. In some embodiments, the transfer of the real-time communication session from the first external computer system to the computer system is shown at the second external computer system by exchanging the video feeds shown in a single video tile or window (e.g., as shown in FIGS. 10D and 10E). For example, the second external computer system displays a video tile with a video feed from the first external computer system, and then, when the real-time communication session is transferred from the first external computer system to the computer system, the video feed in the tile is swapped or replaced by the video feed from the computer system (without displaying a new tile). In some embodiments, the transfer of the real-time communication session from the first external computer system to the computer system is shown at the second external computer system by replacing an existing tile including an existing video feed from the first external computer system with a new tile including a video feed from the computer system (e.g., as shown in FIGS. 10H-10J). In some implementations, the existing tile and the new tile are displayed simultaneously (at least temporarily) (e.g., as shown in FIG. 10I).
In some implementations, when the transfer of the real-time communication session from the first external computer system (e.g., 600B) to the computer system (e.g., 1000) meets the verification criteria, the second external computer system (e.g., 1050) displays the transfer in a first manner (e.g., as shown in FIGS. 10D and 10E) (e.g., exchanging video feeds in the same video tile). In some embodiments, when the transfer of the real-time communication session from the first external computer system to the computer system does not meet the verification criteria, the second external computer system displays the transfer in a second manner (e.g., as shown in FIGS. 10H-10J) different from the first manner (e.g., displaying a new video tile with a video feed from the computer system and removing the video tile with the video feed from the first external computer system). In some embodiments, the verification criteria are met when the first external computer system and the computer system are logged into the same user account. Displaying the transfer of the real-time communication session from the first external computer system to the computer system in the first manner or the second manner based on whether the verification criteria are met provides improved privacy and/or security by indicating to a user of the second external computer system whether the video feed is from the same user (or user account), which provides improved visual feedback.
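In other words, the remote participant's device chooses between an in-place feed swap and a tile replacement based on whether the two devices are verified as belonging to the same user account. A hypothetical sketch of that branch, with invented types:

```swift
// Hypothetical rendering choice at the second external computer system:
// same verified account: swap the feed inside the existing tile (FIGS. 10D-10E);
// otherwise: add a new tile and retire the old one (FIGS. 10H-10J).
struct VideoTile {
    var feedSourceDeviceID: String
}

func renderTransfer(tiles: inout [VideoTile],
                    oldDeviceID: String,
                    newDeviceID: String,
                    sameAccountVerified: Bool) {
    if sameAccountVerified {
        // First manner: replace the video feed in place; no new tile is shown.
        if let index = tiles.firstIndex(where: { $0.feedSourceDeviceID == oldDeviceID }) {
            tiles[index].feedSourceDeviceID = newDeviceID
        }
    } else {
        // Second manner: display a new tile, then remove the old one.
        tiles.append(VideoTile(feedSourceDeviceID: newDeviceID))
        tiles.removeAll { $0.feedSourceDeviceID == oldDeviceID }
    }
}
```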
In some embodiments, initiating the process for joining the real-time communication session with the second external computer system (e.g., 1050) includes, in accordance with a determination that a respective audio output device (e.g., 1003) (e.g., a wireless headset, earbuds, and/or speaker) is being used for the real-time communication session while the first external computer system (e.g., 600B) is in the real-time communication session with the second external computer system, establishing a connection (e.g., a wireless connection and/or an audio connection) with the respective audio output device for the real-time communication session (e.g., transferring the audio path at the headset from the first external computer system to the computer system) and outputting audio for the real-time communication session (e.g., when the computer system (e.g., 1000) joins the real-time communication session with the second external computer system). In some embodiments, initiating the process for joining the real-time communication session with the second external computer system includes, in accordance with a determination that the respective audio output device is not being used for the real-time communication session while the first external computer system is in the real-time communication session with the second external computer system, forgoing establishing a connection with the respective audio output device (e.g., outputting audio for the real-time communication session at the computer system or at a different audio output device in communication with the computer system). Establishing a connection with the respective audio output device based on whether the respective audio output device is being used for the real-time communication session at the first external computer system causes the computer system to automatically transfer the audio path for the real-time communication session from the first external computer system to the computer system, so that the user does not have to change or adjust the audio output device for the real-time communication session, which performs an operation when a set of conditions has been met without requiring further user input.
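The audio-routing rule reduces to carrying the headset connection along with the call only when the headset was already serving it. A minimal sketch under that assumption, with invented names:

```swift
enum AudioOutput {
    case headset(name: String)  // e.g., the wireless headset 1003
    case deviceDefault          // the computer system's own speakers or another local output
}

// Hypothetical handoff rule: connect to the respective headset only when it was
// already carrying the call's audio at the first external computer system.
func audioOutputAfterTransfer(headsetWasInUse: Bool, headsetName: String) -> AudioOutput {
    if headsetWasInUse {
        return .headset(name: headsetName)  // transfer the audio path to the same headset
    }
    return .deviceDefault                   // forgo establishing a headset connection
}
```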
It should be noted that the details of the process described above with respect to method 1100 (e.g., FIG. 11) also apply in a similar manner to the methods described above. For example, any of methods 700, 800, and 900 optionally includes one or more of the features of the various methods described above with reference to method 1100. For example, any of the aspects discussed with respect to method 1100 for transferring real-time communications can be applied to the communication sessions described with respect to any of methods 700, 800, and/or 900. For the sake of brevity, these details are not repeated.
Various embodiments provided herein are generally described using devices 600, 1000, and 1050. However, it should be understood that other computer systems or devices can be used (in addition to, or in lieu of, devices 600/1000/1050) to participate in a shared content session, and that aspects of the shared content session can be implemented differently across the various devices participating in the shared content session. For example, a smart speaker, optionally including a display component, can be used to participate in the shared content session. In some embodiments, input at the smart speaker can be provided verbally and, optionally, via touch input, and output can be audio output and, optionally, visual output provided at a connected display component. As another example, a display component of a head-mounted device (HMD) can be used to display visual aspects of the shared content session (with speakers generating audio), and the HMD can receive input by detecting gestures, gaze, hand movements, audio inputs, touch inputs, and the like. In some embodiments, the user interfaces depicted in the figures can be displayed in an extended reality environment such as augmented reality or virtual reality. For example, the video tiles, windows, and/or other display areas shown in the figures can be displayed floating in a three-dimensional environment. As another example, representations of users or participants can be displayed as simulated three-dimensional avatars or as two-dimensional avatars positioned around the three-dimensional environment, rather than as video tiles or windows in a video call or video conferencing application. In addition, although various types of inputs (such as tap, drag, click, and hover gestures) are used herein to describe embodiments, it should be understood that the described embodiments can be modified to respond to other forms of input, including gestures, gaze, hand movements, audio inputs, and the like. Furthermore, devices with different capabilities can be combined in a single shared content session; for example, a smart phone, tablet, laptop, desktop, smart speaker, smart TV, headphones or earbuds, HMD, and/or smart watch (or a subset thereof) can participate in the same shared content session, where the different devices participate in different ways depending on the capabilities of each device (e.g., the HMD presents content in a simulated three-dimensional environment or an extended reality environment; the smart speaker provides audio output and input; the headphones provide spatial audio output and audio input; the laptop, desktop, smart phone, and tablet provide audio and visual input and output; and the smart TV provides audio and visual output and audio input (or audio and visual input)).
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Those skilled in the art will be able to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
While the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It should be understood that such variations and modifications are considered to be included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is to collect and use data from various sources to improve the delivery of content of a shared content session to a user. The present disclosure contemplates that in some examples, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, social network IDs, home addresses, data or records related to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, the personal information data may be used to deliver targeted content of greater interest to the user. Thus, the use of such personal information data enables a user to have programmatic control over the delivered content. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user. For example, health and fitness data may be used to provide insight into the overall health of a user, or may be used as positive feedback to individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will adhere to established privacy policies and/or privacy practices. In particular, such entities should implement and consistently adhere to privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible to users and should be updated as the collection and/or use of the data changes. Personal information from users should be collected for legitimate and reasonable uses by the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur only after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and for ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed, and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA), whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Notwithstanding the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in some embodiments, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, the present technology can be configured to allow a user to prevent the sharing of personal information that may appear on the user's screen (e.g., such as in a screen-sharing embodiment). In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed, and then reminded again just before personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes the risk of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting the data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need to access such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery service, or publicly available information.
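By way of non-limiting illustration only, the "opt in"/"opt out" gating described above can be sketched in code. The following Swift sketch assumes a hypothetical ConsentStore type; none of the names below correspond to an actual API, and the sketch merely shows one way such hardware/software gating could behave.

import Foundation

// Hypothetical consent states for a given type of personal information data.
enum ConsentState {
    case optedIn
    case optedOut
    case undecided // the user has not yet been asked
}

// Hypothetical store that records the user's opt-in/opt-out selections.
struct ConsentStore {
    private var states: [String: ConsentState] = [:]

    mutating func record(_ state: ConsentState, forDataType dataType: String) {
        states[dataType] = state
    }

    func state(forDataType dataType: String) -> ConsentState {
        states[dataType] ?? .undecided
    }
}

// Personal information data is returned only when the user has explicitly
// opted in; otherwise access is blocked, mirroring the "prevent or block
// access" behavior contemplated above.
func accessPersonalData<T>(ofType dataType: String,
                           store: ConsentStore,
                           read: () -> T) -> T? {
    guard store.state(forDataType: dataType) == .optedIn else {
        return nil // opted out or undecided: no access by default
    }
    return read()
}

Making such a gate the single entry point to personal information data keeps blocking the default, consistent with the risk-minimization intent described above.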

Claims (114)

1. A method, comprising:
at a computer system in communication with one or more display generating components and one or more input devices:
while displaying, via the one or more display generating components, a user interface for initiating a shared content session with one or more external computer systems, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate the shared content session with the one or more external computer systems; and
in response to receiving the first set of one or more inputs corresponding to the request to initiate the shared content session with the one or more external computer systems, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems output the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises:
in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
2. The method of claim 1, wherein initiating the shared content session with one or more external computer systems comprises:
in accordance with a determination that the shared content session is initiated via synchronous communication, initiating the shared content session in a second mode in which a set of real-time communication features are enabled for the shared content session.
3. The method of any of claims 1-2, wherein displaying the user interface for initiating the shared content session with the one or more external computer systems comprises displaying a first option selectable to initiate the shared content session via asynchronous communication and a second option selectable to initiate the shared content session via synchronous communication.
4. The method of claim 3, further comprising:
in response to receiving the first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems:
in accordance with a determination that the first set of one or more inputs corresponding to a request to initiate a shared content session with one or more external computer systems includes a selection of the first option, displaying, via the one or more display generating components, a message composition user interface.
5. The method of any of claims 1-4, wherein initiating the shared content session with one or more external computer systems comprises:
in accordance with a determination that the number of the one or more external computer systems that have joined the shared content session satisfies a threshold number, initiating playback of the respective content in the shared content session; and
in accordance with a determination that the number of the one or more external computer systems that have joined the shared content session does not satisfy the threshold number, forgoing initiating playback of the respective content.
6. The method of any one of claims 1 to 5, further comprising:
after initiating the shared content session and while displaying a graphical object having a first display state:
in accordance with a determination that a threshold number of the one or more external computer systems have joined the shared content session, displaying the graphical object having a second display state different from the first display state.
7. The method of any one of claims 1 to 6, further comprising:
detecting a first external computer system joining the shared content session after the shared content session is initiated; and
in response to detecting the first external computer system joining the shared content session, displaying, via the one or more display generating components, a notification selectable to enable playback of content of the shared content session.
8. The method of claim 7, wherein the request to initiate the shared content session with the one or more external computer systems is associated with first content, and wherein the notification includes a first option selectable to initiate playback of the first content of the shared content session.
9. The method of claim 7, wherein the request to initiate the shared content session with the one or more external computer systems is associated with first content, the method further comprising:
receiving input directed to the notification while displaying the notification selectable to enable playback of content of the shared content session; and
in response to receiving the input directed to the notification, displaying, via the one or more display generating components, a second option selectable to initiate playback of the first content of the shared content session.
10. The method of any one of claims 1 to 9, further comprising:
after initiating the shared content session, displaying a first control user interface via the one or more display generating components, comprising:
in accordance with a determination that the shared content session is provided via an asynchronous communication session, displaying the first control user interface with a first set of one or more control options; and
in accordance with a determination that the shared content session is provided via a real-time communication session, displaying the first control user interface with a second set of one or more control options different from the first set of one or more control options.
11. The method of any one of claims 1 to 10, further comprising:
after initiating the shared content session, displaying, via the one or more display generating components, a second control user interface having a first appearance when the shared content session is in a first mode;
detecting a change in the shared content session from the first mode to a third mode different from the first mode; and
in response to detecting the change in the shared content session from the first mode to the third mode, displaying the second control user interface having a second appearance that is different from the first appearance.
12. The method of claim 11, wherein displaying the second control user interface having the first appearance comprises displaying a first set of one or more selectable control options, and wherein displaying the second control user interface having the second appearance comprises displaying a second set of one or more selectable control options different from the first set of one or more selectable control options.
13. The method of any of claims 11-12, wherein displaying the second control user interface having the first appearance comprises displaying a background of the second control user interface having a first state, and wherein displaying the second control user interface having the second appearance comprises displaying the background of the second control user interface having a second state different from the first state.
14. The method of any of claims 11-13, wherein the third mode is a mode in which one or more real-time communication features are enabled for the shared content session, and wherein the appearance of the second control user interface changes based on a change in state of the computer system when the shared content session changes from the first mode to the third mode.
15. The method of any of claims 1-14, wherein the request to initiate the shared content session includes sending a link to the one or more external computer systems to join the shared content session.
16. The method of claim 15, wherein the link includes a join option, the method further comprising:
detecting, via the one or more input devices, one or more user inputs corresponding to selection of the join option; and
in response to detecting the one or more user inputs corresponding to selection of the join option, initiating a process for joining the shared content session.
17. The method of any one of claims 1 to 16, further comprising:
after initiating the shared content session with one or more external computer systems, displaying, via the one or more display generating components, a status user interface including a call option;
while displaying the status user interface including the call option, detecting, via the one or more input devices, one or more inputs corresponding to selection of the call option; and
in response to detecting the one or more inputs corresponding to selection of the call option, initiating a process for enabling real-time communication for the shared content session.
18. The method of any one of claims 1 to 17, further comprising:
after initiating the shared content session with one or more external computer systems, displaying, via the one or more display generating components, a user status interface including an indication of participants of the shared content session that are participating in the shared content session with real-time communication enabled and an indication of participants of the shared content session that are participating in the shared content session with real-time communication disabled.
19. The method of any one of claims 1 to 18, further comprising:
while the shared content session is in the first mode at the computer system, and in response to a second external computer system enabling real-time communication for the shared content session at the second external computer system, displaying an incoming communication user interface including an accept option;
while displaying the incoming communication user interface including the accept option, detecting, via the one or more input devices, one or more inputs corresponding to selection of the accept option; and
in response to detecting the one or more inputs corresponding to selection of the accept option, initiating a process for enabling real-time communication for the shared content session at the computer system.
20. The method of any one of claims 1 to 19, further comprising:
while the shared content session is in a fourth mode in which the set of real-time communication features are enabled for the shared content session:
receiving, via the one or more input devices, a request to transition the shared content session from the fourth mode; and
in response to receiving the request to transition the shared content session from the fourth mode, displaying a session option user interface via the one or more display generating components, wherein the session option user interface comprises:
a termination option selectable to terminate the shared content session at the computer system; and
a continue option selectable to continue the shared content session at the computer system.
21. The method of claim 20, further comprising:
in response to receiving the request to transition the shared content session from the fourth mode, ceasing to display a third control user interface for the shared content session.
22. The method of any of claims 20 to 21, further comprising:
receiving a selection of the continue option via the one or more input devices; and
in response to receiving the selection of the continue option, transitioning the shared content session from the fourth mode to the first mode.
23. The method of any of claims 20-22, wherein displaying the session options user interface comprises:
in accordance with a determination that the shared content session is initiated via asynchronous communication, displaying the continuation option selectable to continue the shared content session at the computer system; and
in accordance with a determination that the shared content session is not initiated via asynchronous communication, forgoing displaying the continuation option selectable to continue the shared content session at the computer system.
24. The method of any of claims 20 to 23, further comprising:
after transitioning the shared content session from the fourth mode to the first mode, receiving, via the one or more input devices, a set of one or more inputs corresponding to a request to display a message user interface; and
in response to receiving the set of one or more inputs corresponding to a request to display a message user interface, displaying the message user interface via the one or more display generating components, comprising:
displaying, in the message user interface, a rejoin option selectable to transition the shared content session at the computer system from the first mode to the fourth mode.
25. The method of any of claims 1-24, wherein the asynchronous communication comprises text-based messaging.
26. The method of any one of claims 1 to 25, further comprising:
receiving, via the one or more input devices, a request to initiate playback of second content at the computer system while first content is output as the respective content of the shared content session, wherein the second content is different from the first content; and
in response to receiving the request to initiate playback of the second content at the computer system, outputting the second content as the respective content of the shared content session.
27. The method of any one of claims 1 to 26, further comprising:
receiving, via the one or more input devices, a request to display a message user interface while the shared content session is active and a first user interface is displayed; and
in response to receiving the request to display the message user interface, displaying the message user interface on at least a portion of the first user interface.
28. The method of claim 27, further comprising:
receiving, via the one or more input devices, a set of one or more inputs corresponding to a request to dismiss the message user interface while the message user interface is displayed on at least a portion of the first user interface; and
in response to receiving the set of one or more inputs corresponding to a request to dismiss the message user interface, ceasing to display the message user interface and displaying the first user interface.
29. The method of any of claims 27-28, wherein the first user interface includes a representation of media that is output as the respective content of the shared content session, and wherein displaying the message user interface on at least a portion of the first user interface includes:
displaying the representation of the media at a location that at least partially overlaps the message user interface without overlapping a recipient area of the message user interface and without overlapping at least a portion of a message display area of the message user interface.
30. The method of any of claims 27-29, wherein displaying the message user interface on at least a portion of the first user interface comprises:
in accordance with a determination that the computer system is a first type of device, displaying the message user interface concurrently with at least a portion of the first user interface; and
in accordance with a determination that the computer system is a second type of device, displaying the message user interface without displaying at least a portion of the first user interface.
31. The method of any one of claims 1 to 30, further comprising:
receiving a set of one or more inputs corresponding to a request to select screen-shared content as the respective content of the shared content session while the shared content session is in the first mode; and
in response to receiving the set of one or more inputs corresponding to a request to select screen-shared content as the respective content of the shared content session, selecting the screen-shared content as the respective content of the shared content session and transitioning the shared content session from the first mode to a fifth mode in which a real-time audio channel is active for the shared content session.
32. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 1-31.
33. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-31.
34. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
means for performing the method of any one of claims 1 to 31.
35. A computer program product comprising one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 1-31.
36. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for:
while displaying, via the one or more display generating components, a user interface for initiating a shared content session with one or more external computer systems, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate the shared content session with the one or more external computer systems; and
in response to receiving the first set of one or more inputs corresponding to the request to initiate the shared content session with the one or more external computer systems, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems output the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises:
in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
37. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
while displaying, via the one or more display generating components, a user interface for initiating a shared content session with one or more external computer systems, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate the shared content session with the one or more external computer systems; and
in response to receiving the first set of one or more inputs corresponding to the request to initiate the shared content session with the one or more external computer systems, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems output the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises:
in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
38. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
means for: while displaying, via the one or more display generating components, a user interface for initiating a shared content session with one or more external computer systems, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate the shared content session with the one or more external computer systems; and
means for: in response to receiving the first set of one or more inputs corresponding to the request to initiate the shared content session with the one or more external computer systems, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems output the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises:
in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
39. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for:
while displaying, via the one or more display generating components, a user interface for initiating a shared content session with one or more external computer systems, receiving, via the one or more input devices, a first set of one or more inputs corresponding to a request to initiate the shared content session with the one or more external computer systems; and
in response to receiving the first set of one or more inputs corresponding to the request to initiate the shared content session with the one or more external computer systems, initiating the shared content session with the one or more external computer systems, wherein the shared content session, when active, enables the computer system to output respective content while the one or more external computer systems output the respective content, and wherein initiating the shared content session with the one or more external computer systems comprises:
in accordance with a determination that the shared content session is initiated via asynchronous communication, initiating the shared content session in a first mode in which a set of real-time communication features are disabled for the shared content session.
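By way of non-limiting illustration, the mode selection recited in claims 1, 2, and 25 above can be sketched as follows. The Swift names below (InitiationChannel, SessionMode, and so on) are hypothetical and are not prescribed by the claims; the sketch only shows the determination that selects the first or second mode.

// Hypothetical channels through which a shared content session is initiated.
enum InitiationChannel {
    case asynchronous // e.g., a text-based message thread (claim 25)
    case synchronous  // e.g., an ongoing real-time call
}

// The two modes recited in claims 1 and 2.
enum SessionMode {
    case realTimeFeaturesDisabled // "first mode" (claim 1)
    case realTimeFeaturesEnabled  // "second mode" (claim 2)
}

struct SharedContentSession {
    let mode: SessionMode
}

func initiateSharedContentSession(via channel: InitiationChannel) -> SharedContentSession {
    switch channel {
    case .asynchronous:
        // Initiated via asynchronous communication: start with the set of
        // real-time communication features disabled.
        return SharedContentSession(mode: .realTimeFeaturesDisabled)
    case .synchronous:
        // Initiated via synchronous communication: real-time features enabled.
        return SharedContentSession(mode: .realTimeFeaturesEnabled)
    }
}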
40. A method, comprising:
at a computer system in communication with one or more display generating components and one or more input devices:
receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while real-time communication is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and
after receiving the invitation to join the real-time communication session:
in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and
in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
41. The method of claim 40, further comprising:
displaying, via the one or more display generating components and concurrently with the option to accept the invitation to join the real-time communication session, a graphical object selectable to display a set of controls for the shared content session.
42. The method of any one of claims 40 to 41, further comprising:
displaying, via the one or more display generating components, an option to decline the invitation to join the real-time communication session; and
after receiving the invitation to join the real-time communication session:
in accordance with a determination that the option to decline the invitation to join the real-time communication session has been selected, declining the invitation to join the real-time communication session; and
in accordance with a determination that the option to decline the invitation to join the real-time communication session has not been selected, forgoing declining the invitation to join the real-time communication session.
43. The method of any one of claims 40 to 42, further comprising:
after receiving the invitation to join the real-time communication session:
in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected within at least a threshold amount of time, ceasing to display the option to accept the invitation to join the real-time communication session.
44. The method of any one of claims 40 to 43, further comprising:
in response to receiving the invitation to join the real-time communication session, displaying, via the one or more display generating components, a notification corresponding to the invitation to join the real-time communication session.
45. The method of claim 44, further comprising:
receiving a selection of the notification via the one or more input devices while the notification is displayed; and
in response to receiving the selection of the notification, displaying, via the one or more display generating components, an asynchronous communication user interface.
46. The method of any one of claims 40 to 45, further comprising:
before the computer system is in the shared content session in which synchronized content is enabled for sharing with the external computer system, receiving an invitation to join the shared content session, wherein the invitation to join the shared content session is received via an asynchronous communication provided using an asynchronous communication application.
47. The method of claim 46, wherein the asynchronous communication includes an indication of the synchronized content for the shared content session.
48. The method of any of claims 46 to 47, wherein the asynchronous communication includes an option selectable to join the shared content session.
49. The method of any of claims 40 to 48, wherein the invitation to join the real-time communication session is initiated via selection of a call option in a synchronized content session user interface at the external computer system.
50. The method of any of claims 40 to 49, wherein the invitation to join the real-time communication session is initiated via selection of a screen sharing option in a synchronized content session user interface at the external computer system.
51. The method of claim 50, wherein the option to accept the invitation to join the real-time communication session includes an indication that a respective user is sharing screen content from an external computer system associated with the respective user.
52. The method of any one of claims 50 to 51, further comprising:
after forgoing joining the real-time communication session, displaying, via the one or more display generating components, an indication of activity occurring in the shared content session and a prompt to join the real-time communication session.
53. The method of any one of claims 40 to 52, further comprising:
after forgoing joining the real-time communication session, displaying, via the one or more display generating components, a control user interface for the shared content session having a first control option;
receiving, via the one or more input devices, a set of one or more inputs comprising a selection of the first control option; and
in response to receiving the set of one or more inputs including the selection of the first control option, initiating a process for joining the real-time communication session.
54. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 40-53.
55. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 40-53.
56. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
means for performing the method of any one of claims 40 to 53.
57. A computer program product comprising one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 40-53.
58. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for:
receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while real-time communication is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and
after receiving the invitation to join the real-time communication session:
in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and
in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
59. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while real-time communication is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and
after receiving the invitation to join the real-time communication session:
in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and
in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
60. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
means for: receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while real-time communication is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and
means for, after receiving the invitation to join the real-time communication session:
in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and
in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
61. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for:
receiving an invitation to join a real-time communication session while the computer system is in a shared content session in which synchronized content is enabled for sharing with an external computer system and while real-time communication is not enabled, and displaying, via the one or more display generating components, an option to accept the invitation to join the real-time communication session; and
after receiving the invitation to join the real-time communication session:
in accordance with a determination that the option to accept the invitation to join the real-time communication session has been selected, joining the real-time communication session; and
in accordance with a determination that the option to accept the invitation to join the real-time communication session has not been selected, forgoing joining the real-time communication session.
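By way of non-limiting illustration, the invitation handling recited in claims 40 to 43 above can be sketched as follows. The Swift names and the 30-second timeout are hypothetical stand-ins for the "threshold amount of time" of claim 43; the claims do not prescribe a value or an API.

import Foundation

// Hypothetical controller for the accept/decline options of claims 40-43.
final class InvitationController {
    private let displayTimeout: TimeInterval = 30 // assumed threshold (claim 43)
    private var timer: Timer?

    var onJoin: (() -> Void)?    // join the real-time communication session
    var onDismiss: (() -> Void)? // cease displaying the accept option

    // Display the accept option and arm the timeout of claim 43.
    func presentInvitation() {
        timer = Timer.scheduledTimer(withTimeInterval: displayTimeout,
                                     repeats: false) { [weak self] _ in
            self?.onDismiss?() // not selected in time: cease display
        }
    }

    // The accept option was selected (claim 40): join the session.
    func acceptSelected() {
        timer?.invalidate()
        onJoin?()
    }

    // The decline option was selected (claim 42): decline the invitation.
    func declineSelected() {
        timer?.invalidate()
        onDismiss?()
    }
}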
62. A method, comprising:
at a computer system in communication with one or more display generating components and one or more input devices:
while the computer system is in a communication session with an external computer system:
displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option;
detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and
in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session.
63. The method of claim 62, further comprising:
in response to detecting the selection of the first control option, displaying, via the one or more display generating components, a screen sharing option selectable to initiate a process for selecting screen sharing content for the communication session.
64. The method of any one of claims 62 to 63, further comprising:
in response to detecting the selection of the first control option, displaying, via the one or more display generating components, a settings option selectable to display settings controls for an application associated with the communication session.
65. The method of any of claims 62-64, wherein displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session includes displaying a list of the applications arranged based on usage criteria.
66. The method of any one of claims 62 to 65, further comprising:
while displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session, detecting, via the one or more input devices, a selection of a first application of the one or more applications; and
in response to detecting the selection of the first application of the one or more applications, displaying, via the one or more display generating components, a user interface for the first application.
67. The method of claim 66, wherein displaying the user interface for the first application includes displaying the user interface for the first application during the communication session without initiating playback of first content associated with the first application as synchronized content, the method further comprising:
detecting, via the one or more input devices, selection of second content for playback; and
in response to detecting the selection of the second content for playback, initiating playback of the second content as synchronized content during the communication session.
68. The method of any one of claims 62 to 67, further comprising:
in response to detecting the selection of the first control option, displaying, via the one or more display generating components, a set of one or more playback options selectable to set automatic playback settings for synchronized content of the communication session;
while displaying the set of one or more playback options, detecting, via the one or more input devices, a set of one or more inputs including a selection of one of the playback options; and
in response to detecting the set of one or more inputs including the selection of one of the playback options:
in accordance with a determination that the selected playback option is a first playback option, enabling a mode in which synchronized content is automatically output at the computer system during the communication session; and
in accordance with a determination that the selected playback option is a second playback option, enabling a mode in which synchronized content is not automatically output at the computer system.
69. The method of any one of claims 62 to 68, further comprising:
in response to detecting the selection of the first control option, displaying an application store option via the one or more display generating components;
detecting, via the one or more input devices, a set of one or more inputs corresponding to selection of the application store option while the application store option is displayed; and
in response to detecting the set of one or more inputs corresponding to selection of the application store option, displaying, via the one or more display generating components, a user interface that provides the ability to obtain applications.
70. The method of claim 69, wherein the user interface that provides the ability to obtain applications includes a list of one or more applications configured to provide content that can be played as synchronized content during the communication session.
71. The method of any one of claims 62 to 70, further comprising:
while displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session, detecting a selection of a respective application of the one or more applications; and
in response to detecting the selection of the respective application, displaying, via the one or more display generating components, a user interface for the respective application, wherein the user interface for the respective application includes an indication of respective content provided by the respective application that is capable of being played as synchronized content during the communication session.
72. The method of any one of claims 62 to 71, further comprising:
in response to detecting the selection of the first control option, displaying an indication that the one or more applications are configured to provide content that can be played as synchronized content.
73. The method of any of claims 62-72, wherein displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session includes displaying application icons corresponding to the one or more applications.
74. The method of any of claims 62-73, wherein displaying a representation of the one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session includes displaying a scrollable list of the one or more applications.
75. The method of any of claims 62-74, wherein displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session includes displaying a list of the one or more applications that visually emphasizes one or more applications selected based on participants of the communication session, but not other applications.
76. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 62-75.
77. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 62-75.
78. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
means for performing the method of any one of claims 62 to 75.
79. A computer program product comprising one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 62-75.
80. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for:
while the computer system is in a communication session with an external computer system:
displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option;
detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and
in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session.
81. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
while the computer system is in a communication session with an external computer system:
displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option;
detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and
in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session.
82. A computer system configured to communicate with one or more display generating components and one or more input devices, the computer system comprising:
means for, while the computer system is in a communication session with an external computer system:
displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option;
detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and
in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session.
83. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more input devices, the one or more programs comprising instructions for:
while the computer system is in a communication session with an external computer system:
displaying, via the one or more display generating components, a control user interface for controlling one or more settings of the communication session, wherein the control user interface includes a first control option;
detecting, via the one or more input devices, a set of one or more inputs directed to the control user interface, wherein the set of one or more inputs includes a selection of the first control option; and
in response to detecting the selection of the first control option, displaying a representation of one or more applications available on the computer system that are configured to provide content that can be played as synchronized content during the communication session.
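By way of non-limiting illustration, the application list recited in claims 62, 65, and 75 above can be sketched as follows. AppDescriptor and its fields are hypothetical; the sketch only shows filtering to applications that can provide synchronized content, ordering by a usage criterion, and emphasizing applications selected based on the session's participants.

// Hypothetical description of an installed application.
struct AppDescriptor {
    let identifier: String
    let supportsSynchronizedContent: Bool // can provide synchronized content
    let recentUseCount: Int               // stand-in for a usage criterion
    let sharedWithParticipants: Bool      // selected based on participants
}

// One entry of the displayed list, with optional visual emphasis.
struct AppListEntry {
    let identifier: String
    let emphasized: Bool
}

func buildSharableAppList(from installed: [AppDescriptor]) -> [AppListEntry] {
    return installed
        .filter { $0.supportsSynchronizedContent }        // claim 62
        .sorted { $0.recentUseCount > $1.recentUseCount } // usage criteria (claim 65)
        .map { AppListEntry(identifier: $0.identifier,
                            emphasized: $0.sharedWithParticipants) } // claim 75
}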
84. A method, comprising:
at a computer system in communication with one or more display generating components and one or more cameras:
while the computer system is associated with a respective user account, receiving first data indicating whether a first external computer system meets a first set of criteria, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and
after receiving the first data and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface including a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises:
in accordance with a determination that the real-time communication session includes a real-time video feed, displaying a representation of a field of view of the one or more cameras.
85. The method of claim 84, further comprising:
in response to receiving the first data and in accordance with a determination that the first data indicates that the first external computer system is not in a real-time communication session with a second external computer system, forgoing displaying the respective user interface that includes the user interface object selectable to initiate the process for joining the real-time communication session with the second external computer system.
86. The method of any one of claims 84 to 85, wherein displaying the respective user interface comprising the user interface object selectable to initiate the process for joining the real-time communication session with the second external computer system comprises:
In accordance with a determination that the real-time communication session is provided using a first application, displaying the user interface object as a first user interface object; and
in accordance with a determination that a second application different from the first application is to be used to provide the real-time communication session, displaying the user interface object as a second user interface object different from the first user interface object.
87. The method of any one of claims 84 to 86, wherein displaying the respective user interface comprising the user interface object selectable to initiate the process for joining the real-time communication session with the second external computer system comprises:
in accordance with a determination that the one or more display generating components are in a first state, displaying the user interface object as a notification of an event at the computer system.
88. The method of claim 87, wherein displaying the user interface object as a notification of an event at the computer system comprises:
in accordance with a determination that the first state is an unlocked state, displaying the notification with first information about the real-time communication session; and
in accordance with a determination that the first state is a locked state, displaying the notification without the first information about the real-time communication session.
89. The method of any one of claims 84-88, wherein displaying the respective user interface comprises:
in accordance with a determination that the respective user interface is a first type of user interface, displaying the user interface object in an area of the respective user interface that provides status information for the computer system.
90. The method of any one of claims 84-89, wherein displaying the respective user interface includes:
in accordance with a determination that the respective user interface is a second type of user interface, displaying the user interface object in an area of the respective user interface that includes a plurality of application icons for launching respective applications.
91. The method of any of claims 84 to 90, wherein the representation of the field of view of the one or more cameras is displayed in response to receiving a selection of the user interface object.
92. The method of any of claims 84-91, wherein displaying the respective user interface includes displaying a set of one or more video setting options associated with video settings for the real-time communication session.
93. The method of any one of claims 84 to 92, wherein the representation of the field of view of the one or more cameras comprises a view of a user of the computer system before the computer system joins the real-time communication session with the second external computer system.
94. The method of any one of claims 84-93, wherein the process for joining the real-time communication session with the second external computer system comprises:
in accordance with a determination that a predetermined amount of time has elapsed, automatically joining the real-time communication session with the second external computer system; and
in accordance with a determination that the predetermined amount of time has not elapsed, forgoing joining the real-time communication session with the second external computer system.
95. The method of claim 94, wherein displaying the respective user interface includes displaying a countdown of an amount of time remaining until the predetermined amount of time elapses.
96. The method of any one of claims 84 to 95, wherein the representation of the field of view of the one or more cameras is displayed concurrently with the user interface object selectable to initiate the process for joining the real-time communication session with the second external computer system.
97. The method of any one of claims 84 to 96, wherein the computer system is in communication with one or more input devices, the method further comprising:
receiving, via the one or more input devices, a set of one or more inputs including a selection of the user interface object; and
in response to receiving the set of one or more inputs including the selection of the user interface object, joining the real-time communication session with the second external computer system.
98. The method of any one of claims 84-97, wherein displaying the respective user interface comprises:
in accordance with a determination that the real-time communication session does not include video, forgoing displaying the representation of the field of view of the one or more cameras.
99. The method of any of claims 84-98, wherein the process for joining the real-time communication session with the second external computer system comprises:
joining the real-time communication session with the second external computer system at the computer system; and
causing the first external computer system to terminate the real-time communication session with the second external computer system.
100. The method of claim 99, wherein, after the first external computer system terminates the real-time communication session with the second external computer system, the first external computer system displays a first option selectable to initiate a process for joining the real-time communication session with the second external computer system at the first external computer system.
101. The method of claim 100, wherein the first option, selectable to initiate the process for joining the real-time communication session with the second external computer system at the first external computer system, is displayed in response to selection of a selectable graphical user interface object displayed at the first external computer system.
102. The method of claim 100, wherein the first option is displayed at the first external computer system as part of a first notification that includes the first option selectable to initiate the process for joining the real-time communication session with the second external computer system at the first external computer system.
103. The method of claim 102, wherein the first external computer system stops displaying the first notification after the first notification has been displayed for a threshold amount of time.
104. The method of any one of claims 84 to 103, wherein:
before the computer system joins the real-time communication session with the second external computer system, the second external computer system displays a first user interface with a first video feed for the real-time communication session; and
after the computer system joins the real-time communication session, the second external computer system displays the first user interface with a second video feed for the real-time communication session that is different from the first video feed.
105. The method of any one of claims 84-104, wherein:
when a transition of the real-time communication session from the first external computer system to the computer system meets verification criteria, the second external computer system displays the transition in a first manner;
when the transition of the real-time communication session from the first external computer system to the computer system does not meet the verification criteria, the second external computer system displays the transition in a second manner different from the first manner; and
the verification criteria are met when the first external computer system and the computer system are logged into the same user account.
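Purely as an illustrative aside (not claim language; Device, accountID, and the display strings are hypothetical): the same-account verification check of claim 105 reduces to a comparison that selects between two presentations, as in this minimal Swift sketch.

```swift
/// Hypothetical sketch of claim 105: remote participants see the handoff
/// rendered differently depending on whether both devices are logged into
/// the same user account.
struct Device {
    let accountID: String
}

func transferIsVerified(from old: Device, to new: Device) -> Bool {
    // Verification criteria: both devices share the same user account.
    old.accountID == new.accountID
}

func describeTransfer(from old: Device, to new: Device) -> String {
    transferIsVerified(from: old, to: new)
        ? "Participant switched devices"   // displayed in a first manner
        : "A new device joined the call"   // displayed in a second manner
}
```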
106. The method of any one of claims 84-105, wherein initiating the process for joining the real-time communication session with the second external computer system comprises:
in accordance with a determination that a corresponding audio output device is being used for the real-time communication session when the first external computer system is in the real-time communication session with the second external computer system, establishing a connection with the corresponding audio output device; and
in accordance with a determination that the corresponding audio output device is not being used for the real-time communication session when the first external computer system is in the real-time communication session with the second external computer system, forgoing establishing a connection with the corresponding audio output device.
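Purely as an illustrative aside (not claim language; AudioRoute, inUseForCall, and the connect closure are hypothetical names): claim 106's rule that the audio route follows the call, connecting to a shared output device such as wireless earbuds only if that device was already serving the call, could be sketched in Swift as follows.

```swift
/// Hypothetical sketch of claim 106: if an audio output device was serving
/// the call on the original device, the joining device connects to it;
/// otherwise the connection is forgone and routing is left alone.
struct AudioRoute {
    let deviceID: String
    let inUseForCall: Bool
}

func adoptAudioRoute(_ route: AudioRoute?, connect: (String) -> Void) {
    guard let route, route.inUseForCall else {
        return   // forgo connecting: the device was not serving the call
    }
    connect(route.deviceID)   // carry the audio route over with the call
}
```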
107. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more cameras, the one or more programs comprising instructions for performing the method of any of claims 84-106.
108. A computer system configured to communicate with one or more display generating components and one or more cameras, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 84-106.
109. A computer system configured to communicate with one or more display generating components and one or more cameras, the computer system comprising:
means for performing the method of any one of claims 84-106.
110. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more cameras, the one or more programs comprising instructions for performing the method of any of claims 84-106.
111. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with one or more display generating components and one or more cameras, the one or more programs comprising instructions for:
receiving first data indicating whether a first external computer system meets a first set of criteria when the computer system is associated with a respective user account, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and
after receiving the first data and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises:
in accordance with a determination that the real-time communication session includes a real-time video feed, displaying a representation of a field of view of the one or more cameras.
112. A computer system configured to communicate with one or more display generating components and one or more cameras, the computer system comprising:
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
receiving first data indicating whether a first external computer system meets a first set of criteria when the computer system is associated with a respective user account, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and
after receiving the first data and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises:
in accordance with a determination that the real-time communication session includes a real-time video feed, displaying a representation of a field of view of the one or more cameras.
113. A computer system configured to communicate with one or more display generating components and one or more cameras, the computer system comprising:
means for: receiving first data indicating whether a first external computer system meets a first set of criteria when the computer system is associated with a respective user account, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and
means for: after receiving the first data and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises:
in accordance with a determination that the real-time communication session includes a real-time video feed, displaying a representation of a field of view of the one or more cameras.
114. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with one or more display generating components and one or more cameras, the one or more programs comprising instructions for:
receiving first data indicating whether a first external computer system meets a first set of criteria when the computer system is associated with a respective user account, wherein the first set of criteria is met when the first external computer system is within a threshold distance of the computer system, the first external computer system is associated with the respective user account, and the first external computer system is in a real-time communication session with a second external computer system; and
after receiving the first data and in accordance with a determination that the first data indicates that the first external computer system meets the first set of criteria, displaying, via the one or more display generating components, a respective user interface comprising a user interface object selectable to initiate a process for joining the real-time communication session with the second external computer system, wherein displaying the respective user interface comprises:
in accordance with a determination that the real-time communication session includes a real-time video feed, displaying a representation of a field of view of the one or more cameras.
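Purely as an illustrative aside (not claim language; NearbyCallInfo, JoinUI, joinUI(for:myAccountID:thresholdMeters:), and the 5-meter default are all hypothetical names and values invented for this sketch): the gating logic common to claims 111-114, evaluating the first set of criteria and showing the self camera preview only when the ongoing call includes video, could be sketched in Swift as follows.

```swift
import Foundation

/// Hypothetical sketch of claims 111-114: evaluate the first set of criteria
/// (nearby device, same user account, in a live call) and, only when it is
/// met, build a join interface whose camera preview appears only if the
/// real-time communication session actually includes a video feed.
struct NearbyCallInfo {
    let distanceMeters: Double
    let accountID: String
    let inRTCSession: Bool
    let sessionIncludesVideo: Bool
}

struct JoinUI {
    let showJoinButton: Bool
    let showCameraPreview: Bool
}

func joinUI(for info: NearbyCallInfo,
            myAccountID: String,
            thresholdMeters: Double = 5.0) -> JoinUI? {
    // First set of criteria: within the threshold distance, associated with
    // the same respective user account, and in a real-time session.
    let criteriaMet = info.distanceMeters <= thresholdMeters
        && info.accountID == myAccountID
        && info.inRTCSession
    guard criteriaMet else { return nil }   // no interface when criteria are unmet
    return JoinUI(showJoinButton: true,
                  showCameraPreview: info.sessionIncludesVideo)
}
```

Returning `nil` when the criteria are unmet mirrors the claims' structure, in which the respective user interface is displayed only in accordance with a determination that the first set of criteria is met.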
CN202310520843.3A 2022-05-10 2023-05-10 User interface for managing shared content sessions Pending CN117041416A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310585927.5A CN117041417A (en) 2022-05-10 2023-05-10 User interface for managing shared content sessions

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/340,414 2022-05-10
US18/067,350 US20230370507A1 (en) 2022-05-10 2022-12-16 User interfaces for managing shared-content sessions
US18/067,350 2022-12-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310585927.5A Division CN117041417A (en) 2022-05-10 2023-05-10 User interface for managing shared content sessions

Publications (1)

Publication Number Publication Date
CN117041416A (en) 2023-11-10

Family

ID=88621418

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310585927.5A Pending CN117041417A (en) 2022-05-10 2023-05-10 User interface for managing shared content sessions
CN202310520843.3A Pending CN117041416A (en) 2022-05-10 2023-05-10 User interface for managing shared content sessions

Country Status (1)

Country Link
CN (2) CN117041417A (en)

Also Published As

Publication number Publication date
CN117041417A (en) 2023-11-10

Legal Events

Date Code Title Description
PB01 Publication