CN116868191A - User interface and device settings based on user identification - Google Patents


Info

Publication number
CN116868191A
Authority
CN
China
Prior art keywords
user
computer system
determination
input devices
detecting
Prior art date
Legal status
Pending
Application number
CN202280015964.6A
Other languages
Chinese (zh)
Inventor
A·德多纳托
K·E·S·宝尔利
D·D·达尔甘
G·苏祖基
P·D·安东
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from US17/582,902 external-priority patent/US20220269333A1/en
Application filed by Apple Inc filed Critical Apple Inc
Priority to CN202311225588.6A priority Critical patent/CN117032465A/en
Priority claimed from PCT/US2022/016804 external-priority patent/WO2022178132A1/en
Publication of CN116868191A publication Critical patent/CN116868191A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates generally to user interfaces and device settings for electronic devices including wearable electronic devices based on user identification.

Description

User interface and device settings based on user identification
Cross Reference to Related Applications
The present application claims priority from U.S. provisional patent application 63/151,597, entitled "USER INTERFACES AND DEVICE SETTINGS BASED ON USER IDENTIFICATION," filed on February 19, 2021, and U.S. patent application 17/582,902, entitled "USER INTERFACES AND DEVICE SETTINGS BASED ON USER IDENTIFICATION," filed on January 24, 2022, the contents of each of which are hereby incorporated by reference in their entirety.
Technical Field
The present disclosure relates generally to computer systems that are in communication with a display generation component and optionally one or more input devices and that provide a computer-generated experience, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via a display.
Background
In recent years, the development of computer systems for augmented reality has increased significantly. An example augmented reality environment includes at least some virtual elements that replace or augment the physical world. Input devices (such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch screen displays) for computer systems and other electronic computing devices are used to interact with the virtual/augmented reality environment. Exemplary virtual elements include virtual objects such as digital images, video, text, icons, and control elements (such as buttons and other graphics).
Disclosure of Invention
Some methods and interfaces for interacting with environments (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) that include at least some virtual elements are cumbersome, inefficient, and limited. For example, systems that provide insufficient feedback for performing actions associated with virtual objects, systems that require a series of inputs to achieve a desired result in an augmented reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone create a significant cognitive burden on the user and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, wasting energy. This latter consideration is particularly important in battery-powered devices.
Accordingly, there is a need for a computer system with improved methods and interfaces to provide a user with a computer-generated experience, thereby making user interactions with the computer system more efficient and intuitive for the user. Such methods and interfaces optionally complement or replace conventional methods for providing an augmented reality experience to a user. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user by helping the user understand the association between the inputs provided and the response of the device to those inputs, thereby forming a more efficient human-machine interface.
The disclosed system reduces or eliminates the above-described drawbacks and other problems associated with user interfaces for computer systems that are in communication with a display generation component and optionally one or more input devices. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device such as a watch or a head-mounted device). In some embodiments, the computer system has a touch pad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the computer system has one or more eye tracking components. In some embodiments, the computer system has one or more hand tracking components. In some embodiments, the computer system has, in addition to the display generation component, one or more output devices including one or more haptic output generators and one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing a plurality of functions. In some embodiments, the user interacts with the GUI through stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user's eyes and hands in space relative to the GUI or the user's body (as captured by cameras and other motion sensors), and voice inputs (as captured by one or more audio input devices). In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephone calling, video conferencing, e-mailing, instant messaging, workout support, digital photography, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
There is a need for an electronic device with improved methods and interfaces to interact with a three-dimensional environment. Such methods and interfaces may supplement or replace conventional methods for interacting with a three-dimensional environment. Such methods and interfaces reduce the amount, degree, and/or nature of input from a user and result in a more efficient human-machine interface.
There is a need for an electronic device with improved methods and interfaces for automatically displaying one or more user interfaces and/or automatically applying one or more device settings based on an identification of a user (e.g., automatic identification). Such methods and interfaces may supplement or replace conventional methods for interacting with a computer system. Such methods and interfaces reduce the amount, degree, and/or nature of input from a user and result in a more efficient human-machine interface.
It is noted that the various embodiments described above may be combined with any of the other embodiments described herein. The features and advantages described in this specification are not all-inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
FIG. 1 illustrates an operating environment for a computer system for providing an extended reality (XR) experience, according to some embodiments.
FIG. 2 is a block diagram illustrating a controller of a computer system configured to manage and coordinate a user's XR experience, according to some embodiments.
FIG. 3 is a block diagram illustrating a display generation component of a computer system configured to provide a visual component of an XR experience to a user, according to some embodiments.
FIG. 4 illustrates a hand tracking unit of a computer system configured to capture gesture inputs of a user, according to some embodiments.
Fig. 5 illustrates an eye tracking unit of a computer system configured to capture gaze input of a user, according to some embodiments.
Fig. 6 is a flow diagram illustrating a glint-assisted gaze tracking pipeline in accordance with some embodiments.
Fig. 7A-7H illustrate an exemplary user interface for automatically applying one or more user settings based on an identification of a user, according to some embodiments.
FIG. 8 is a flowchart illustrating an exemplary process for automatically applying one or more user settings based on an identification of a user, according to some embodiments.
Fig. 9A-9F illustrate an exemplary user interface for automatically applying one or more device calibration settings based on an identification of a user, according to some embodiments.
Figs. 10A-10B are flowcharts illustrating an exemplary process for automatically applying one or more device calibration settings based on an identification of a user, according to some embodiments.
Figs. 11A-11F illustrate an exemplary user interface for automatically applying and displaying a user avatar based on the user's identification, according to some embodiments.
Figs. 12A-12B are flowcharts illustrating an exemplary process for automatically applying and displaying a user avatar based on the user's identification, according to some embodiments.
Figs. 13A-13K illustrate exemplary user interfaces for displaying content based on handoff criteria, according to some embodiments.
Figs. 14A-14B are flowcharts illustrating an exemplary process for displaying content based on handoff criteria, according to some embodiments.
Detailed Description
According to some embodiments, the present disclosure relates to user interfaces for providing an extended reality (XR) experience to a user.
The systems, methods, and GUIs described herein improve user interface interactions with virtual/augmented reality environments in a variety of ways.
In some embodiments, the computer system automatically applies and/or enables one or more user settings based on the identity of the user. The computer system is in communication with the display generation component and one or more input devices. The computer system detects that at least a portion of the computer system has been placed on the body of the respective user. In response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first registered user, the computer system enables the computer system to be used with one or more settings associated with a first user account associated with the first registered user. In response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first registered user, the computer system forgoes enabling the computer system to be used with the one or more settings associated with the first user account associated with the first registered user.
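As an illustration of the conditional logic described above, consider the following Swift sketch. It is not taken from the patent or from any existing API; the type names, the string-keyed settings dictionary, and the injected biometric matcher are assumptions made only to make the two determinations concrete.

    import Foundation

    struct RegisteredUser {
        let accountID: String
        let biometricTemplate: Data          // enrolled biometric signature (hypothetical representation)
        let settings: [String: String]       // the user account's personalized settings
    }

    enum SessionState {
        case personalized(settings: [String: String])   // settings enabled for the first registered user
        case notPersonalized                             // settings not enabled (forgone)
    }

    // Compares freshly captured biometric data against the first registered user's
    // template and either enables that user's settings or forgoes doing so.
    func sessionState(capturedBiometric: Data,
                      firstRegisteredUser: RegisteredUser,
                      matches: (Data, Data) -> Bool) -> SessionState {
        if matches(capturedBiometric, firstRegisteredUser.biometricTemplate) {
            return .personalized(settings: firstRegisteredUser.settings)
        } else {
            return .notPersonalized
        }
    }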
In some embodiments, the computer system automatically applies device calibration settings based on the identity of the user. The computer system is in communication with the display generation component and one or more input devices. The computer system detects that at least a portion of the computer system has been placed on the body of the respective user. After detecting that at least a portion of the computer system has been placed on the body of the respective user, the computer system detects an input from the respective user based on movement or position of at least a portion of the body of the respective user. In response to detecting the input from the respective user, the computer system responds to the input from the respective user. In accordance with a determination that the respective user is a first user that has previously registered with the computer system, the computer system generates a response to the input based on the movement or position of the portion of the respective user's body and a first set of device calibration settings specific to the first user. In accordance with a determination that the respective user is not the first user, the computer system generates a response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings specific to the first user.
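A minimal sketch of how per-user calibration might feed into the response, assuming for illustration that calibration is a simple two-axis gaze offset; the model and names are invented for this example and are not the calibration scheme described in the patent.

    struct GazeCalibration {
        var horizontalOffsetDegrees: Double
        var verticalOffsetDegrees: Double
    }

    struct GazeSample {
        var horizontalDegrees: Double
        var verticalDegrees: Double
    }

    // Generates a response to a gaze input: the first user's calibration is applied
    // only when the wearer has been determined to be that user; otherwise neutral
    // (default) calibration values are used.
    func calibratedResponse(to sample: GazeSample,
                            wearerIsFirstUser: Bool,
                            firstUserCalibration: GazeCalibration) -> GazeSample {
        let calibration = wearerIsFirstUser
            ? firstUserCalibration
            : GazeCalibration(horizontalOffsetDegrees: 0, verticalOffsetDegrees: 0)
        return GazeSample(
            horizontalDegrees: sample.horizontalDegrees + calibration.horizontalOffsetDegrees,
            verticalDegrees: sample.verticalDegrees + calibration.verticalOffsetDegrees)
    }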
In some embodiments, the first computer system displays a digital avatar based on the user's identification. The first computer system is in communication with the display generation component and one or more input devices. The first computer system detects a request to display an avatar of a user of the respective computer system. In response to detecting a request to display an avatar, the first computer system displays an avatar of a user of the respective computer system. In accordance with a determination that the user of the respective computer system is a registered user of the respective computer system, the first computer system displays an avatar having an appearance selected by the user of the respective computer system, wherein the avatar moves based on movements of the user detected by one or more sensors of the respective computer system. In accordance with a determination that the user of the respective computer system is not a registered user of the respective computer system, the first computer system displays an avatar having a placeholder appearance that does not represent the appearance of the user of the respective computer system, wherein the avatar moves based on movements of the user detected by one or more sensors of the respective computer system.
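The appearance selection described above reduces to a small branch; the following sketch assumes an asset name chosen by the registered user and a generic placeholder case, both invented for the example.

    enum AvatarAppearance {
        case personalized(assetName: String)   // appearance previously selected by the registered user
        case placeholder                       // generic appearance that does not represent the current user
    }

    func avatarAppearance(userIsRegistered: Bool,
                          selectedAssetName: String?) -> AvatarAppearance {
        if userIsRegistered, let assetName = selectedAssetName {
            return .personalized(assetName: assetName)
        }
        return .placeholder
    }
    // In both branches the displayed avatar is still animated from the same
    // sensor-detected movements of the current user; only its appearance differs.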
In some embodiments, the computer system displays the content based on the identity of the user and based on the handover criteria. The computer system is in communication with the display generation component and one or more input devices. When the computer system is placed on the body of the first user, the computer system displays a first user interface corresponding to the first application via the display generation component, wherein the first user interface is displayed in a first mode having allowable access to a plurality of features associated with the first user. When the first user interface is displayed in a first mode having allowable access to a plurality of features associated with the first user, the computer system detects, via one or more input devices, that the computer system has been removed from the body of the first user. After detecting that the computer system has been removed from the body of the first user, the computer system detects, via one or more input devices, that the computer system has been placed on the body of the respective user. In response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first user, the computer system displays, via the display generation component, the first user interface in a first mode having allowable access to a plurality of features associated with the first user. In response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and has met a set of handoff criteria, the computer system displays, via the display generating component, the first user interface in a second mode having restricted access to one or more of the plurality of features associated with the first user.
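A compact sketch of the redisplay decision, assuming a boolean summary of the handoff criteria; the third branch (what happens when the wearer is neither the first user nor a permitted handoff recipient) is not spelled out in the summary above, so it is marked as an assumption.

    enum InterfaceMode {
        case fullAccess        // all features associated with the first user
        case restrictedAccess  // access to one or more of those features is limited
        case noAccess          // assumption: interface not redisplayed for an unrecognized, non-handoff wearer
    }

    func modeAfterPlacement(biometricMatchesFirstUser: Bool,
                            handoffCriteriaMet: Bool) -> InterfaceMode {
        if biometricMatchesFirstUser {
            return .fullAccess
        } else if handoffCriteriaMet {
            return .restrictedAccess
        } else {
            return .noAccess
        }
    }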
Figs. 1-6 provide a description of an exemplary computer system for providing an XR experience to a user. Figs. 7A-7H illustrate an exemplary user interface for automatically applying one or more user settings based on an identification of a user. Fig. 8 is a flow chart illustrating a method of automatically applying one or more user settings based on an identification of a user, according to some embodiments. The user interfaces in Figs. 7A-7H are used to illustrate the processes described below, including the process in Fig. 8. Figs. 9A-9F illustrate an exemplary user interface for automatically applying one or more device calibration settings based on an identification of a user. Figs. 10A-10B are flowcharts illustrating methods of automatically applying one or more device calibration settings based on user identification, according to some embodiments. The user interfaces in Figs. 9A-9F are used to illustrate the processes described below, including the processes in Figs. 10A-10B. Figs. 11A-11F illustrate an exemplary user interface for automatically applying and displaying a user avatar based on the user's identification. Figs. 12A-12B are flowcharts illustrating methods of automatically applying and displaying a user avatar based on a user's identification, according to some embodiments. The user interfaces in Figs. 11A-11F are used to illustrate the processes described below, including the processes in Figs. 12A-12B. Figs. 13A-13K illustrate exemplary user interfaces for displaying content based on handoff criteria. Figs. 14A-14B are flowcharts illustrating a method of displaying content based on handoff criteria, according to some embodiments. The user interfaces in Figs. 13A-13K are used to illustrate the processes described below, including the processes in Figs. 14A-14B.
In some embodiments, as shown in fig. 1, an XR experience is provided to a user via an operating environment 100 comprising a computer system 101. The computer system 101 includes a controller 110 (e.g., a processor of a portable electronic device or a remote server), a display generation component 120 (e.g., a Head Mounted Device (HMD), a display, a projector, a touch screen, etc.), one or more input devices 125 (e.g., an eye tracking device 130, a hand tracking device 140, other input devices 150), one or more output devices 155 (e.g., a speaker 160, a haptic output generator 170, and other output devices 180), one or more sensors 190 (e.g., an image sensor, a light sensor, a depth sensor, a haptic sensor, an orientation sensor, a proximity sensor, a temperature sensor, a position sensor, a motion sensor, a speed sensor, etc.), and optionally one or more peripheral devices 195 (e.g., a household appliance, a wearable device, etc.). In some implementations, one or more of the input device 125, the output device 155, the sensor 190, and the peripheral device 195 are integrated with the display generation component 120 (e.g., in a head-mounted device or a handheld device).
In describing an XR experience, various terms are used to differentially refer to several related but distinct environments that a user may sense and/or interact with (e.g., interact with inputs detected by a computer system 101 generating the XR experience, which cause the computer system generating the XR experience to produce audio, visual, and/or tactile feedback corresponding to the various inputs provided to computer system 101). The following is a subset of these terms:
physical environment: a physical environment refers to a physical world in which people can sense and/or interact without the assistance of an electronic system. Physical environments such as physical parks include physical objects such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with a physical environment, such as by visual, tactile, auditory, gustatory, and olfactory.
Extended reality: In contrast, an extended reality (XR) environment refers to a fully or partially simulated environment that people sense and/or interact with via an electronic system. In XR, a subset of the person's physical movements, or representations thereof, is tracked, and in response, one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner that comports with at least one law of physics. For example, an XR system may detect a person's head rotation and, in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to the characteristics of virtual objects in the XR environment may be made in response to representations of physical motions (e.g., voice commands). A person may sense and/or interact with XR objects using any of their senses, including sight, hearing, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. As another example, an audio object may enable audio transparency that selectively introduces ambient sounds from the physical environment with or without computer-generated audio. In some XR environments, a person may sense and/or interact with only audio objects.
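As a toy illustration of the head-rotation example above, the sketch below rotates the rendered view direction by the same angle as the tracked head, so virtual content appears fixed in the surrounding space; the single-axis model and the names are assumptions made for the example.

    struct ViewState {
        var viewYawRadians: Double   // direction the rendered camera faces about the vertical axis
    }

    // Detecting a head turn of deltaYaw radians turns the rendered camera with it,
    // analogous to how the view of a physical scene changes as the head turns.
    func updated(_ view: ViewState, forHeadYawChange deltaYaw: Double) -> ViewState {
        ViewState(viewYawRadians: view.viewYawRadians + deltaYaw)
    }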
Examples of XR include virtual reality and mixed reality.
Virtual reality: a Virtual Reality (VR) environment refers to a simulated environment designed to be based entirely on computer-generated sensory input for one or more senses. The VR environment includes a plurality of virtual objects that a person can sense and/or interact with. For example, computer-generated images of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the presence of the person within the computer-generated environment and/or through a simulation of a subset of the physical movements of the person within the computer-generated environment.
Mixed reality: in contrast to VR environments designed to be based entirely on computer-generated sensory input, a Mixed Reality (MR) environment refers to a simulated environment designed to introduce sensory input from a physical environment or a representation thereof in addition to including computer-generated sensory input (e.g., virtual objects). On a virtual continuum, a mixed reality environment is any condition between, but not including, a full physical environment as one end and a virtual reality environment as the other end. In some MR environments, the computer-generated sensory input may be responsive to changes in sensory input from the physical environment. In addition, some electronic systems for rendering MR environments may track the position and/or orientation relative to the physical environment to enable virtual objects to interact with real objects (i.e., physical objects or representations thereof from the physical environment). For example, the system may cause the motion such that the virtual tree appears to be stationary relative to the physical ground.
Examples of mixed reality include augmented reality and augmented virtuality.
Augmented reality: an Augmented Reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment or a representation of a physical environment. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present the virtual object on a transparent or semi-transparent display such that a person perceives the virtual object superimposed over the physical environment with the system. Alternatively, the system may have an opaque display and one or more imaging sensors that capture images or videos of the physical environment, which are representations of the physical environment. The system combines the image or video with the virtual object and presents the composition on an opaque display. A person utilizes the system to indirectly view the physical environment via an image or video of the physical environment and perceive a virtual object superimposed over the physical environment. As used herein, video of a physical environment displayed on an opaque display is referred to as "pass-through video," meaning that the system captures images of the physical environment using one or more image sensors and uses those images when rendering an AR environment on the opaque display. Further alternatively, the system may have a projection system that projects the virtual object into the physical environment, for example as a hologram or on a physical surface, such that a person perceives the virtual object superimposed on top of the physical environment with the system. An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing a passthrough video, the system may transform one or more sensor images to apply a selected viewing angle (e.g., a viewpoint) that is different from the viewing angle captured by the imaging sensor. As another example, the representation of the physical environment may be transformed by graphically modifying (e.g., magnifying) portions thereof such that the modified portions may be representative but not real versions of the original captured image. For another example, the representation of the physical environment may be transformed by graphically eliminating or blurring portions thereof.
Enhanced virtualization: enhanced virtual (AV) environment refers to a simulated environment in which a virtual environment or computer-generated environment incorporates one or more sensory inputs from a physical environment. The sensory input may be a representation of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but the face of a person is realistically reproduced from an image taken of a physical person. As another example, the virtual object may take the shape or color of a physical object imaged by one or more imaging sensors. For another example, the virtual object may employ shadows that conform to the positioning of the sun in the physical environment.
Hardware: there are many different types of electronic systems that enable a person to sense and/or interact with various XR environments. Examples include head-mounted systems, projection-based systems, head-up displays (HUDs), vehicle windshields integrated with display capabilities, windows integrated with display capabilities, displays formed as lenses designed for placement on a human eye (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smart phones, tablet computers, and desktop/laptop computers. The head-mounted system may have one or more speakers and an integrated opaque display. Alternatively, the head-mounted system may be configured to accept an external opaque display (e.g., a smart phone). The head-mounted system may incorporate one or more imaging sensors for capturing images or video of the physical environment, and/or one or more microphones for capturing audio of the physical environment. The head-mounted system may have a transparent or translucent display instead of an opaque display. The transparent or translucent display may have a medium through which light representing an image is directed to the eyes of a person. The display may utilize digital light projection, OLED, LED, uLED, liquid crystal on silicon, laser scanning light sources, or any combination of these techniques. The medium may be an optical waveguide, a holographic medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to selectively become opaque. Projection-based systems may employ retinal projection techniques that project a graphical image onto a person's retina. The projection system may also be configured to project the virtual object into the physical environment, for example as a hologram or on a physical surface.
In some embodiments, the controller 110 is configured to manage and coordinate the XR experience of the user. In some embodiments, controller 110 includes suitable combinations of software, firmware, and/or hardware. The controller 110 is described in more detail below with reference to fig. 2. In some implementations, the controller 110 is a computing device that is in a local or remote location relative to the scene 105 (e.g., physical environment). For example, the controller 110 is a local server located within the scene 105. As another example, the controller 110 is a remote server (e.g., cloud server, central server, etc.) located outside of the scene 105. In some implementations, the controller 110 is communicatively coupled with the display generation component 120 (e.g., HMD, display, projector, touch-screen, etc.) via one or more wired or wireless communication channels 144 (e.g., bluetooth, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.). In another example, the controller 110 is included within a housing (e.g., a physical enclosure) of the display generation component 120 (e.g., an HMD or portable electronic device including a display and one or more processors, etc.), one or more of the input devices 125, one or more of the output devices 155, one or more of the sensors 190, and/or one or more of the peripheral devices 195, or shares the same physical housing or support structure with one or more of the above.
In some embodiments, display generation component 120 is configured to provide an XR experience (e.g., at least a visual component of the XR experience) to a user. In some embodiments, display generation component 120 includes suitable combinations of software, firmware, and/or hardware. The display generation component 120 is described in more detail below with respect to fig. 3. In some embodiments, the functionality of the controller 110 is provided by and/or combined with the display generation component 120.
According to some embodiments, display generation component 120 provides an XR experience to a user when the user is virtually and/or physically present within scene 105.
In some embodiments, the display generation component is worn on a portion of the user's body (e.g., on his/her head, on his/her hand, etc.). As such, display generation component 120 includes one or more XR displays provided for displaying XR content. For example, in various embodiments, the display generation component 120 encloses a field of view of a user. In some embodiments, display generation component 120 is a handheld device (such as a smart phone or tablet computer) configured to present XR content, and the user holds the device with a display facing the user's field of view and a camera facing scene 105. In some embodiments, the handheld device is optionally placed within a housing that is worn on the head of the user. In some embodiments, the handheld device is optionally placed on a support (e.g., tripod) in front of the user. In some embodiments, display generation component 120 is an XR chamber, enclosure, or room configured to present XR content, wherein the user does not wear or hold display generation component 120. Many of the user interfaces described with reference to one type of hardware for displaying XR content (e.g., a handheld device or a device on a tripod) may be implemented on another type of hardware for displaying XR content (e.g., an HMD or other wearable computing device). For example, a user interface showing interactions with XR content triggered based on interactions occurring in a space in front of a handheld device or a tripod-mounted device may similarly be implemented with an HMD, where the interactions occur in the space in front of the HMD and responses to the XR content are displayed via the HMD. Similarly, a user interface showing interaction with XR content triggered based on movement of a handheld device or tripod-mounted device relative to a physical environment (e.g., a scene 105 or a portion of a user's body (e.g., a user's eye, head, or hand)) may similarly be implemented with an HMD, where the movement is caused by movement of the HMD relative to the physical environment (e.g., the scene 105 or a portion of the user's body (e.g., a user's eye, head, or hand)).
While relevant features of the operating environment 100 are shown in fig. 1, those of ordinary skill in the art will recognize from this disclosure that various other features are not shown for the sake of brevity and so as not to obscure more relevant aspects of the exemplary embodiments disclosed herein.
Fig. 2 is a block diagram of an example of a controller 110 according to some embodiments. While certain specific features are shown, those of ordinary skill in the art will appreciate from the disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the embodiments disclosed herein. To this end, as a non-limiting example, in some embodiments, the controller 110 includes one or more processing units 202 (e.g., microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), graphics processing units (GPUs), central processing units (CPUs), processing cores, etc.), one or more input/output (I/O) devices 206, one or more communication interfaces 208 (e.g., Universal Serial Bus (USB), IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Global Positioning System (GPS), infrared (IR), Bluetooth, ZigBee, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 210, memory 220, and one or more communication buses 204 for interconnecting these components and various other components.
In some embodiments, one or more of the communication buses 204 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices 206 include at least one of a keyboard, a mouse, a touch pad, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and the like.
Memory 220 includes high-speed random access memory such as Dynamic Random Access Memory (DRAM), static Random Access Memory (SRAM), double data rate random access memory (DDR RAM), or other random access solid state memory devices. In some embodiments, memory 220 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 220 optionally includes one or more storage devices located remotely from the one or more processing units 202. Memory 220 includes a non-transitory computer-readable storage medium. In some embodiments, memory 220 or a non-transitory computer readable storage medium of memory 220 stores the following programs, modules, and data structures, or a subset thereof, including optional operating system 230 and XR experience module 240.
Operating system 230 includes instructions for handling various basic system services and for performing hardware-related tasks. In some embodiments, XR experience module 240 is configured to manage and coordinate single or multiple XR experiences of one or more users (e.g., single XR experiences of one or more users, or multiple XR experiences of a respective group of one or more users). To this end, in various embodiments, the XR experience module 240 includes a data acquisition unit 242, a tracking unit 244, a coordination unit 246, and a data transmission unit 248.
In some embodiments, the data acquisition unit 242 is configured to acquire data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the display generation component 120 of fig. 1, and optionally from one or more of the input device 125, the output device 155, the sensor 190, and/or the peripheral device 195. For this purpose, in various embodiments, the data acquisition unit 242 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics.
In some embodiments, tracking unit 244 is configured to map scene 105 and track at least the location/position of display generation component 120 relative to scene 105 of fig. 1, and optionally the location of one or more of input device 125, output device 155, sensor 190, and/or peripheral device 195. For this purpose, in various embodiments, tracking unit 244 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics. In some embodiments, tracking unit 244 includes a hand tracking unit 243 and/or an eye tracking unit 245. In some embodiments, the hand tracking unit 243 is configured to track the location/position of one or more portions of the user's hand, and/or the motion of one or more portions of the user's hand relative to the scene 105 of fig. 1, relative to the display generating component 120, and/or relative to a coordinate system defined relative to the user's hand. The hand tracking unit 243 is described in more detail below with respect to fig. 4. In some implementations, the eye tracking unit 245 is configured to track the positioning or movement of the user gaze (or more generally, the user's eyes, face, or head) relative to the scene 105 (e.g., relative to the physical environment and/or relative to the user (e.g., the user's hand)) or relative to XR content displayed via the display generation component 120. Eye tracking unit 245 is described in more detail below with respect to fig. 5.
In some embodiments, coordination unit 246 is configured to manage and coordinate XR experiences presented to a user by display generation component 120, and optionally by one or more of output device 155 and/or peripheral device 195. For this purpose, in various embodiments, coordination unit 246 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics.
In some embodiments, the data transmission unit 248 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the display generation component 120, and optionally to one or more of the input device 125, the output device 155, the sensor 190, and/or the peripheral device 195. For this purpose, in various embodiments, the data transmission unit 248 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics.
While the data acquisition unit 242, tracking unit 244 (e.g., including the hand tracking unit 243 and eye tracking unit 245), coordination unit 246, and data transmission unit 248 are shown as residing on a single device (e.g., controller 110), it should be understood that in other embodiments, any combination of the data acquisition unit 242, tracking unit 244 (e.g., including the hand tracking unit 243 and eye tracking unit 245), coordination unit 246, and data transmission unit 248 may be located in separate computing devices.
Furthermore, FIG. 2 is a functional description of various features that may be present in a particular implementation, as opposed to a schematic of the embodiments described herein. As will be appreciated by one of ordinary skill in the art, the individually displayed items may be combined and some items may be separated. For example, some of the functional blocks shown separately in fig. 2 may be implemented in a single block, and the various functions of a single functional block may be implemented by one or more functional blocks in various embodiments. The actual number of modules and the division of particular functions, and how features are allocated among them, will vary depending upon the particular implementation, and in some embodiments, depend in part on the particular combination of hardware, software, and/or firmware selected for a particular implementation.
Fig. 3 is a block diagram of an example of display generation component 120 according to some embodiments. While certain specific features are shown, those of ordinary skill in the art will appreciate from the disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the embodiments disclosed herein. For this purpose, as a non-limiting example, in some embodiments, the display generation component 120 (e.g., HMD) includes one or more processing units 302 (e.g., microprocessors, ASIC, FPGA, GPU, CPU, processing cores, etc.), one or more input/output (I/O) devices and sensors 306, one or more communication interfaces 308 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 310, one or more XR displays 312, one or more optional internally and/or externally facing image sensors 314, memory 320, and one or more communication buses 304 for interconnecting these components and various other components.
In some embodiments, one or more communication buses 304 include circuitry for interconnecting and controlling communications between various system components. In some embodiments, the one or more I/O devices and sensors 306 include an Inertial Measurement Unit (IMU), an accelerometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptic engine, and/or one or more depth sensors (e.g., structured light, time of flight, etc.), and/or the like.
In some embodiments, one or more XR displays 312 are configured to provide an XR experience to a user. In some embodiments, one or more XR displays 312 correspond to holographic, digital Light Processing (DLP), liquid Crystal Displays (LCD), liquid crystal on silicon (LCoS), organic light emitting field effect transistors (OLET), organic Light Emitting Diodes (OLED), surface conduction electron emitting displays (SED), field Emission Displays (FED), quantum dot light emitting diodes (QD-LED), microelectromechanical systems (MEMS), and/or similar display types. In some embodiments, one or more XR displays 312 correspond to diffractive, reflective, polarizing, holographic, etc. waveguide displays. For example, the display generation component 120 (e.g., HMD) includes a single XR display. As another example, display generation component 120 includes an XR display for each eye of the user. In some embodiments, one or more XR displays 312 are capable of presenting MR and VR content. In some implementations, one or more XR displays 312 can present MR or VR content.
In some embodiments, the one or more image sensors 314 are configured to acquire image data corresponding to at least a portion of the user's face including the user's eyes (and may be referred to as an eye tracking camera). In some embodiments, the one or more image sensors 314 are configured to acquire image data corresponding to at least a portion of the user's hand and optionally the user's arm (and may be referred to as a hand tracking camera). In some implementations, the one or more image sensors 314 are configured to face forward in order to acquire image data corresponding to a scene that a user would see in the absence of the display generating component 120 (e.g., HMD) (and may be referred to as a scene camera). The one or more optional image sensors 314 may include one or more RGB cameras (e.g., with Complementary Metal Oxide Semiconductor (CMOS) image sensors or Charge Coupled Device (CCD) image sensors), one or more Infrared (IR) cameras, and/or one or more event-based cameras, etc.
Memory 320 includes high-speed random access memory such as DRAM, SRAM, DDR RAM or other random access solid state memory devices. In some embodiments, memory 320 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 320 optionally includes one or more storage devices located remotely from the one or more processing units 302. Memory 320 includes a non-transitory computer-readable storage medium. In some embodiments, memory 320 or a non-transitory computer readable storage medium of memory 320 stores the following programs, modules, and data structures, or a subset thereof, including optional operating system 330 and XR presentation module 340.
Operating system 330 includes processes for handling various basic system services and for performing hardware-related tasks. In some embodiments, XR presentation module 340 is configured to present XR content to a user via one or more XR displays 312. For this purpose, in various embodiments, the XR presentation module 340 includes a data acquisition unit 342, an XR presentation unit 344, an XR map generation unit 346, and a data transmission unit 348.
In some embodiments, the data acquisition unit 342 is configured to at least acquire data (e.g., presentation data, interaction data, sensor data, location data, etc.) from the controller 110 of fig. 1. For this purpose, in various embodiments, the data acquisition unit 342 includes instructions and/or logic for instructions and heuristics and metadata for heuristics.
In some embodiments, XR presentation unit 344 is configured to present XR content via one or more XR displays 312. For this purpose, in various embodiments, XR presentation unit 344 includes instructions and/or logic for instructions and heuristics and metadata for heuristics.
In some embodiments, XR map generation unit 346 is configured to generate an XR map (e.g., a 3D map of a mixed reality scene or a map of a physical environment in which computer-generated objects may be placed to generate augmented reality) based on the media content data. For this purpose, in various embodiments, XR map generation unit 346 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some embodiments, the data transmission unit 348 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the controller 110, and optionally one or more of the input device 125, the output device 155, the sensor 190, and/or the peripheral device 195. For this purpose, in various embodiments, the data transmission unit 348 includes instructions and/or logic for instructions and heuristics and metadata for heuristics.
Although the data acquisition unit 342, the XR presentation unit 344, the XR map generation unit 346, and the data transmission unit 348 are shown as residing on a single device (e.g., the display generation component 120 of fig. 1), it should be understood that in other embodiments, any combination of the data acquisition unit 342, the XR presentation unit 344, the XR map generation unit 346, and the data transmission unit 348 may be located in separate computing devices.
Furthermore, fig. 3 is used more as a functional description of various features that may be present in a particular embodiment, as opposed to a schematic of the embodiments described herein. As will be appreciated by one of ordinary skill in the art, the individually displayed items may be combined and some items may be separated. For example, some of the functional blocks shown separately in fig. 3 may be implemented in a single block, and the various functions of a single functional block may be implemented by one or more functional blocks in various embodiments. The actual number of modules and the division of particular functions, and how features are allocated among them, will vary depending upon the particular implementation, and in some embodiments, depend in part on the particular combination of hardware, software, and/or firmware selected for a particular implementation.
Fig. 4 is a schematic illustration of an exemplary embodiment of a hand tracking device 140. In some embodiments, the hand tracking device 140 (fig. 1) is controlled by the hand tracking unit 243 (fig. 2) to track the position/location of one or more portions of the user's hand, and/or movement of one or more portions of the user's hand relative to the scene 105 of fig. 1 (e.g., relative to a portion of the physical environment surrounding the user, relative to the display generating component 120, or relative to a portion of the user (e.g., the user's face, eyes, or head), and/or relative to a coordinate system defined relative to the user's hand). In some implementations, the hand tracking device 140 is part of the display generation component 120 (e.g., embedded in or attached to a head-mounted device). In some embodiments, the hand tracking device 140 is separate from the display generation component 120 (e.g., in a separate housing or attached to a separate physical support structure).
In some implementations, the hand tracking device 140 includes an image sensor 404 (e.g., one or more IR cameras, 3D cameras, depth cameras, and/or color cameras, etc.) that captures three-dimensional scene information including at least a human user's hand 406. The image sensor 404 captures the hand image with sufficient resolution to enable the fingers and their respective locations to be distinguished. The image sensor 404 typically captures images of other parts of the user's body, and possibly also all parts of the body, and may have a zoom capability or a dedicated sensor with increased magnification to capture images of the hand with a desired resolution. In some implementations, the image sensor 404 also captures 2D color video images of the hand 406 and other elements of the scene. In some implementations, the image sensor 404 is used in conjunction with other image sensors to capture the physical environment of the scene 105, or as an image sensor that captures the physical environment of the scene 105. In some embodiments, the image sensor 404, or a portion thereof, is positioned relative to the user or the user's environment in a manner that uses the field of view of the image sensor to define an interaction space in which hand movements captured by the image sensor are considered input to the controller 110.
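One way to picture the interaction space mentioned above is as a containment test against the sensor's field of view; the half-angles, depth bounds, and names in this Swift sketch are arbitrary illustrative values rather than parameters from the patent.

    import Foundation

    struct HandPosition3D {
        var x: Double   // meters, camera-centered frame
        var y: Double
        var z: Double   // distance in front of the sensor
    }

    // Returns true when a tracked hand position lies inside a simple angular cone
    // and depth range, i.e., when its movements would be treated as input.
    func isInsideInteractionSpace(_ p: HandPosition3D,
                                  horizontalHalfAngle: Double = .pi / 6,
                                  verticalHalfAngle: Double = .pi / 8,
                                  minDepth: Double = 0.2,
                                  maxDepth: Double = 1.2) -> Bool {
        guard p.z > minDepth, p.z < maxDepth else { return false }
        let horizontalAngle = atan2(abs(p.x), p.z)
        let verticalAngle = atan2(abs(p.y), p.z)
        return horizontalAngle <= horizontalHalfAngle && verticalAngle <= verticalHalfAngle
    }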
In some embodiments, the image sensor 404 outputs a sequence of frames containing 3D mapping data (and, in addition, possible color image data) to the controller 110, which extracts high-level information from the mapping data. This high-level information is typically provided via an Application Program Interface (API) to an application program running on the controller, which drives the display generation component 120 accordingly. For example, a user may interact with software running on the controller 110 by moving his hand 406 and changing his hand posture.
In some implementations, the image sensor 404 projects a speckle pattern onto a scene that includes the hand 406 and captures an image of the projected pattern. In some implementations, the controller 110 calculates 3D coordinates of points in the scene (including points on the surface of the user's hand) by triangulation based on lateral offsets of the blobs in the pattern. This approach is advantageous because it does not require the user to hold or wear any kind of beacon, sensor, or other marker. The method gives the depth coordinates of points in the scene relative to a predetermined reference plane at a specific distance from the image sensor 404. In this disclosure, it is assumed that the image sensor 404 defines an orthogonal set of x, y, and z axes such that the depth coordinates of points in the scene correspond to the z-component measured by the image sensor. Alternatively, the hand tracking device 140 may use other 3D mapping methods, such as stereoscopic imaging or time-of-flight measurements, based on single or multiple cameras or other types of sensors.
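For intuition, the standard triangulation relation below shows how a lateral (disparity) offset maps to depth; it is the textbook structured-light/stereo formula, offered only as an illustration of the principle and not as the patent's specific computation, and the numbers in the usage example are made up.

    // Depth in meters from the projector-camera baseline, the focal length in pixels,
    // and the observed lateral shift of a projected spot in pixels.
    func depthFromDisparity(baselineMeters: Double,
                            focalLengthPixels: Double,
                            disparityPixels: Double) -> Double? {
        guard disparityPixels > 0 else { return nil }   // a zero shift carries no depth information
        return (baselineMeters * focalLengthPixels) / disparityPixels
    }

    // Example: a 7.5 cm baseline, a 600-pixel focal length, and a 30-pixel shift give 1.5 m.
    let depth = depthFromDisparity(baselineMeters: 0.075, focalLengthPixels: 600, disparityPixels: 30)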
In some implementations, the hand tracking device 140 captures and processes a time series containing a depth map of the user's hand as the user moves his hand (e.g., the entire hand or one or more fingers). Software running on the image sensor 404 and/or a processor in the controller 110 processes the 3D mapping data to extract image block descriptors of the hand in these depth maps. The software may match these descriptors with image block descriptors stored in database 408 based on previous learning processes in order to estimate the pose of the hand in each frame. The pose typically includes the 3D position of the user's hand joints and finger tips.
The software may also analyze the trajectory of the hand and/or finger over multiple frames in the sequence to identify gestures. The pose estimation functions described herein may be interleaved with motion tracking functions, such that image block-based pose estimation is performed only once every two (or more) frames, while tracking is used to find changes in the pose that occur over the remaining frames. Pose, motion, and gesture information are provided to an application running on the controller 110 via the API described above. The program may move and modify images presented on the display generation component 120, for example, in response to pose and/or gesture information, or perform other functions.
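The interleaving of patch-based pose estimation with lighter-weight frame-to-frame tracking can be pictured roughly as in the following Swift sketch; the frame interval, the types, and the two stubbed routines are illustrative placeholders rather than the implementation described above.

```swift
import Foundation

struct DepthFrame { let index: Int }            // depth data omitted for brevity
struct HandPose { var joints: [(x: Float, y: Float, z: Float)] }

final class HandPoseTracker {
    let estimationInterval: Int                  // run full estimation every N frames
    private var lastPose: HandPose?

    init(estimationInterval: Int = 2) { self.estimationInterval = estimationInterval }

    func process(_ frame: DepthFrame) -> HandPose? {
        if frame.index % estimationInterval == 0 || lastPose == nil {
            // Expensive path: match patch descriptors against the learned database.
            lastPose = estimatePoseFromDescriptors(frame)
        } else if let previous = lastPose {
            // Cheaper path: update the previous pose from changes in the new frame.
            lastPose = trackPoseChanges(from: previous, to: frame)
        }
        return lastPose
    }

    private func estimatePoseFromDescriptors(_ frame: DepthFrame) -> HandPose {
        HandPose(joints: [])                     // placeholder
    }
    private func trackPoseChanges(from pose: HandPose, to frame: DepthFrame) -> HandPose {
        pose                                     // placeholder
    }
}
```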
In some embodiments, the software may be downloaded to the controller 110 in electronic form, over a network, for example, or may alternatively be provided on tangible non-transitory media, such as optical, magnetic, or electronic memory media. In some embodiments, database 408 is also stored in a memory associated with the controller 110. Alternatively or in addition, some or all of the described functions of the computer may be implemented in dedicated hardware, such as a custom or semi-custom integrated circuit or a programmable Digital Signal Processor (DSP). Although the controller 110 is shown in fig. 4, by way of example, as a separate unit from the image sensor 404, some or all of the processing functions of the controller may be performed by a suitable microprocessor and software or by dedicated circuitry within the housing of the hand tracking device 140 or otherwise associated with the image sensor 404. In some embodiments, at least some of these processing functions may be performed by a suitable processor integrated with the display generation component 120 (e.g., in a television receiver, handheld device, or head mounted device) or with any other suitable computerized device (such as a game console or media player). The sensing functionality of the image sensor 404 may likewise be integrated into the computer or other computerized device that is to be controlled by the sensor output.
Fig. 4 also includes a schematic diagram of a depth map 410 captured by the image sensor 404, according to some embodiments. As described above, the depth map comprises a matrix of pixels having corresponding depth values. Pixels 412 corresponding to the hand 406 have been segmented from the background and wrist in the map. The brightness of each pixel within the depth map 410 is inversely proportional to its depth value (i.e., the measured z-distance from the image sensor 404), where the gray shade becomes darker with increasing depth. The controller 110 processes these depth values to identify and segment components of the image (i.e., a set of adjacent pixels) that have human hand features. These features may include, for example, overall size, shape, and frame-to-frame motion from a sequence of depth maps.
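As a rough, non-limiting sketch of the rendering and segmentation described above, the Swift code below maps depth to a gray level that darkens with distance and extracts a crude hand mask from a depth band; the linear falloff, the band width, and the assumption that the hand is the nearest surface are simplifications introduced for the example.

```swift
import Foundation

struct DepthMap {
    let width: Int, height: Int
    let depths: [Float]                    // meters, row-major; 0 marks invalid pixels

    // Nearer points render brighter; a simple linear falloff is used here
    // rather than a strict 1/z mapping.
    func grayscale(maxDepth: Float = 2.0) -> [UInt8] {
        depths.map { z -> UInt8 in
            guard z > 0 else { return 0 }  // invalid pixels render black
            let clamped = min(z, maxDepth)
            return UInt8((1 - clamped / maxDepth) * 255)
        }
    }

    // Keep pixels within a thin band behind the nearest surface, assuming the
    // hand is the closest object; real segmentation would also use shape and
    // frame-to-frame motion cues.
    func handMask(nearest: Float, bandMeters: Float = 0.15) -> [Bool] {
        depths.map { $0 > 0 && $0 >= nearest && $0 <= nearest + bandMeters }
    }
}
```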
Fig. 4 also schematically illustrates the hand skeleton 414 that the controller 110 eventually extracts from the depth map 410 of the hand 406, according to some embodiments. In fig. 4, the skeleton 414 is superimposed over a hand background 416 that has been segmented from the original depth map. In some embodiments, key feature points of the hand, and optionally on the wrist or arm connected to the hand (e.g., points corresponding to the knuckles, the fingertips, the center of the palm, the end of the hand connecting to the wrist, etc.), are identified and located on the hand skeleton 414. In some embodiments, the controller 110 uses the positions and movements of these key feature points over multiple image frames to determine the gesture performed by the hand or the current state of the hand.
Fig. 5 illustrates an exemplary embodiment of the eye tracking device 130 (fig. 1). In some implementations, the eye tracking device 130 is controlled by the eye tracking unit 245 (fig. 2) to track the positioning and movement of the user gaze relative to the scene 105 or relative to XR content displayed via the display generating component 120. In some embodiments, the eye tracking device 130 is integrated with the display generation component 120. For example, in some embodiments, when display generating component 120 is a head-mounted device (such as a headset, helmet, goggles, or glasses) or a handheld device placed in a wearable frame, the head-mounted device includes both components that generate XR content for viewing by a user and components for tracking the user's gaze with respect to the XR content. In some embodiments, the eye tracking device 130 is separate from the display generation component 120. For example, when the display generating component is a handheld device or an XR chamber, the eye tracking device 130 is optionally a device separate from the handheld device or XR chamber. In some embodiments, the eye tracking device 130 is a head mounted device or a portion of a head mounted device. In some embodiments, the head-mounted eye tracking device 130 is optionally used in combination with a display generating component that is also head-mounted or a display generating component that is not head-mounted. In some embodiments, the eye tracking device 130 is not a head mounted device and is optionally used in conjunction with a head mounted display generating component. In some embodiments, the eye tracking device 130 is not a head mounted device and optionally is part of a non-head mounted display generating component.
In some embodiments, the display generation component 120 uses a display mechanism (e.g., a left near-eye display panel and a right near-eye display panel) to display frames including left and right images in front of the user's eyes, thereby providing a 3D virtual view to the user. For example, the head mounted display generating component may include left and right optical lenses (referred to herein as eye lenses) located between the display and the user's eyes. In some embodiments, the display generation component may include or be coupled to one or more external cameras that capture video of the user's environment for display. In some embodiments, the head mounted display generating component may have a transparent or translucent display and the virtual object is displayed on the transparent or translucent display through which the user may directly view the physical environment. In some embodiments, the display generation component projects the virtual object into the physical environment. The virtual object may be projected, for example, on a physical surface or as a hologram, such that an individual uses the system to observe the virtual object superimposed over the physical environment. In this case, separate display panels and image frames for the left and right eyes may not be required.
As shown in fig. 5, in some embodiments, the gaze tracking device 130 includes at least one eye tracking camera (e.g., an Infrared (IR) or Near Infrared (NIR) camera) and an illumination source (e.g., an array or ring of IR or NIR light sources, such as LEDs) that emits light (e.g., IR or NIR light) toward the user's eyes. The eye-tracking camera may be directed toward the user's eye to receive IR or NIR light reflected directly from the eye by the light source, or alternatively may be directed toward "hot" mirrors located between the user's eye and the display panel that reflect IR or NIR light from the eye to the eye-tracking camera while allowing visible light to pass through. The gaze tracking device 130 optionally captures images of the user's eyes (e.g., as a video stream captured at 60-120 frames per second (fps)), analyzes the images to generate gaze tracking information, and communicates the gaze tracking information to the controller 110. In some embodiments, both eyes of the user are tracked separately by the respective eye tracking camera and illumination source. In some embodiments, only one eye of the user is tracked by the respective eye tracking camera and illumination source.
In some embodiments, the eye tracking device 130 is calibrated using a device-specific calibration process to determine parameters of the eye tracking device for the particular operating environment 100, such as the 3D geometry and parameters of the LEDs, cameras, hot mirrors (if present), eye lenses, and display screens. The device-specific calibration process may be performed at the factory or another facility prior to delivering the AR/VR equipment to the end user. The device-specific calibration process may be an automatic calibration process or a manual calibration process. According to some embodiments, the user-specific calibration process may include an estimation of eye parameters of a specific user, such as pupil position, foveal position, optical axis, visual axis, eye distance, etc. According to some embodiments, once the device-specific parameters and the user-specific parameters are determined for the eye tracking device 130, the images captured by the eye tracking camera may be processed using a glint-assisted method to determine the current visual axis and gaze point of the user relative to the display.
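One plausible way to record the two kinds of calibration results described above is sketched below; every field name and type is an assumption made for illustration, not a structure defined by this disclosure.

```swift
import Foundation

// Hypothetical container for device-specific calibration results.
struct DeviceCalibration {
    var ledPositions: [(x: Float, y: Float, z: Float)]  // 3D geometry of the LED ring
    var cameraIntrinsics: [Float]                        // 3x3 intrinsics, row-major
    var hasHotMirror: Bool                               // false if no hot mirror is present
    var displayToCameraTransform: [Float]                // 4x4 pose, row-major
}

// Hypothetical container for user-specific calibration results.
struct UserCalibration {
    var pupilOffsetMM: (x: Float, y: Float)              // pupil position in the eye model
    var foveaOffsetDegrees: (x: Float, y: Float)         // fovea position
    var visualAxisOffsetDegrees: (x: Float, y: Float)    // optical axis vs. visual axis
    var interPupillaryDistanceMM: Float                  // eye spacing
}
```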
As shown in fig. 5, the eye tracking device 130 (e.g., 130A or 130B) includes an eye lens 520 and a gaze tracking system including at least one eye tracking camera 540 (e.g., an Infrared (IR) or Near Infrared (NIR) camera) positioned on a side of the user's face on which eye tracking is performed, and an illumination source 530 (e.g., an IR or NIR light source such as an array or ring of NIR Light Emitting Diodes (LEDs)) that emits light (e.g., IR or NIR light) toward the user's eyes 592. The eye-tracking camera 540 may be directed toward a mirror 550 (which reflects IR or NIR light from the eye 592 while allowing visible light to pass) located between the user's eye 592 and the display 510 (e.g., left or right display panel of a head-mounted display, or display of a handheld device, projector, etc.) (e.g., as shown in the top portion of fig. 5), or alternatively may be directed toward the user's eye 592 to receive reflected IR or NIR light from the eye 592 (e.g., as shown in the bottom portion of fig. 5).
In some implementations, the controller 110 renders AR or VR frames 562 (e.g., left and right frames for left and right display panels) and provides the frames 562 to the display 510. The controller 110 uses the gaze tracking input 542 from the eye tracking camera 540 for various purposes, such as for processing the frames 562 for display. The controller 110 optionally estimates the gaze point of the user on the display 510 based on the gaze tracking input 542 acquired from the eye tracking camera 540 using a glint-assisted method or other suitable method. The gaze point estimated from the gaze tracking input 542 is optionally used to determine the direction in which the user is currently looking.
Several possible use cases of the current gaze direction of the user are described below and are not intended to be limiting. As an exemplary use case, the controller 110 may render virtual content differently based on the determined direction of the user's gaze. For example, the controller 110 may generate virtual content in a foveal region determined according to a current gaze direction of the user at a higher resolution than in a peripheral region. As another example, the controller may position or move virtual content in the view based at least in part on the user's current gaze direction. As another example, the controller may display particular virtual content in the view based at least in part on the user's current gaze direction. As another exemplary use case in an AR application, the controller 110 may direct an external camera used to capture the physical environment of the XR experience to focus in the determined direction. The autofocus mechanism of the external camera may then focus on an object or surface in the environment that the user is currently looking at on display 510. As another example use case, the eye lens 520 may be a focusable lens, and the controller uses the gaze tracking information to adjust the focus of the eye lens 520 such that the virtual object the user is currently looking at has the appropriate vergence to match the convergence of the user's eyes 592. The controller 110 may utilize the gaze tracking information to direct the eye lens 520 to adjust the focus such that the approaching object the user is looking at appears at the correct distance.
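For the foveated-rendering use case mentioned above, a minimal sketch of the resolution decision might look like the following; the angular thresholds and scale factors are invented values for the example.

```swift
import Foundation

// Regions near the estimated gaze point render at full resolution; the
// periphery renders at reduced resolution (illustrative numbers only).
func renderScale(forAngleFromGazeDegrees angle: Double) -> Double {
    switch angle {
    case ..<10.0: return 1.0    // foveal region: full resolution
    case ..<25.0: return 0.5    // near periphery
    default:      return 0.25   // far periphery
    }
}

// A tile centered 18 degrees from the current gaze direction renders at half resolution.
print(renderScale(forAngleFromGazeDegrees: 18))  // 0.5
```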
In some embodiments, the eye tracking device is part of a head mounted device that includes a display (e.g., display 510), two eye lenses (e.g., eye lens 520), an eye tracking camera (e.g., eye tracking camera 540), and a light source (e.g., light source 530 (e.g., IR or NIR LED)) mounted in a wearable housing. The light source emits light (e.g., IR or NIR light) toward the user's eye 592. In some embodiments, the light sources may be arranged in a ring or circle around each of the lenses, as shown in fig. 5. In some embodiments, for example, eight light sources 530 (e.g., LEDs) are arranged around each lens 520. However, more or fewer light sources 530 may be used, and other arrangements and locations of light sources 530 may be used.
In some implementations, the display 510 emits light in the visible range and does not emit light in the IR or NIR range, and thus does not introduce noise in the gaze tracking system. Note that the position and angle of the eye tracking camera 540 is given by way of example and is not intended to be limiting. In some embodiments, a single eye tracking camera 540 may be located on each side of the user's face. In some implementations, two or more NIR cameras 540 may be used on each side of the user's face. In some implementations, a camera 540 with a wider field of view (FOV) and a camera 540 with a narrower FOV may be used on each side of the user's face. In some implementations, a camera 540 operating at one wavelength (e.g., 850 nm) and a camera 540 operating at a different wavelength (e.g., 940 nm) may be used on each side of the user's face.
The embodiment of the gaze tracking system as shown in fig. 5 may be used, for example, in computer-generated reality, virtual reality, and/or mixed reality applications to provide a user with a computer-generated reality, virtual reality, augmented reality, and/or augmented virtual experience.
Fig. 6 illustrates a glint-assisted gaze tracking pipeline in accordance with some embodiments. In some embodiments, the gaze tracking pipeline is implemented by a glint-assisted gaze tracking system (e.g., eye tracking device 130 as shown in figs. 1 and 5). The glint-assisted gaze tracking system may maintain a tracking state. Initially, the tracking state is off or "no". When in the tracking state, the glint-assisted gaze tracking system uses previous information from a previous frame when analyzing the current frame to track pupil contours and glints in the current frame. When not in the tracking state, the glint-assisted gaze tracking system attempts to detect the pupils and glints in the current frame and, if successful, initializes the tracking state to "yes" and continues with the next frame in the tracking state.
As shown in fig. 6, the gaze tracking camera may capture left and right images of the left and right eyes of the user. The captured image is then input to the gaze tracking pipeline for processing beginning at 610. As indicated by the arrow returning to element 600, the gaze tracking system may continue to capture images of the user's eyes, for example, at a rate of 60 to 120 frames per second. In some embodiments, each set of captured images may be input to a pipeline for processing. However, in some embodiments or under some conditions, not all captured frames are pipelined.
At 610, for the currently captured image, if the tracking state is yes, the method proceeds to element 640. At 610, if the tracking state is no, the image is analyzed to detect a user's pupil and glints in the image, as indicated at 620. At 630, if the pupil and glints are successfully detected, the method proceeds to element 640. Otherwise, the method returns to element 610 to process the next image of the user's eye.
At 640, if proceeding from element 610, the current frame is analyzed to track the pupils and glints based in part on previous information from the previous frame. At 640, if proceeding from element 630, the tracking state is initialized based on the pupils and glints detected in the current frame. The results of the processing at element 640 are checked to verify that the results of the tracking or detection can be trusted. For example, the results may be checked to determine whether the pupil and a sufficient number of glints for performing gaze estimation are successfully tracked or detected in the current frame. At 650, if the results cannot be trusted, the tracking state is set to "no" at element 660, and the method returns to element 610 to process the next image of the user's eye. At 650, if the results are trusted, the method proceeds to element 670. At 670, the tracking state is set to "yes" (if not already yes), and the pupil and glint information is passed to element 680 to estimate the gaze point of the user.
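A minimal Swift sketch of the tracking-state logic of fig. 6 follows; the feature types, the glint-count threshold, and the stubbed detection, tracking, and estimation routines are assumptions introduced for the example.

```swift
import Foundation

struct EyeImagePair {}                                   // left/right eye frames, omitted
struct PupilAndGlints { var pupilFound: Bool; var glintCount: Int }
struct GazeEstimate { var x: Double; var y: Double }

final class GlintAssistedGazeTracker {
    private var trackingState = false                    // the pipeline starts with tracking off
    private var previous: PupilAndGlints?

    func process(_ images: EyeImagePair) -> GazeEstimate? {
        let current: PupilAndGlints
        if trackingState, let prior = previous {
            current = track(images, seededBy: prior)     // 640: use prior-frame information
        } else {
            current = detect(images)                     // 620: detect pupil and glints afresh
        }
        // 650: require a pupil and enough glints before trusting the result.
        guard current.pupilFound, current.glintCount >= 2 else {
            trackingState = false                        // 660: leave the tracking state
            return nil                                   // back to 610 for the next frame
        }
        trackingState = true                             // 670
        previous = current
        return estimateGaze(from: current)               // 680: estimate the gaze point
    }

    // Stubs standing in for the image-processing steps.
    private func detect(_ images: EyeImagePair) -> PupilAndGlints {
        PupilAndGlints(pupilFound: false, glintCount: 0)
    }
    private func track(_ images: EyeImagePair, seededBy prior: PupilAndGlints) -> PupilAndGlints {
        prior
    }
    private func estimateGaze(from features: PupilAndGlints) -> GazeEstimate {
        GazeEstimate(x: 0, y: 0)
    }
}
```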
Fig. 6 is intended to serve as one example of an eye tracking technique that may be used in a particular implementation. As will be appreciated by one of ordinary skill in the art, other eye tracking techniques, currently existing or developed in the future, may be used in place of or in combination with the glint-assisted eye tracking techniques described herein in computer system 101 for providing an XR experience to a user, according to various embodiments.
In this disclosure, various input methods are described with respect to interactions with a computer system. When one input device or input method is used to provide an example and another input device or input method is used to provide another example, it should be understood that each example may be compatible with and optionally utilize the input device or input method described with respect to the other example. Similarly, various output methods are described with respect to interactions with a computer system. When one output device or output method is used to provide an example and another output device or output method is used to provide another example, it should be understood that each example may be compatible with and optionally utilize the output device or output method described with respect to the other example. Similarly, the various methods are described with respect to interactions with a virtual environment or mixed reality environment through a computer system. When examples are provided using interactions with a virtual environment, and another example is provided using a mixed reality environment, it should be understood that each example may be compatible with and optionally utilize the methods described with respect to the other example. Thus, the present disclosure discloses embodiments that are combinations of features of multiple examples, without the need to list all features of the embodiments in detail in the description of each example embodiment.
User interfaces and associated processes
Attention is now directed to embodiments of user interfaces ("UIs") and associated processes that may be implemented on a computer system, such as a portable multifunction device or a head-mounted device, in communication with a display generation component and (optionally) one or more input devices.
Fig. 7A-7H illustrate an exemplary user interface for automatically applying one or more user settings based on an identification of a user, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 8.
Fig. 7A depicts an electronic device 700 that is a smart watch including a touch-sensitive display 702, a rotatable and depressible input mechanism 704 (e.g., rotatable and depressible relative to a housing or frame of the device), a button 706, and a camera 708. In some embodiments described below, the electronic device 700 is a wearable smart watch device. In some embodiments, the electronic device 700 is a smart phone, tablet, headset system (e.g., a headset), or other computer system that includes and/or communicates with a display device (e.g., a display screen, a projection device, etc.). Electronic device 700 is a computer system (e.g., computer system 101 in fig. 1).
In fig. 7A, a user 703 places an electronic device 700 on his wrist. The electronic device 700 detects (e.g., via one or more sensors) that the electronic device 700 has been placed on the body of the user. In some embodiments, the electronic device 700 detects that one or more criteria indicating that the electronic device 700 has been placed on the body of a user have been met. In some embodiments in which the electronic device 700 is a head-mounted system, the electronic device 700 detects (e.g., via one or more sensors) that the electronic device 700 has been placed onto the head and/or face of a user.
In fig. 7A, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 attempts to automatically identify the user 703. In some embodiments, the electronic device 700 may attempt to identify the user 703 in various ways. For example, the electronic device 700 optionally attempts to automatically identify the user based on biometric information such as facial recognition, eye (e.g., iris) recognition, and/or fingerprint recognition. In fig. 7A, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 collects biometric information (e.g., a facial scan and/or an eye (e.g., iris) scan using the camera 708).
Fig. 7B depicts a first exemplary scenario in which the electronic device 700 has identified the user 703 (e.g., based on biometric information) as a first registered user John. For example, the electronic device 700 compares the collected biometric information with registered biometric information (of the electronic device 700) to identify the user 703 as a first registered user (and thereby optionally log the user 703 into the electronic device 700 using a first user account). In response to identifying the user 703 as a first registered user, the electronic device 700 displays a personalized user interface 710 corresponding to (e.g., uniquely corresponding to and/or specifically corresponding to) the first registered user. In the depicted embodiment, personalized user interface 710 indicates a successful login to a first user account associated with a first registered user. In some embodiments, the electronic device 700 logging into a user account associated with a registered user includes displaying a personalized user interface associated with the registered user, applying one or more device settings associated with (e.g., specified by) the registered user, providing access to security information associated with the registered user, and so forth. In some embodiments, applying one or more device settings associated with the registered user includes, for example, applying and/or displaying a visual representation (e.g., avatar) corresponding to the registered user and/or applying one or more device calibration settings (e.g., eye movement calibration, hand movement calibration, and/or head movement calibration) corresponding to the registered user. The personalized user interface 710 includes a visual indication 712e that indicates successful identification of the user 703 as a first registered user and that indicates successful sign-in to a first user account associated with the first registered user.
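A high-level, non-limiting sketch of the identification flow of figs. 7A-7B is shown below; the template matching, the distance threshold, and the settings fields are assumptions made for the example and do not describe the actual recognition method.

```swift
import Foundation

struct UserSettings {
    var watchFaceComplications: [String]   // e.g., activity, weather, calendar
    var calibrationProfile: String         // identifier for eye/hand/head calibration data
    var avatarName: String
}

struct RegisteredUser {
    let accountID: String
    let enrolledTemplate: [Float]          // enrolled face/iris feature vector
    let settings: UserSettings
}

enum IdentificationOutcome {
    case identified(RegisteredUser)
    case notIdentified                     // handled as in figs. 7D-7E (guest mode or account picker)
}

// Called when the device detects that it has been placed on a user's body.
func identifyWearer(capturedTemplate: [Float],
                    registeredUsers: [RegisteredUser]) -> IdentificationOutcome {
    guard let match = registeredUsers.first(where: {
        matches($0.enrolledTemplate, capturedTemplate)
    }) else {
        return .notIdentified              // forgo logging into any account
    }
    applySettings(match.settings)          // personalized interface, calibration, avatar
    return .identified(match)
}

// Placeholder similarity test; a real system would use a calibrated threshold.
func matches(_ enrolled: [Float], _ probe: [Float]) -> Bool {
    guard enrolled.count == probe.count, !enrolled.isEmpty else { return false }
    let squaredError = zip(enrolled, probe).reduce(Float(0)) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
    return squaredError.squareRoot() < 0.6
}

func applySettings(_ settings: UserSettings) { /* update interface and calibration state */ }
```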
The personalized user interface 710 includes a time indication 712a and a plurality of affordances (e.g., dial complex functions). In some embodiments, each affordance is associated with an application on device 700 (e.g., electronic device 700 launches the associated application upon selection of the respective affordance and/or electronic device 700 displays information from the associated application upon selection of the respective affordance).
Physical activity affordance 712b indicates the measured physical activity level of the first registered user and is specific (e.g., uniquely corresponds) to the first registered user. The physical activity affordance 712b includes three concentric rings, and each ring indicates a different physical activity metric for the first registered user. For example, the first ring indicates the number of calories burned by the first registered user on the day, the second ring indicates the number of minutes the user has been active on the day, and the third ring indicates the number of hours in the day during which the user has stood for at least a threshold amount of time. In the depicted embodiment, the first ring indicates progress toward a calorie target, the second ring indicates progress toward a target number of exercise minutes for the day, and the third ring indicates progress toward a target number of standing hours for the day. The physical activity affordance 712b is optionally based on physical activity of the user 703 that was collected before the user 703 put the electronic device 700 on (e.g., collected by other devices and transmitted to the electronic device 700).
Weather affordance 712c indicates the current weather conditions for a particular location. In some embodiments, the particular location is associated with (e.g., selected and/or specified by) the first registered user.
Calendar affordance 712d indicates one or more upcoming calendar appointments for the first registered user. In some implementations, one or more upcoming calendar appointments are identified based on calendar information specific to the first registered user. In FIG. 7B, calendar affordance 712d indicates that the next upcoming event in the first registered user's calendar is a calendar entry entitled "yoga" at 11:15 a.m.
In some embodiments, the personalized user interface 710 is specific to the first registered user at least in that the first registered user has selected and/or specified one or more aspects of the personalized user interface 710. For example, the first registered user has selected the affordances 712a-712d and the displayed locations and/or positions of the affordances 712a-712d, and the first registered user has specified information to be displayed in the affordances 712a-712d (e.g., the first registered user has specified the location of the weather affordance 712c, and has entered calendar information to be displayed in the calendar affordance 712d). Different users will see different affordances and/or different information in each affordance, examples of which are discussed below with reference to fig. 7C.
In fig. 7B, the electronic device 700 is a smart watch and depicts a personalized user interface 710 with personalized affordances/dial-up complex functions. In some implementations, the electronic device 700 is a headset system (e.g., a headset). In some embodiments where the electronic device 700 is a head-mounted system, the personalized user interface 710 includes personalized affordances similar to those discussed above. In some embodiments in which electronic device 700 is a head-mounted system, personalized user interface 710 includes a personalized virtual environment (e.g., a personalized virtual environment that has been selected by and/or specific to a registered user), a real-time virtual communication session user interface associated with (e.g., specific to) a registered user, and/or one or more application icons associated with (e.g., selected by and/or specific to) a registered user.
Fig. 7C depicts a second exemplary scenario in which, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 has identified (e.g., based on biometric information) the user 703 as a second registered user Sarah. In response to identifying the user 703 as a second registered user, the electronic device 700 displays a second personalized user interface 714 that corresponds to (e.g., uniquely corresponds to and/or specifically corresponds to) the second registered user and that is different from the personalized user interface 710. In the depicted embodiment, personalized user interface 714 indicates a successful login to a second user account associated with a second registered user. The personalized user interface 714 includes a visual indication 716d that indicates successful identification of the user 703 as a second registered user and that successful sign-in to a second user account associated with the second registered user. In some embodiments, logging into a user account associated with the second registered user includes displaying a personalized user interface associated with the second registered user, applying one or more device settings associated with (e.g., specified by) the second registered user, providing access to security information associated with the second registered user, and so forth. In some embodiments, applying one or more device settings associated with the second registered user may include, for example, applying and/or displaying a visual representation (e.g., avatar) corresponding to the second registered user and/or applying one or more device calibration settings (e.g., eye movement calibration, hand movement calibration, and/or head movement calibration) corresponding to the second registered user.
As shown in fig. 7C, personalized user interface 714 has a different visual appearance than personalized user interface 710. For example, personalized user interface 714 has an analog time indication 716a, while personalized user interface 710 has a digital time indication 712a. The personalized user interface 714 has a physical activity affordance 716B similar to the physical activity affordance 712B in fig. 7B, but the physical activity affordance 716B is displayed at a different location on the display 702. In addition, the physical activity affordance 716b corresponds to (e.g., is specific to) the second registered user, and displays a physical activity metric indicative of the physical activity of the second registered user (e.g., based on the physical activity measurement of the second registered user) (while the physical activity affordance 712b displays a physical activity metric indicative of the physical activity of the first registered user). The personalized user interface 714 also has a heart rate affordance 716c that is selected by the second registered user for inclusion in the personalized user interface 714, but not selected by the first registered user for inclusion in the personalized user interface 710.
In fig. 7C, the electronic device 700 is a smart watch. In some implementations, the electronic device 700 is a headset system (e.g., a headset). In some embodiments where the electronic device 700 is a head-mounted system, the personalized user interface 714 comprises personalized affordances similar to those discussed above. In some embodiments in which electronic device 700 is a head-mounted system, personalized user interface 714 includes a personalized virtual environment (e.g., a personalized virtual environment that has been selected by and/or specific to a registered user), a real-time virtual communication session user interface associated with (e.g., specific to) a registered user, and/or one or more application icons associated with (e.g., selected by and/or specific to) a registered user. In some embodiments where the electronic device 700 is a head-mounted system, logging into a user account associated with the second registered user includes displaying a personalized user interface associated with the second registered user, applying one or more device settings associated with (e.g., specified by) the second registered user, providing access to security information associated with the second registered user, and so forth. In some embodiments, applying one or more device settings associated with the second registered user may include, for example, applying and/or displaying a visual representation (e.g., avatar) corresponding to the second registered user and/or applying one or more device calibration settings (e.g., eye movement calibration, hand movement calibration, and/or head movement calibration) corresponding to the second registered user.
Fig. 7D depicts a third exemplary scenario in which, in response to detecting that the electronic device 700 has been placed on the body of the user, the electronic device 700 has determined that the user 703 is not a previously registered user (e.g., the electronic device 700 fails to match biometric information from the user 703 with stored biometric information of the previously registered user). In response to determining that user 703 is not a previously registered user, electronic device 700 displays guest user interface 718. The guest user interface 718 indicates that the user 703 is identified as a guest user (e.g., indicates that the user 703 is not identified as a registered user) and includes a visual indication 720g indicating that the user 703 is identified as a guest user (e.g., indicates that the user 703 is not identified as a registered user).
The guest user interface 718 is not associated with a registered user and contains (e.g., contains only) information that is not specific to any single user. For example, guest user interface 718 includes a time indication 720a, a date affordance 720b, a weather affordance 720c, a battery level affordance 720e, and an air quality affordance 720f.
Fig. 7E depicts an alternative embodiment of a third exemplary scenario in which the electronic device 700 has determined that the user 703 is not a previously registered user (e.g., fails to match biometric information from the user 703 with stored biometric information of a previously registered user). In some embodiments, in response to determining that user 703 is not a previously registered user, electronic device 700 displays user selector user interface 722. User selector user interface 722 includes selectable objects 724a-724c that correspond to different users. The first selectable object 724a corresponds to a first registered user John, the second selectable object 724b corresponds to a second registered user Sarah, and the third selectable object 724c corresponds to an unregistered guest user. Selection of the first selectable object 724a corresponds to a request to log into a first user account associated with the first registered user, selection of the second selectable object 724b corresponds to a request to log into a second user account associated with the second registered user, and selection of the third selectable object 724c corresponds to a request to display a guest user interface (e.g., guest user interface 718 of fig. 7D).
In fig. 7E, the electronic device 700 detects a user input 726 at a location on the display 702 that corresponds to a selectable object 724a that corresponds to a first registered user.
In fig. 7E, the electronic device 700 is a smart watch. In some implementations, the electronic device 700 is a headset system (e.g., a headset). In some embodiments where the electronic device 700 is a head-mounted system, the user selector interface 722 and/or the selectable objects 724a-724c are presented in a virtual environment. In fig. 7E, user input 726 is received via touch screen input. In some embodiments in which the electronic device 700 is a head-mounted system, user input is received to navigate within a displayed user interface (e.g., user selector interface 722) and select various selectable objects (e.g., selectable objects 724a-724c), for example, based on a user's eye movement, hand movement, and/or gestures.
In fig. 7F, in response to detecting the user input 726, the electronic device 700 collects updated biometric information (e.g., facial scan information, eye (e.g., retina, iris) scan information, and/or fingerprint information) from the user 703 to determine whether the user 703 is a first registered user (e.g., to determine whether the updated biometric information corresponds to the first registered user and/or to determine whether the updated biometric information corresponds to stored biometric information corresponding to the first registered user). If the updated biometric information is determined to correspond to the first registered user, the electronic device 700 (optionally logs into the first user account and) displays a personalized user interface 710 (FIG. 7B) indicating a successful login to the first user account associated with the first registered user. If the updated biometric information is not determined to correspond to the first registered user, the electronic device 700 foregoes logging into the first user account (and foregoes displaying the personalized user interface 710).
In fig. 7G, in response to a determination that the updated biometric information does not correspond to the first registered user, the electronic device 700 displays a password entry user interface 728. Password entry user interface 728 displays a keypad for the user to enter a password corresponding to the first registered user. If the user enters the correct password (e.g., via touch input 730, voice input, and/or other user input), the electronic device 700 logs in to the first user account (and optionally displays the user interface 710, applies one or more user settings associated with the first user account, and/or provides access to secure content associated with the first user account). If the user does not enter the correct password, the electronic device 700 forgoes logging into the first user account (and optionally forgoes displaying the user interface 710, forgoes applying one or more user settings associated with the first user account, and/or forgoes providing access to secure content associated with the first user account).
In some embodiments, the user is provided with the option to enable or disable automatic biometric authentication. In such embodiments, if the user has disabled automatic biometric authentication, the electronic device 700 foregoes storing the user's biometric information. In such a scenario, the user logs in to his or her user account, for example, by typing in a password specific to the user account (e.g., using password entry user interface 728). Thus, in some embodiments, if the first registered user John has selected to exit the automatic biometric authentication (e.g., has disabled the automatic biometric authentication), the electronic device 700 optionally foregoes attempting the automatic biometric authentication (fig. 7F) and displays (e.g., directly) the password input user interface 728 (fig. 7G) in response to the user input 726 in fig. 7E.
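The fallback flow of figs. 7E-7G, in which a selected account is verified by an updated biometric check unless that account has opted out, and by a passcode otherwise, can be sketched as follows; the function names and parameters are hypothetical.

```swift
import Foundation

enum LoginResult { case success(accountID: String), failure }

// After the user selects an account (fig. 7E), try an updated biometric check
// unless automatic biometric authentication is disabled for that account, then
// fall back to a passcode prompt (fig. 7G). The verification closures are stubs.
func logIn(selectedAccountID: String,
           automaticBiometricsEnabled: Bool,
           captureAndMatchBiometrics: (String) -> Bool,
           promptForPasscode: (String) -> Bool) -> LoginResult {
    if automaticBiometricsEnabled, captureAndMatchBiometrics(selectedAccountID) {
        return .success(accountID: selectedAccountID)
    }
    // Either biometrics are disabled for this account or the updated scan did
    // not match; ask for the account's passcode instead.
    if promptForPasscode(selectedAccountID) {
        return .success(accountID: selectedAccountID)
    }
    return .failure                                 // forgo logging into the account
}
```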
In fig. 7F-7G, the electronic device 700 is a smart watch. In some implementations, the electronic device 700 is a headset system (e.g., a headset). In some embodiments where the electronic device 700 is a head-mounted system, the password entry user interface 728 is displayed in a virtual environment. In fig. 7G, user input 730 is received via touch screen input. In some embodiments in which the electronic device 700 is a head-mounted system, user input is received, for example, based on a user's eye movement, hand movement, and/or gestures, to navigate within a displayed user interface (e.g., password input interface 728) and select various selectable objects.
In fig. 7H, the user 703 has removed the electronic device 700 from his body. The electronic device 700 detects that the electronic device 700 is no longer positioned on the body of the user. In response to detecting that the electronic device 700 is no longer positioned on the user's body, the electronic device 700 optionally logs out of any user accounts that the electronic device 700 is logged into, and ceases to display any user interfaces that are being displayed. Logging out of the user account may include, for example, ceasing to display a personalized user interface associated with the user account, ceasing to apply one or more user settings associated with the user account, and/or ceasing to provide access to secure content associated with the user account.
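The tear-down of fig. 7H might be organized as in the brief sketch below; the session type and the individual tear-down steps are illustrative only.

```swift
import Foundation

final class SessionManager {
    private(set) var activeAccountID: String?

    // Called when the device detects it is no longer positioned on the user's body.
    func deviceRemovedFromBody() {
        guard let account = activeAccountID else { return }
        stopDisplayingPersonalizedInterface(for: account)
        removeUserSettings(for: account)             // preferences, calibration, avatar
        revokeAccessToSecureContent(for: account)
        activeAccountID = nil                        // logged out until the next identification
    }

    private func stopDisplayingPersonalizedInterface(for account: String) {}
    private func removeUserSettings(for account: String) {}
    private func revokeAccessToSecureContent(for account: String) {}
}
```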
In the depicted embodiment, the electronic device 700 is a smart watch, and figs. 7A-7H depict the user 703 wearing the smart watch on his wrist or removing the smart watch from his wrist. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In such embodiments, the electronic device 700 may attempt to automatically identify the user when it is determined that the electronic device 700 has been placed on the user's head (e.g., based on iris recognition and/or facial recognition when the electronic device 700 is placed on the user's head). Additionally, in such embodiments, content such as user interfaces 710, 714, 718, 722, and 728 is optionally displayed via a display generation component in communication with the head-mounted system, and one or more user inputs are optionally received via one or more input devices in communication with the head-mounted system.
Fig. 8 is a flow chart illustrating a method for automatically applying one or more user settings based on an identification of a user using an electronic device, according to some embodiments. The method 800 is performed at a computer system (e.g., 101, 700) (e.g., a smart phone, a tablet, a head mounted display generating component) in communication with a display generating component (e.g., 702) (e.g., a visual output device, a 3D display, a display having at least a portion of a transparency or translucency on which an image may be projected (e.g., a see-through display), a projector, a heads-up display, a display controller) and one or more input devices (e.g., camera 708, touch screen display 702) (e.g., a touch screen, an infrared camera, a depth camera, a visible light camera, an eye tracking device, a hand tracking device). Some operations in method 800 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 800 provides an intuitive way for automatically applying one or more user settings based on the user's identification. The method reduces the cognitive burden on the user when applying one or more user settings, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to apply one or more user settings faster and more efficiently saves power and increases the time between battery charges.
In some embodiments, a computer system (e.g., device 700) (e.g., a smart phone, a smart watch, a tablet, and/or a wearable device) in communication with a display generating component (e.g., display 702) (e.g., a display controller, a touch-sensitive display system, a display (e.g., an integrated and/or connected display), a 3D display, a transparent display, a projector, and/or a heads-up display) and one or more input devices (e.g., 702, 704, 706, 708) (e.g., a touch-sensitive surface (e.g., a touch-sensitive display), a mouse, a keyboard, a remote control, a visual input device (e.g., a camera), an audio input device (e.g., a microphone), and/or a biometric sensor (e.g., a fingerprint sensor, a facial recognition sensor, and/or an iris recognition sensor)) detects that at least a portion of the computer system has been placed on the body of a respective user (e.g., user 703 of fig. 7A) (802).
In response to detecting that the computer system has been placed on the body of the respective user (804), and in accordance with a determination that biometric information received via the one or more input devices (e.g., a fingerprint, an image (e.g., a photograph and/or scan) representing the face of the respective user, and/or iris identification information (e.g., iris scan information)) corresponds to a first registered user (e.g., a user that has been previously registered on the computer system) (e.g., in accordance with a determination that the respective user is the first registered user) (e.g., figs. 7A-7B) (806), the computer system enables the computer system to be used with one or more settings associated with (e.g., specified by) a first user account associated with the first registered user (e.g., logs the computer system into the first user account associated with the first registered user) (e.g., the display of personalized user interface 710 in fig. 7B) (808). In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, the computer system receives the biometric information (e.g., corresponding to the respective user) via the one or more input devices. In some embodiments, the biometric information is received while at least a portion of the computer system is being worn by the respective user. In some embodiments, the method further comprises optionally displaying, via the display generation component, a first user interface (e.g., personalized user interface 710) corresponding to the first registered user. In some embodiments, the first user interface indicates successful login to a first user account (e.g., a first user account of a plurality of accounts) corresponding to the first registered user. In some implementations, enabling the computer system to be used with one or more settings associated with the first user account (e.g., logging the computer system into the first user account) includes one or more of: applying a first set of user preferences associated with the first user account, providing access to certain encrypted and/or secure user files associated with the first user account, and/or loading calibration information associated with the first user account.
In response to detecting that the computer system (e.g., 700) has been placed on the body of the respective user (804), and in accordance with a determination (810) that the biometric information received via the one or more input devices (e.g., 708) does not correspond to the first registered user (e.g., in accordance with a determination that the respective user is not the first registered user), the computer system forgoes enabling the computer system (e.g., 700) to be used with one or more settings associated with the first user account associated with the first registered user (e.g., forgoes logging the computer system into the first user account associated with the first registered user) (e.g., figs. 7C-7E) (812). In some embodiments, forgoing enabling the computer system to be used with one or more settings associated with the first user account associated with the first registered user includes forgoing displaying the first user interface corresponding to the first registered user.
Forgoing enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user when the biometric information is determined not to correspond to the first registered user (e.g., forgoing logging the computer system into the first user account associated with the first registered user) enhances security and may prevent an unauthorized user from initiating a sensitive operation. Forgoing enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user when the biometric information is determined not to correspond to the first registered user (e.g., forgoing logging the computer system into the first user account associated with the first registered user) also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
Automatically enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user (e.g., automatically logging the computer system into the first user account associated with the first registered user) when it is determined that the biometric information received via the one or more input devices corresponds to the first registered user enables the user to log into the first user account without the user explicitly requesting to log in. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
Automatically enabling the computer system to be used with one or more settings associated with a first user account associated with a first registered user when the biometric information is determined to correspond to the first registered user (e.g., automatically logging the computer system into the first user account associated with the first registered user) allows the computer system to be placed in a locked state more frequently and for longer periods of time, because it is very simple and convenient for the user to reenter the logged-in state. Allowing the computer system to be placed in a locked state for a longer period of time enhances security. Automatically enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user when the biometric information is determined to correspond to the first registered user (e.g., automatically logging the computer system into the first user account associated with the first registered user) also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, in response to detecting that the computer system (e.g., 700) has been placed on the body of the respective user (804), and in accordance with a determination that the biometric information received via the one or more input devices (e.g., 708) does not correspond to the first registered user and that the biometric information received via the one or more input devices corresponds to a second registered user (e.g., a second registered user of a plurality of users) (e.g., a user that has been previously registered on the computer system) that is different from the first registered user (e.g., in accordance with a determination that the respective user is the second registered user) (814), the computer system (e.g., 700) enables the computer system to be used with one or more settings associated with (e.g., specified by) a second user account that is different from the first user account and is associated with the second registered user (e.g., logs the computer system into the second user account that is different from the first user account and associated with the second registered user) (e.g., the display of personalized user interface 714 in fig. 7C).
In some embodiments, enabling the computer system to be used with one or more settings associated with a second user account different from the first user account and associated with a second registered user (e.g., to log the computer system into the second user account different from the first user account and associated with the second registered user) includes displaying a second user interface (e.g., personalized user interface 714) corresponding to the second registered user via the display generating component. In some embodiments, the second user interface indicates successful login to a second user account corresponding to the second registered user. In some implementations, enabling the computer system to be used with one or more settings associated with a second user account different from the first user account and associated with a second registered user (e.g., to log the computer system into the second user account) includes one or more of: applying a second set of user preferences associated with the second user account, providing access to certain encrypted and/or secure user files associated with the second user account and/or loading calibration information associated with the second user account.
Automatically enabling the computer system to be used with one or more settings associated with a second user account associated with the second registered user (e.g., automatically logging the computer system into the second user account associated with the second registered user) when it is determined that biometric information received via the one or more input devices corresponds to the second registered user enables the user to log into the second user account without the user explicitly requesting to log in. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the computer system has been placed on the body of a respective user (e.g., 703), and in accordance with a determination that the biometric information received via the one or more input devices (e.g., 708) does not correspond to a registered user (e.g., identified as a person/user but does not correspond to any user registered on the computer system), the computer system enters a guest operating mode (e.g., displays guest user interface 718 of fig. 7D). In some embodiments, entering the guest operating mode includes enabling the computer system to be used with one or more settings associated with a guest user account that is different from the first user account (e.g., logging the computer system into a guest user account that is different from the first user account). In some embodiments, entering guest operation mode includes displaying, via a display generating component, a guest user interface (e.g., guest user interface 718) corresponding to an unregistered user. In some embodiments, entering guest operation mode includes one or more of: apply a set of default user preferences associated with the guest user account and/or load calibration information (e.g., a set of default calibration settings) associated with the guest user account.
Entering the guest operation mode when it is determined that the biometric information received via the one or more input devices does not correspond to the registered user enhances security and may prevent an unauthorized user from initiating a sensitive operation (e.g., by preventing the guest user from accessing the registered user's security information). Entering guest operation mode when it is determined that biometric information received via one or more input devices does not correspond to a registered user also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, in response to detecting that the computer system (e.g., 700) has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices (e.g., 708) does not correspond to a registered user (e.g., does not correspond to any user registered on the computer system), the computer system relinquishes the computer system to log into any user account (e.g., guest user interface 718 of fig. 7D, user selector user interface 722 of fig. 7E). In some embodiments, relinquishing the computer system from entering any user account optionally includes displaying, via the display generation component, a user interface (e.g., guest user interface 718 of fig. 7D, user selector user interface 722 of fig. 7E) indicating failure to attempt to enter the user account.
Forgoing logging the computer system into any user account when it is determined that biometric information received via the one or more input devices does not correspond to a registered user enhances security and may prevent an unauthorized user from initiating a sensitive operation. Forgoing logging the computer system into any user account when it is determined that biometric information received via the one or more input devices does not correspond to a registered user also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
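For illustration only, the wear-triggered identification flow described in the preceding paragraphs can be summarized in a short sketch. The sketch is a minimal, hypothetical example: the type and function names (e.g., BiometricSample, RegisteredUser, Session, handleDevicePlacedOnBody) are assumptions of this sketch and are not part of the disclosed system or of any particular API, and whether an unidentified wearer is routed to a guest operating mode or to no account at all is modeled here as a simple configuration flag.

```swift
// Hypothetical types standing in for the computer system's account machinery.
struct BiometricSample { let irisCode: [UInt8] }

struct RegisteredUser {
    let accountID: String
    let enrolledIris: [UInt8]
    func matches(_ sample: BiometricSample) -> Bool {
        // Real systems compare against a certainty threshold; a byte-wise
        // equality check is used here only to keep the sketch short.
        sample.irisCode == enrolledIris
    }
}

enum Session {
    case loggedIn(accountID: String)   // that account's settings are applied
    case guest                         // default preferences and calibration
    case none                          // no account; e.g. show a user selector
}

/// Called when sensors report that the device has been placed on a body.
func handleDevicePlacedOnBody(sample: BiometricSample,
                              registeredUsers: [RegisteredUser],
                              guestModeEnabled: Bool) -> Session {
    if let match = registeredUsers.first(where: { $0.matches(sample) }) {
        // Biometric information corresponds to a registered user: enable the
        // device with that user's settings without an explicit login request.
        return .loggedIn(accountID: match.accountID)
    }
    // Biometric information does not correspond to any registered user:
    // either enter the guest operating mode or forgo logging into any account.
    return guestModeEnabled ? .guest : .none
}
```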
In some embodiments, when the computer system (e.g., 700) is enabled to be used with one or more settings associated with a first user account associated with a first registered user (e.g., when the computer system is logged into the first user account associated with the first registered user), the computer system (e.g., 700) detects that at least a portion of the computer system has been removed from the body of the respective user (e.g., user 703, fig. 7H) (e.g., detects that the computer system is no longer being worn by the respective user). In response to detecting that at least a portion of the computer system has been removed from the body of the respective user, the computer system ceases to enable the computer system to be used with the one or more settings associated with the first user account associated with the first registered user (e.g., logs the computer system out of the first user account associated with the first registered user) (fig. 7H). In some embodiments, ceasing to enable the computer system to be used with the one or more settings associated with the first user account associated with the first registered user (e.g., logging the computer system out of the first user account) includes one or more of: removing application of a first set of user preferences associated with the first user account, blocking access to certain encrypted and/or secure user files associated with the first user account, and/or removing calibration information associated with the first user account.
Ceasing to enable the computer system to be used with one or more settings associated with a first user account associated with a first registered user (e.g., logging the computer system out of the first user account) when it is determined that the computer system has been removed from the body of the respective user enhances security and may prevent an unauthorized user from initiating sensitive operations. Ceasing to enable the computer system to be used with the one or more settings associated with the first user account associated with the first registered user (e.g., logging the computer system out of the first user account) when it is determined that the computer system has been removed from the body of the respective user also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
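A corresponding sketch of the removal path follows. As before, the names (DeviceSession, handleDeviceRemovedFromBody) are hypothetical, and exactly which artifacts are torn down (preferences, secure-file access, calibration) is an implementation choice suggested by, not mandated by, the embodiment above.

```swift
// Hypothetical session state used only for this sketch.
final class DeviceSession {
    private(set) var activeAccountID: String?
    private(set) var appliedPreferences: [String: String] = [:]
    private(set) var calibrationLoaded = false
    private(set) var secureFilesUnlocked = false

    func logIn(accountID: String, preferences: [String: String]) {
        activeAccountID = accountID
        appliedPreferences = preferences
        calibrationLoaded = true
        secureFilesUnlocked = true
    }

    /// Called when sensors report the device was removed from the wearer's body.
    func handleDeviceRemovedFromBody() {
        // Cease using the first user's settings: remove applied preferences,
        // block access to secure files, and drop loaded calibration information.
        activeAccountID = nil
        appliedPreferences.removeAll()
        calibrationLoaded = false
        secureFilesUnlocked = false
    }
}
```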
In some embodiments, the biometric information received via the one or more input devices is iris identification information; the determination that the biometric information received via the one or more input devices corresponds to the first registered user includes a determination that the iris identification information (e.g., iris scan information) received via the one or more input devices corresponds to the first registered user; and the determination that the biometric information received via the one or more input devices does not correspond to the first registered user includes a determination that the iris identification information (e.g., iris scan information) received via the one or more input devices does not correspond to the first registered user. For example, in some embodiments, the computer system is a head-mounted system (e.g., a headset), and the iris identification information is provided via one or more input devices (e.g., eye tracking device 130) in communication with the computer system.
Automatically identifying the user based on the iris identification information provides the user with the ability to perform various actions without explicit input (e.g., log in to his or her user account without the user explicitly requesting a log in). Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices corresponds to a respective registered user, the computer system displays a visual indication (e.g., personalized user interface 710, indicator 712, personalized user interface 714, indicator 716d) that the computer system has been enabled to be used with one or more settings associated with the respective registered user (e.g., has been logged into a user account associated with the respective registered user) (e.g., displays text including a name and/or a user name of the respective registered user, and/or displays an avatar and/or image corresponding to the respective registered user).
Displaying a visual indication that the computer system has been enabled to be used with one or more settings associated with the respective registered user (e.g., that the computer system has been logged into the user account associated with the respective registered user) provides feedback to the user regarding the current state of the device (e.g., that the computer system has logged into the user account). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that a computer system has been placed on the body of a respective user, and in accordance with a determination that biometric information received via one or more input devices does not correspond to a registered user (e.g., does not correspond to a user that has previously been registered on the computer system), the computer system displays a user selection user interface (e.g., user interface 722) that includes a plurality of selectable options, including: a first selectable option (e.g., selectable option 724a) corresponding to the first registered user (e.g., corresponding to the first registered user's name, avatar, initials, and/or other visual representation); and a second selectable option (e.g., selectable option 724b) corresponding to a second registered user different from the first registered user. In some embodiments, the determination that the biometric information received via the one or more input devices does not correspond to a registered user includes a determination that the biometric information received via the one or more input devices does not satisfy one or more certainty thresholds with respect to each of the one or more registered users.
Displaying a user selection user interface in response to a determination that biometric information received via the one or more input devices does not correspond to a registered user provides feedback to the user regarding a current state of the device (e.g., biometric information received via the one or more input devices does not correspond to the registered user). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
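The certainty-threshold determination referenced above can be illustrated with the following sketch. The similarity measure, the threshold value, and the names (IrisScan, EnrolledIris, identify) are assumptions made only for illustration; the embodiments above require only that identification fail, and the user selection user interface be shown, when no registered user's certainty threshold is satisfied.

```swift
// Hypothetical iris-matching sketch; the scoring model is illustrative only.
struct IrisScan { let features: [Double] }

struct EnrolledIris {
    let accountID: String
    let features: [Double]
}

/// Returns a similarity score in 0...1 (1 = identical). The formula here is a
/// stand-in; a real matcher would use its own distance metric.
func similarity(_ a: [Double], _ b: [Double]) -> Double {
    guard a.count == b.count, !a.isEmpty else { return 0 }
    let meanAbsDiff = zip(a, b).map { pair in abs(pair.0 - pair.1) }
        .reduce(0, +) / Double(a.count)
    return max(0, 1 - meanAbsDiff)
}

enum IdentificationResult {
    case registered(accountID: String)
    case unrecognized   // e.g. show the user selection interface (722)
}

func identify(scan: IrisScan,
              enrolled: [EnrolledIris],
              certaintyThreshold: Double = 0.9) -> IdentificationResult {
    // Pick the best-scoring enrolled iris, if any.
    let best = enrolled
        .map { ($0.accountID, similarity(scan.features, $0.features)) }
        .max(by: { $0.1 < $1.1 })
    if let best, best.1 >= certaintyThreshold {
        return .registered(accountID: best.0)
    }
    // No registered user satisfies the certainty threshold.
    return .unrecognized
}
```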
In some embodiments, while displaying the user selection user interface (e.g., user interface 722) that includes the plurality of selectable options (e.g., options 724a-724c), the computer system receives a user input (e.g., user input 726) corresponding to selection of a respective selectable option of the plurality of selectable options, the respective selectable option corresponding to a respective registered user. After receiving the user input corresponding to selection of the respective selectable option, the computer system receives updated biometric information via the one or more input devices (e.g., fig. 7F). In response to receiving the updated biometric information (e.g., a fingerprint, an image (e.g., a photograph and/or scan) representing the face of the respective user, and/or iris identification information (e.g., iris scan information)), and in accordance with a determination that the updated biometric information received via the one or more input devices corresponds to the respective registered user, the computer system enables the computer system to be used with one or more settings associated with a respective user account associated with the respective registered user (e.g., displays user interface 710 of fig. 7B) (e.g., logs the computer system into the respective user account associated with the respective registered user). In some embodiments, the computer system optionally displays, via the display generation component, a respective user interface (e.g., personalized user interface 610) corresponding to the respective registered user. In some implementations, the respective user interface indicates a successful login to the respective user account corresponding to the respective registered user. In some implementations, enabling the computer system to be used with the one or more settings associated with the respective user account associated with the respective registered user (e.g., logging the computer system into the respective user account) includes one or more of: applying a set of respective user preferences associated with the respective user account, providing access to certain encrypted and/or secure user files associated with the respective user account, and/or loading calibration information associated with the respective user account. In response to receiving the updated biometric information, and in accordance with a determination that the updated biometric information received via the one or more input devices does not correspond to the respective registered user, the computer system forgoes enabling the computer system to be used with the one or more settings associated with (and/or specified by) the respective user account associated with the respective registered user (e.g., a password input user interface 628 is displayed) (e.g., forgoes logging the computer system into the respective user account associated with the respective registered user).
Forgoing enabling the computer system to be used with one or more settings associated with a respective user account associated with a respective registered user when the biometric information is determined not to correspond to the respective registered user (e.g., forgoing logging the computer system into the respective user account associated with the respective registered user) enhances security and may prevent an unauthorized user from initiating a sensitive operation. Forgoing enabling the computer system to be used with one or more settings associated with a respective user account associated with the respective registered user when the biometric information is determined not to correspond to the respective registered user (e.g., forgoing logging the computer system into the respective user account associated with the respective registered user) also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
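The selection-then-reverify fallback described above is sketched below. The closure-based design and the names (handleAccountSelection, verifyBiometric, LoginOutcome) are hypothetical; the only behavior taken from the description is that the selected account's settings are enabled when the updated biometric information matches and are withheld otherwise.

```swift
enum LoginOutcome {
    case enabled(accountID: String)   // that account's settings are applied
    case declined                     // e.g. fall back to a password input UI
}

/// `verifyBiometric` stands in for re-capturing biometric information and
/// comparing it against the selected account's enrolled data (fig. 7F).
func handleAccountSelection(selectedAccountID: String,
                            verifyBiometric: (String) -> Bool) -> LoginOutcome {
    if verifyBiometric(selectedAccountID) {
        // Updated biometric information corresponds to the selected user.
        return .enabled(accountID: selectedAccountID)
    }
    // Otherwise forgo enabling the selected account's settings.
    return .declined
}
```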
In some embodiments, while displaying the user selection user interface (e.g., user interface 722) that includes the plurality of selectable options (e.g., options 724a-724c), the computer system receives a user input (e.g., user input 726) corresponding to selection of a respective selectable option of the plurality of selectable options, the respective selectable option corresponding to a respective registered user. After receiving the user input corresponding to selection of the respective selectable option: in accordance with a determination that a set of biometric criteria is not satisfied (e.g., the biometric criteria are not satisfied when the respective registered user is not enrolled in biometric authentication (e.g., has not opted in to biometric authentication) and/or when a first setting of the respective registered user is not enabled), the computer system displays, via the display generation component, a password input user interface (e.g., user interface 728); and in accordance with a determination that the first setting of the respective registered user is enabled (e.g., biometric authentication is enabled), the computer system performs automatic biometric authentication (e.g., fig. 7F) (and, optionally, displays the password input user interface if the automatic biometric authentication does not succeed). Performing automatic biometric authentication includes: receiving updated biometric information (e.g., a fingerprint, an image (e.g., a photograph and/or scan) representing the face of the respective user, and/or iris identification information (e.g., iris scan information)) via the one or more input devices; in accordance with a determination that the updated biometric information received via the one or more input devices corresponds to the respective registered user, enabling the computer system to be used with one or more settings associated with (or specified by) the respective user account associated with the respective registered user (e.g., displaying the user interface 710 of fig. 7B) (e.g., logging the computer system into the respective user account associated with the respective registered user); and in accordance with a determination that the updated biometric information received via the one or more input devices does not correspond to the respective registered user, forgoing enabling the computer system to be used with the one or more settings associated with the respective user account associated with the respective registered user (e.g., displaying the password input user interface 728 of fig. 7G) (e.g., forgoing logging the computer system into the respective user account associated with the respective registered user).
In some embodiments, enabling the computer system to be used with one or more settings associated with a respective user account associated with a respective registered user (e.g., logging the computer system into the respective user account associated with the respective registered user) includes displaying, via the display generation component, a respective user interface corresponding to the respective registered user. In some implementations, the respective user interface indicates a successful login to the respective user account corresponding to the respective registered user. In some implementations, enabling the computer system to be used with the one or more settings associated with the respective user account associated with the respective registered user (e.g., logging the computer system into the respective user account) includes one or more of: applying a set of respective user preferences associated with the respective user account, providing access to certain encrypted and/or secure user files associated with the respective user account, and/or loading calibration information associated with the respective user account.
Forgoing enabling the computer system to be used with one or more settings associated with a respective user account associated with a respective registered user when the biometric information is determined not to correspond to the respective registered user (e.g., forgoing logging the computer system into the respective user account associated with the respective registered user) enhances security and may prevent an unauthorized user from initiating a sensitive operation. Forgoing enabling the computer system to be used with one or more settings associated with a respective user account associated with the respective registered user when the biometric information is determined not to correspond to the respective registered user (e.g., forgoing logging the computer system into the respective user account associated with the respective registered user) also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
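The setting-gated variant (password entry when the respective registered user has not opted in to biometric authentication, automatic biometric authentication otherwise) can be sketched as a small routing function. The names (AuthenticationRoute, routeAfterSelection, biometricAuthEnabled) are hypothetical and only illustrate the branching described above.

```swift
enum AuthenticationRoute {
    case passwordEntry                // e.g. password input user interface 728
    case enabled(accountID: String)   // settings applied after a biometric match
    case forgone                      // match failed; do not enable the account
}

/// Sketch of the setting-gated flow: a per-user flag decides whether the
/// device attempts automatic biometric authentication at all.
func routeAfterSelection(selectedAccountID: String,
                         biometricAuthEnabled: Bool,
                         verifyBiometric: (String) -> Bool) -> AuthenticationRoute {
    guard biometricAuthEnabled else {
        // The respective registered user has not opted in to biometric
        // authentication: go straight to the password input user interface.
        return .passwordEntry
    }
    // Otherwise perform automatic biometric authentication.
    return verifyBiometric(selectedAccountID)
        ? .enabled(accountID: selectedAccountID)
        : .forgone
}
```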
It is noted that the details of the process described above with reference to method 800 (e.g., fig. 8) also apply in a similar manner to the methods described below. For example, methods 1000, 1200, and/or 1400 optionally include one or more features of the various methods described above with reference to method 800. For example, a set of user-specific device calibration settings (method 1000) and/or a user avatar with a particular visual appearance (method 1200) may be implemented on a computer system as part of the one or more settings associated with a first user account enabled on the computer system, as recited in method 800. As another example, user-specific settings may be applied and/or not applied based on automatic user identification when the device is passed between users, as recited in method 1400. For the sake of brevity, these details are not repeated hereinafter.
Fig. 9A-9F illustrate an exemplary user interface for automatically applying one or more device calibration settings based on an identification of a user, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 10A-10B.
Fig. 9A depicts an electronic device 700 that is a smart watch including a touch-sensitive display 702, a rotatable and depressible input mechanism 704 (e.g., rotatable and depressible relative to a housing or frame of the device), a button 706, and a camera 708. In some embodiments described below, the electronic device 700 is a wearable smart watch device. In some embodiments, the electronic device 700 is a smart phone, tablet, head-mounted system (e.g., a headset), or other computer system that includes and/or communicates with a display device (e.g., a display screen, a projection device, etc.). Electronic device 700 is a computer system (e.g., computer system 101 in fig. 1).
In fig. 9A, a user 703 places an electronic device 700 on his wrist. The electronic device 700 detects (e.g., via one or more sensors) that the electronic device 700 has been placed on the body of the user. In some embodiments, the electronic device 700 detects that one or more criteria indicating that the electronic device 700 has been placed on the body of a user have been met.
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 9A depicts the user 703 placing the smart watch on his wrist. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In such embodiments, the electronic device 700 detects (e.g., via one or more sensors) that the electronic device 700 has been placed on the user's head and/or that one or more criteria indicating that the electronic device 700 has been placed on the user's head have been met.
In fig. 9A, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 attempts to automatically identify the user 703. In some embodiments, the electronic device 700 attempts to identify the user 703 in various ways. For example, the electronic device 700 optionally attempts to automatically identify the user based on biometric information such as facial recognition, eye (e.g., iris) recognition, and/or fingerprint recognition. In fig. 9A, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 collects biometric information (e.g., a facial scan and/or an eye (e.g., iris) scan using the camera 708).
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 9A depicts the user 703 placing the smart watch on his wrist. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In such embodiments, the electronic device 700 optionally attempts to automatically identify the user upon determining that the electronic device 700 has been placed on the user's head (e.g., based on iris recognition and/or facial recognition when the device is placed on the user's head).
In fig. 9A, the electronic device 700 displays an application selector user interface 902. The application selector user interface 902 includes a plurality of application icons. Each application icon is associated with an application, and selection of an application icon causes the electronic device 700 to display (e.g., by opening) the application associated with the application icon. The application selector user interface 902 includes a clock application affordance 904a that is displayed at a central location of the display 702. The application selector user interface 902 also includes a compass application affordance 904b, a timer application affordance 904c, and a podcast application affordance 904d.
In FIG. 9B, the electronic device 700 detects one or more user inputs 903a-903e. The electronic device 700 detects one or more user inputs 903a-903e based on information from one or more sensors, such as one or more cameras, gyroscopes, accelerometers, pressure sensors, eye movement sensors (e.g., scanners and/or cameras), and/or microphones. In the depicted embodiment, one or more user inputs 903a-903e correspond to navigational inputs that navigate within the user interface 902. In the depicted embodiment, the one or more user inputs 903a-903e include eye movement inputs (e.g., 903 a) (e.g., gaze movement, eye focus movement) and hand and/or wrist movement inputs 903b-903e. In some embodiments, the user input may include other and/or additional inputs, such as head movement inputs, torso movement inputs, leg and/or foot movement inputs, and/or touch inputs.
Figs. 9C-9E depict various exemplary scenarios in which the electronic device 700 responds to user inputs 903a-903e in different ways based on an automatic identification of the user 703.
Fig. 9C depicts a first exemplary scenario in which the electronic device 700 has identified the user 703 (e.g., based on biometric information) as a first registered user John. In response to identifying user 703 as a first registered user, electronic device 700 applies a first set of device calibration settings corresponding to the first registered user. The device calibration settings may include, for example, eye movement calibration settings, hand movement calibration settings, head movement calibration settings, torso movement calibration settings, foot and/or leg movement calibration settings, and/or touch pressure calibration settings. Further, because the first set of device calibration settings corresponding to the first registered user has been applied, the electronic device 700 responds to the user inputs 903a-903e based on (e.g., in accordance with) the user inputs 903a-903e and the first set of device calibration settings. In the depicted example, the electronic device 700 updates the display of the application selector user interface 902 to display navigation within the application selector user interface 902 based on the user inputs 903a-903e and the first set of device calibration settings. In FIG. 9C, the compass application affordance 904b previously positioned at the upper left position of the display 702 in FIG. 9A has been moved closer to the center position of the display 702 in response to user inputs 903a-903e.
Fig. 9D depicts a second exemplary scenario in which the electronic device 700 has identified the user 703 (e.g., based on biometric information) as a second registered user, Sarah. In response to identifying user 703 as the second registered user, electronic device 700 applies a second set of device calibration settings (e.g., different from the first set of device calibration settings) corresponding to the second registered user. The electronic device 700 responds to the user inputs 903a-903e based on (e.g., in accordance with) the user inputs 903a-903e and the second set of device calibration settings. In the depicted example, the electronic device 700 updates the display of the application selector user interface 902 to display navigation within the application selector user interface 902 based on the user inputs 903a-903e and the second set of device calibration settings. In FIG. 9D, the timer application affordance 904c previously positioned at the rightmost position of the display 702 in FIG. 9A has been moved to the center position of the display 702 in response to user inputs 903a-903e. Thus, as a result of applying different sets of device calibration settings in fig. 9C and 9D, the electronic device 700 has responded differently to the same set of user inputs 903a-903e.
Fig. 9E depicts a third exemplary scenario in which the electronic device 700 has determined that the user 703 is not a previously registered user (e.g., the electronic device 700 fails to match biometric information from the user 703 with stored biometric information of a previously registered user). In response to determining that user 703 is not a registered user, electronic device 700 applies a third set (e.g., guest) of device calibration settings corresponding to an unregistered guest user (and in some embodiments, different from the first and second sets of device calibration settings). The electronic device 700 responds to the user inputs 903a-903e based on (e.g., upon, in accordance with) the user inputs 903a-903e and a third set (e.g., guest) of device calibration settings. In the depicted example, electronic device 700 updates the display of application selector user interface 902 to display navigation within application selector user interface 902 based on user inputs 903a-903e and a third set (e.g., guest) of device calibration settings. In fig. 9E, the podcast application affordance 904d previously positioned at the bottom center position of the display 702 in fig. 9A has been moved to the center position of the display 702 in response to user inputs 903a-903e. Thus, as a result of applying different sets of device calibration settings in fig. 9C, 9D, and 9E, the electronic device 700 has responded differently to the same set of user inputs 903a-903e.
In figs. 9C-9E, the electronic device 700 and application selector user interface 902 are used as examples to demonstrate that different sets of device calibration settings are applied based on the identification of the user. Such features may be applied in a similar manner to different situations and scenarios. For example, in some embodiments, the electronic device 700 is a head-mounted system (e.g., a headset). In some such embodiments, the electronic device 700 displays a three-dimensional user interface and/or virtual environment. The user navigates within the three-dimensional user interface and/or virtual environment by providing one or more user inputs that, in some embodiments, include gaze and/or eye movement, hand movement, head movement, and/or torso movement. In response to the user inputs, the electronic device 700 navigates within the three-dimensional user interface and/or virtual environment. Navigation within the three-dimensional user interface and/or virtual environment may differ based on the different sets of device calibration settings that are applied (e.g., different sets of eye movement calibration, hand movement calibration, and/or head movement calibration for different users).
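To make the difference between calibration sets concrete, the following sketch shows how the same raw gaze and hand input can produce different navigation results under different profiles, as in figs. 9C-9E. The field names, the linear offset-and-gain model, and the blending weights are assumptions of this sketch only; they do not describe the actual calibration model of the disclosed system.

```swift
// Hypothetical calibration profile; the linear model is illustrative only.
struct CalibrationProfile {
    var gazeOffset: (x: Double, y: Double)   // corrects systematic gaze bias
    var handGain: Double                     // scales hand/wrist motion

    static let guestDefaults = CalibrationProfile(gazeOffset: (0, 0), handGain: 1.0)
}

struct RawInput {
    var gaze: (x: Double, y: Double)
    var handDelta: (x: Double, y: Double)
}

/// Maps a raw input sample to a navigation delta using the active profile.
func navigationDelta(for input: RawInput,
                     using profile: CalibrationProfile) -> (x: Double, y: Double) {
    let gx = input.gaze.x - profile.gazeOffset.x
    let gy = input.gaze.y - profile.gazeOffset.y
    // Blend corrected gaze with scaled hand movement; the weighting is arbitrary.
    return (x: gx * 0.2 + input.handDelta.x * profile.handGain,
            y: gy * 0.2 + input.handDelta.y * profile.handGain)
}
```

With two different profiles (e.g., John's and Sarah's), the same RawInput value yields two different navigation deltas, which is the behavior the three scenarios above illustrate.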
In some embodiments, a set of device calibration settings for a registered user is determined based on one or more calibration inputs provided by the registered user. In some embodiments, the electronic device 700 requests one or more calibration inputs during the enrollment and/or registration process, and the user optionally provides one or more calibration inputs. For example, the electronic device 700 instructs the user to move a portion of the user's body (e.g., hands, arms, legs, feet, torso, head, and/or eyes) in a predefined manner, and/or the electronic device 700 requires the user to track the movement of an object with his or her eyes. Based on one or more calibration inputs, the electronic device 700 optionally determines and stores one or more values (e.g., offset values) that at least partially define device calibration settings for the registered user.
In some embodiments, the set of guest device calibration settings represents a set of default device calibration settings, and the set of default device calibration settings is determined without any user input (e.g., without calibration input). In such embodiments, the guest user does not have to provide any calibration inputs, and the user inputs provided by the guest user are processed according to the set of default device calibration settings. In some embodiments, the set of guest device calibration settings is determined based on one or more calibration inputs provided by the guest user. For example, the electronic device 700 optionally requires the guest user to provide one or more calibration inputs in order to determine the device calibration settings to apply for the guest user. In some embodiments, the one or more calibration inputs requested from and/or received from the guest user may represent a subset of (e.g., fewer than) the one or more calibration inputs requested from and/or received from the registered user. For example, the guest user is required to provide fewer calibration inputs than the registered user.
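The offset-value idea mentioned above (storing values derived from calibration inputs) can be sketched as follows. The averaging model and the names (GazeSample, gazeOffset) are assumptions for illustration; an empty sample set, as for a guest who skips calibration, simply yields the zero (default) offset, matching the default-settings behavior just described.

```swift
// Sketch of deriving an offset-style calibration value from enrollment input.
struct GazeSample {
    let target: (x: Double, y: Double)    // where the user was asked to look
    let measured: (x: Double, y: Double)  // where the eye tracker reported they looked
}

/// Average error between requested and measured gaze, stored as the user's offset.
func gazeOffset(from samples: [GazeSample]) -> (x: Double, y: Double) {
    guard !samples.isEmpty else { return (0, 0) }   // guests may skip calibration
    let sum = samples.reduce((x: 0.0, y: 0.0)) { acc, s in
        (x: acc.x + (s.measured.x - s.target.x),
         y: acc.y + (s.measured.y - s.target.y))
    }
    return (x: sum.x / Double(samples.count), y: sum.y / Double(samples.count))
}
```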
In fig. 9F, the user 703 has removed the electronic device 700 from his body. The electronic device 700 detects that the electronic device 700 is no longer positioned on the body of the user. In response to detecting that the electronic device 700 is no longer positioned on the user's body, the electronic device 700 optionally logs out of any user accounts it has logged in to, including ceasing to apply the device calibration settings applied for the user 703. Logging out of the user account may include, for example, ceasing to display a user interface that was being displayed prior to removal of the device 700 from the user's body, ceasing to apply one or more user settings (e.g., device calibration settings) that are being applied, and/or ceasing to provide access to secure content associated with the user account.
In the depicted embodiment, the electronic device 700 is a smart watch, and figs. 9A-9F depict the user 703 wearing the smart watch on his wrist or removing the smart watch from his wrist, and also providing user inputs 903a-903e via the smart watch. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In such embodiments, the electronic device 700 optionally attempts to automatically identify the user upon determining that the electronic device 700 has been placed on the user's head (e.g., based on iris recognition and/or facial recognition when the device is placed on the user's head). Additionally, in such embodiments, content such as user interface 902 is optionally displayed via the head-mounted system, and one or more user inputs (e.g., user inputs 903a-903e) are received via one or more input devices in communication with the head-mounted system. Similarly, device calibration settings are optionally applied for the head-mounted system and one or more input devices in communication with the head-mounted system. For example, the device calibration settings may include an eye gaze calibration setting, a head movement calibration setting, a hand and/or arm movement calibration setting, a torso calibration setting, and/or a foot and/or leg calibration setting.
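Purely as an illustration of how the per-user calibration categories enumerated above might be grouped, a hypothetical container type is sketched below; the field names and the use of optionals are assumptions, not a description of the disclosed system's data model.

```swift
// Hypothetical grouping of per-user calibration settings for a head-mounted
// system; each field is optional so that only calibrations the user actually
// performed are stored, and the rest fall back to defaults.
struct HeadMountedCalibration {
    var eyeGaze: [Double]?        // eye gaze calibration
    var headMovement: [Double]?   // head movement calibration
    var handAndArm: [Double]?     // hand and/or arm movement calibration
    var torso: [Double]?          // torso calibration
    var footAndLeg: [Double]?     // foot and/or leg calibration
}
```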
Fig. 10A-10B are flowcharts illustrating methods for automatically applying one or more device calibration settings based on an identification of a user using an electronic device, according to some embodiments. Method 1000 is performed at a computer system (e.g., 101, 700) in communication with a display generation component and one or more input devices. Some operations in method 1000 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1000 provides an intuitive way for automatically applying one or more device calibration settings based on the identity of the user. The method reduces the cognitive burden of the user when applying the device calibration settings, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to apply device calibration settings faster and more efficiently saves power and increases the time between battery charges.
In some implementations, a computer system (e.g., device 700, computer system 101) (e.g., a smart phone, a smart watch, a tablet, a headset, and/or a wearable device) in communication with a display generation component (e.g., display 702) (e.g., a display controller, a touch-sensitive display system, a display (e.g., integrated and/or connected), a 3D display, a transparent display, a projector, and/or a heads-up display) and one or more input devices (e.g., 702, 704, 706, 708) (e.g., a touch-sensitive surface (e.g., a touch-sensitive display), a mouse, a keyboard, a remote control, a visual input device (e.g., a camera), an audio input device (e.g., a microphone), and/or a biometric sensor (e.g., a fingerprint sensor, a facial recognition sensor, and/or an iris recognition sensor)) detects that at least a portion of the computer system has been placed on the body of a respective user (e.g., user 703) (1002). After detecting that at least a portion of the computer system has been placed on the body of the respective user (1004), the computer system detects input from the respective user (e.g., user inputs 903a-903e in fig. 9B) based on movement or position of at least a portion of the body of the respective user (e.g., position of the head, hands, eyes, or other body part of the respective user).
In response to detecting the input from the respective user, the computer system (e.g., device 700) responds (1010) to the input from the respective user (e.g., user inputs 903a-903e in fig. 9B). In accordance with a determination (1012) that the respective user is a first user that has been previously registered with the computer system (e.g., based on an option selected by the respective user to identify the respective user as the first user, or based on automatic biometric identification of the respective user as the first user), the computer system generates a response to the input (e.g., fig. 9C) based on the movement or location of the portion of the respective user's body and a first set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) specific to the first user (1014). In accordance with a determination (1016) that the respective user is not the first user (e.g., based on an option selected by the respective user indicating that the respective user is not the first user, or based on identifying the respective user as someone other than the first user), the computer system generates a response (e.g., fig. 9D) to the input (e.g., user inputs 903a-903e in fig. 9B) based on the movement or location of the portion of the respective user's body and without using the first set of device calibration settings specific to the first user (1018).
In some embodiments, generating a response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings includes generating a response to the input based on the movement or position of the portion of the body of the respective user and a second set of device calibration settings (e.g., a set of default device calibration settings and/or guest device calibration settings) different from the first set of device calibration settings (e.g., without applying the first set of device calibration settings).
In some embodiments, in response to detecting that at least a portion of the computer system has been placed on the body of the respective user, the computer system receives biometric information (e.g., corresponding to the respective user) via the one or more input devices. In accordance with a determination that the biometric information (e.g., a fingerprint, an image (e.g., a photograph and/or a scan) representing the respective user's face, and/or iris identification information (e.g., iris scan information)) received via the one or more input devices corresponds to a first registered user (e.g., a first registered user of a plurality of users) (e.g., a user that has been previously registered on the computer system), the computer system applies a first set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) corresponding to the first registered user (e.g., does not apply a second set of device calibration settings); and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to a registered user (e.g., does not correspond to any user that has previously been registered on the computer system), the computer system applies a second set of device calibration settings (e.g., a set of default device calibration settings and/or guest device calibration settings) that is different from the first set of device calibration settings (e.g., does not apply the first set of device calibration settings).
Automatically applying a first set of device calibration settings specific to a first user when the respective user is determined to be the first user provides the user with the ability to use the system with user-specific settings without explicitly requesting application of those settings. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, generating a response to an input (e.g., user inputs 903a-903e in fig. 9B) based on movement or position of a portion of the body of the respective user and without using a first set of device calibration settings specific to the first user includes: in accordance with a determination (1020) that the respective user is an unregistered user (e.g., a user that is not registered on the computer system) (e.g., based on an option selected by the respective user that indicates that the respective user is not a registered user (e.g., is a guest user), or based on identifying the respective user as an unregistered user), generating a response (e.g., fig. 9E) to the input (e.g., user inputs 903a-903e in fig. 9B) based on the movement or location of the portion of the body of the respective user and a second set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) that is different from the first set of device calibration settings and that represents a set of guest device calibration settings for unregistered users (e.g., without applying the first set of device calibration settings) (1022). In some embodiments, the second set of device calibration settings representing the set of guest device calibration settings is applied for any user identified as an unregistered user. In some embodiments, after generating the response to the input based on the movement or position of the portion of the body of the respective user and the second set of device calibration settings representing the set of guest device calibration settings for unregistered users, the computer system detects that the computer system has been removed from the body of the respective user. After detecting that the computer system has been removed from the body of the respective user, the computer system detects that at least a portion of the computer system has been placed on the body of a second respective user, wherein the second respective user is different from the respective user. After detecting that at least a portion of the computer system has been placed on the body of the second respective user, the computer system detects input from the second respective user based on movement or position of at least a portion of the body of the second respective user. In response to detecting the input from the second respective user, the computer system responds to the input from the second respective user, including: in accordance with a determination that the second respective user is an unregistered user (e.g., a user that is not registered on the computer system) (e.g., based on an option selected by the second respective user indicating that the second respective user is not a registered user (e.g., is a guest user), or based on identifying the second respective user as an unregistered user), generating a response to the input from the second respective user based on movement or location of a portion of the second respective user's body and the second set of device calibration settings representing the set of guest device calibration settings for unregistered users (e.g., without applying the first set of device calibration settings).
Automatically applying a second set of device calibration settings for an unregistered user when the respective user is determined to be an unregistered user provides the user with the ability to use the system with various settings without explicitly requesting application of those settings. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, generating a response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings specific to the first user comprises: in accordance with a determination (1024) that the respective user is a second user, different from the first user, that has been previously registered with the computer system (e.g., based on an option selected by the respective user, or based on automatically identifying the respective user as the second user), generating (1026) a response (e.g., fig. 9D) to the input (e.g., user inputs 903a-903e in fig. 9B) based on the movement or location of the portion of the respective user's body and a third set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) that is different from the first set of device calibration settings and that is specific to the second user (e.g., without applying the first set of device calibration settings).
Automatically applying a third set of device calibration settings specific to the second user when the respective user is determined to be the second user provides the user with the ability to use the system with the user-specific settings without explicitly requesting application of those settings. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first set of device calibration settings is determined based on a plurality of device calibration inputs received from a first user. In some embodiments, as part of a registration process for registering a first user on a computer system, a plurality of device calibration inputs are received (e.g., one or more hand movement inputs, arm movement inputs, eye (e.g., iris) movement inputs detected in response to one or more prompts such as a prompt to perform a predetermined gesture or move an eye in a predetermined gaze pattern). Customizing device calibration settings for a user based on device calibration inputs received from the user enables the device to more accurately and efficiently respond to user inputs. Customizing device calibration and response to a particular user enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, generating a response to an input (e.g., user inputs 903a-903e in fig. 9B) based on movement or position of a portion of the body of the respective user and without using a first set of device calibration settings specific to the first user includes: in accordance with a determination that the respective user is an unregistered user (e.g., not a user that has been previously registered on the computer system) (e.g., based on an option selected by the respective user indicating that the respective user is not a registered user (e.g., is a guest user), or based on identifying the respective user as an unregistered user), a response to the input is generated based on movement or location of a portion of the body of the respective user and a second set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) that are different from and represent a set of guest device calibration settings for the unregistered user (e.g., without applying the first set of device calibration settings), wherein the second set of device calibration settings is determined based on a plurality of device calibration inputs received from the unregistered user. In some embodiments, the plurality of device calibration inputs from the unregistered user are optional, and a set of default device calibration settings may be applied without receiving the plurality of device calibration inputs from the unregistered user. In some embodiments, multiple device calibration inputs from unregistered users are mandatory, and unregistered users cannot continue to interact with the user interface until multiple device calibration inputs from unregistered users have been received.
In some embodiments, upon detecting that at least a portion of the computer system has been placed on the body of the respective user, and in accordance with a determination that the respective user is an unregistered user, the computer system displays one or more prompts to the unregistered user to provide a plurality of device calibration inputs; and wherein generating a response to the input based on the movement or position of the portion of the body of the respective user and without using at least some of the first set of device calibration settings specific to the first user comprises: after displaying one or more prompts to an unregistered user to provide a plurality of device calibration inputs: in accordance with a determination that an unregistered user has provided a plurality of device calibration inputs, generating a response to the inputs based on movement or position of a portion of a body of the respective user and a second set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) different from the first set of device calibration settings, wherein the second set of device calibration settings is determined based on the plurality of device calibration inputs received from the unregistered user; and generating a response to the input based on the movement or location of the portion of the body of the respective user and a third set of device calibration settings different from the first and second sets of device calibration settings in accordance with a determination that the unregistered user has not provided the plurality of device calibration inputs (e.g., the unregistered user has refused to provide the plurality of device calibration inputs and/or the threshold period of time has elapsed without the unregistered user providing the plurality of device calibration inputs), wherein the third set of device calibration settings represents a set of default guest calibration settings.
In some embodiments, upon detecting that at least a portion of the computer system has been placed on the body of the respective user, and in accordance with a determination that the respective user is an unregistered user, the computer system displays one or more prompts to the unregistered user to provide a plurality of device calibration inputs; and wherein generating a response to the input based on the movement or position of the portion of the body of the respective user and without using at least some of the first set of device calibration settings specific to the first user comprises: after displaying the one or more prompts to the unregistered user to provide the plurality of device calibration inputs: in accordance with a determination that the unregistered user has provided the plurality of device calibration inputs, generating a response to the input based on the movement or position of the portion of the body of the respective user and a second set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) different from the first set of device calibration settings, wherein the second set of device calibration settings is determined based on the plurality of device calibration inputs received from the unregistered user; and in accordance with a determination that the unregistered user has not provided the plurality of device calibration inputs (e.g., the unregistered user has refused to provide the plurality of device calibration inputs and/or a threshold period of time has elapsed without the unregistered user providing the plurality of device calibration inputs), forgoing generating a response to the input based on the movement or location of the portion of the body of the respective user.
Customizing device calibration settings for a user based on device calibration inputs received from the user enables the device to more accurately and efficiently respond to user inputs. Customizing device calibration and response to a particular user enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
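The two guest alternatives described above (derive settings from a reduced set of calibration inputs, or fall back to defaults when the guest declines) can be sketched together. The names (GuestCalibrationOutcome, calibrateGuest) and the closure-based prompting are assumptions of this sketch; the fallback behavior when no inputs are provided is the first of the two alternatives above, with the forgo-response alternative noted in a comment.

```swift
enum GuestCalibrationOutcome {
    case derived([Double])   // guest provided the (reduced) calibration inputs
    case defaults            // guest declined or timed out; use default settings
}

/// Sketch of the guest flow: prompt for a reduced set of calibration inputs
/// and fall back to default guest calibration if none are provided.
func calibrateGuest(promptForInputs: () -> [Double]?,
                    deriveSettings: ([Double]) -> [Double]) -> GuestCalibrationOutcome {
    if let inputs = promptForInputs(), !inputs.isEmpty {
        return .derived(deriveSettings(inputs))
    }
    // The embodiments above describe two alternatives at this point: apply a
    // default guest calibration set, or forgo responding to movement input
    // entirely; this sketch models only the first alternative.
    return .defaults
}
```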
In some embodiments, the plurality of device calibration inputs received from the unregistered user is a subset of, and includes fewer inputs than, the plurality of device calibration inputs received from the first user. Customizing device calibration settings for a user based on device calibration inputs received from the user enables the device to more accurately and efficiently respond to user inputs. Customizing device calibration and response to a particular user enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, generating a response to an input (e.g., navigation within user interface 902 in figs. 9C-9E) based on movement or position of a portion of a body of a respective user and without using a first set of device calibration settings specific to the first user includes: in accordance with a determination that the respective user is an unregistered user (e.g., not a user that has been previously registered on the computer system) (e.g., based on an option selected by the respective user indicating that the respective user is not a registered user (e.g., is a guest user), or based on identifying the respective user as an unregistered user), generating a response to the input based on a movement or location of a portion of the body of the respective user and a second set of device calibration settings (e.g., movement calibration, hand calibration, and/or eye calibration) that is different from the first set of device calibration settings and that represents a set of guest device calibration settings for unregistered users (e.g., without applying the first set of device calibration settings) (e.g., guest calibration settings in fig. 9E), wherein the second set of device calibration settings is a set of default device calibration settings and is not based on user input from the unregistered user.
Automatically applying a second set of device calibration settings for an unregistered user when the respective user is determined to be an unregistered user provides the user with the ability to use the system with various settings without explicitly requesting application of those settings. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first set of device calibration settings includes one or more eye and/or gaze movement calibration settings. Customizing the device calibration settings (including the eye calibration settings) for the user enables the device to respond more accurately and efficiently to user input. Customizing device calibration and response to a particular user enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first set of device calibration settings includes one or more hand movement calibration settings. Customizing the device calibration settings (including the hand calibration settings) for the user enables the device to respond more accurately and efficiently to user input. Customizing device calibration and response to a particular user enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, generating a response to the input based on the movement or position of the portion of the body of the respective user and the first set of device calibration settings specific to the first user includes enabling a computer system (e.g., 700) to be used with the first set of device calibration settings specific to the first user. When the computer system is enabled for use with a first set of device calibration settings specific to a first user (e.g., when the computer system is logged into a first user account), the computer system detects that at least a portion of the computer system has been removed from the body of the respective user (e.g., detects that the computer system is no longer being worn by the respective user). In response to detecting that at least a portion of the computer system has been removed from the body of the respective user (e.g., for longer than a predetermined threshold duration), the computer system ceases to enable the computer system to be used with a first set of device calibration settings specific to the first user (e.g., fig. 9F) (e.g., logs the computer system out of a first user account associated with the first user). In some embodiments, logging the computer system out of the first user account includes ceasing to apply the first set of device calibration settings specific to the first user. In some embodiments, in response to detecting that at least a portion of the computer system has been removed from the body of the respective user for longer than a predetermined threshold duration, the computer system is logged out of the first user account regardless of how the computer system was logged into the first user account (e.g., through biometric authentication, password authentication, or other authentication).
Ceasing to enable the computer system to be used with the first set of device calibration settings specific to the first user when it is determined that the computer system has been removed from the body of the respective user (e.g., logging the computer system out of the first user account) enhances security. For example, ceasing to enable the computer system to be used with the first set of device calibration settings when it is determined that the computer system has been removed from the body of the respective user (e.g., logging the computer system out of the first user account) may prevent an unauthorized user from initiating a sensitive operation. Ceasing to enable the computer system to be used with the first set of device calibration settings when it is determined that the computer system has been removed from the body of the respective user (e.g., logging the computer system out of the first user account) also enhances the operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access and by preventing the user from using the device with device calibration settings specific to another user), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
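A minimal sketch of the removal-based teardown described above, assuming a hypothetical session manager and a placeholder grace period; none of these names come from the disclosure:

```swift
import Foundation

// Ends the user-specific session when the device has been off the body for
// longer than a threshold, regardless of how the user originally signed in.
final class WearSessionManager {
    private(set) var activeUserID: String?
    private var removalTimer: Timer?
    let removalGracePeriod: TimeInterval = 10   // assumed threshold, in seconds

    func beginSession(for userID: String) {
        activeUserID = userID                    // user-specific calibration now applies
    }

    func deviceWasRemoved() {
        removalTimer = Timer.scheduledTimer(withTimeInterval: removalGracePeriod,
                                            repeats: false) { [weak self] _ in
            self?.endSession()
        }
    }

    func deviceWasPutBackOn() {
        removalTimer?.invalidate()               // re-worn within the grace period
        removalTimer = nil
    }

    private func endSession() {
        activeUserID = nil                       // stop applying user-specific calibration
        // ... also log out of the user account and clear per-user state
    }
}
```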
In some embodiments, the determination that the respective user is the first user is performed automatically (e.g., based on automatic biometric identification/authentication of biometric information collected from the respective user via one or more input devices) in response to detecting that at least a portion of the computer system has been placed on the user's body (e.g., fig. 9A).
Automatically identifying a user when it is determined that at least a portion of the computer system has been placed on the user's body and applying a set of user-specific device calibration settings provides the user with the ability to use the system with the user-specific settings without explicitly requesting application of those settings. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
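The on-body trigger could be sketched as follows; the protocol and the match cases are assumptions introduced only for illustration:

```swift
import Foundation

enum BiometricMatch {
    case registeredUser(id: String)   // e.g., iris or face scan matched an enrolled profile
    case noMatch
}

protocol BiometricIdentifier {
    func identifyWearer() -> BiometricMatch
}

// Called when on-body sensors report that the device was just put on:
// identification runs automatically, and the matching user's settings
// (or guest defaults) are applied without any explicit request.
func handleDevicePlacedOnBody(identifier: BiometricIdentifier,
                              applySettings: (String?) -> Void) {
    switch identifier.identifyWearer() {
    case .registeredUser(let id):
        applySettings(id)      // the identified user's calibration, avatar, preferences
    case .noMatch:
        applySettings(nil)     // guest defaults
    }
}
```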
It is noted that the details of the process described above with reference to method 1000 (e.g., fig. 10A-10B) also apply in a similar manner to the methods described elsewhere herein. For example, methods 800, 1200, and/or 1400 optionally include one or more features of the various methods described above with reference to method 1000. For example, a user-specific set of device calibration settings may be automatically applied as part of one or more settings associated with the user, as recited in method 800, and/or a user-specific set of device calibration settings may be automatically applied with a user-specific avatar, as recited in method 1200. As another example, user-specific device calibration settings may be applied and/or not applied based on automatic user identification when devices are handed over between users, as recited in method 1400. For the sake of brevity, these details are not repeated hereinafter.
Fig. 11A-11F illustrate exemplary user interfaces for automatically applying and displaying a user avatar based on identification of the user, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 12A-12B.
Fig. 11A depicts an electronic device 700 that is a smart watch that includes a touch-sensitive display 702, a rotatable and depressible input mechanism 704 (e.g., rotatable and depressible relative to a housing or frame of the device), buttons 706, and a camera 708. In some embodiments described below, the electronic device 700 is a wearable smart watch device. In some embodiments, the electronic device 700 is a smart phone, tablet, headset system (e.g., a headset), or other computer system that includes and/or communicates with a display device (e.g., a display screen, a projection device, etc.). Electronic device 700 is a computer system (e.g., computer system 101 in fig. 1).
In fig. 11A, a user 703 places an electronic device 700 on his wrist. The electronic device 700 detects (e.g., via one or more sensors) that the electronic device 700 has been placed on the body of the user. In some embodiments, the electronic device 700 detects that one or more criteria indicating that the electronic device 700 has been placed on the body of a user have been met.
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 11A depicts the user 703 placing the smart watch on his wrist. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a headset system (e.g., headphones). In such embodiments, the electronic device 700 detects (e.g., via one or more sensors) that the electronic device 700 has been placed on the user's head and/or that one or more criteria indicating that the electronic device 700 has been placed on the user's head have been met.
In fig. 11A, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 attempts to automatically identify the user 703. In some embodiments, the electronic device 700 attempts to identify the user 703 in various ways. For example, the electronic device 700 optionally attempts to automatically identify the user based on biometric information such as facial recognition, eye (e.g., iris) recognition, and/or fingerprint recognition. In fig. 11A, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 collects biometric information (e.g., a facial scan and/or an eye (e.g., iris) scan using the camera 708). In the exemplary scenario depicted in fig. 11A, the electronic device 700 determines that the biometric information from the user 703 corresponds to the first registered user John.
In fig. 11A, the electronic device 700 displays (e.g., after determining that the biometric information from the user 703 corresponds to a first registered user) a user interface 1102 that includes a selectable object 1104. Selectable object 1104 corresponds to a coexistence application in which multiple users may enter a coexistence environment to communicate with each other, as will be described in more detail with reference to later figures. In some embodiments, the coexistence environment is an XR environment (e.g., a virtual environment) with one or more avatars representing users present in the XR environment. The electronic device 700 detects a user input 1106 at a location on the user interface 1102 corresponding to the selection of the selectable object 1104.
In fig. 11B, in response to user input 1106, electronic device 700 replaces the display of user interface 1102 with coexistence user interface 1108. Coexistence user interface 1108 includes a first avatar 1110 and a second avatar 1112. The first avatar 1110 has a visual appearance corresponding to a remote user that is participating in a coexistence situation with the user 703. For example, the remote user is a registered user (e.g., a registered user on a remote electronic device operated by the registered user; a registered user of the service), and the user account corresponding to the remote user may be associated with avatar appearance information (e.g., avatar appearance information selected and/or specified by the remote user). As discussed above, in fig. 11A, the electronic device 700 has identified the user 703 as the first registered user John. Based on the identification, electronic device 700 displays avatar 1112, which is an avatar having a visual appearance corresponding to the first registered user (e.g., corresponding to the first user account corresponding to the first registered user). In some embodiments, the first registered user has selected and/or specified one or more visual characteristics (e.g., color, hairstyle, face shape, face size, eye color, eye size, mouth shape, mouth size, nose shape, nose size, and/or skin color) of avatar 1112. In some embodiments, one or more visual features of avatar 1112 move within user interface 1108 in response to movement of user 703. For example, if user 703 moves his or her head, eyebrows, eyes, nose, and/or mouth, avatar 1112 optionally moves within coexistence user interface 1108 in a corresponding manner. Similarly, one or more visual features of avatar 1110 optionally move within user interface 1108 in response to movement of the remote user. While the depicted coexistence user interface 1108 shows avatars 1110, 1112 comprising only heads, the avatars may comprise representations of additional parts of the user's body, and in some embodiments those avatar parts may move in accordance with corresponding movements of respective parts of the user's body by the user.
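The movement mirroring described above might be sketched as a simple pose mapping; `FacePose` and `AvatarPose` are invented stand-ins for whatever tracking and rendering data the system actually uses:

```swift
import Foundation

// Tracked facial movement of the wearer (illustrative fields only).
struct FacePose {
    var headYaw: Double
    var browRaise: Double
    var mouthOpen: Double
}

// Pose applied to the displayed avatar.
struct AvatarPose {
    var headYaw: Double
    var browRaise: Double
    var mouthOpen: Double
}

// Each tracking update moves the corresponding avatar features, so the avatar
// animates in step with its own user's movement (and only that user's movement).
func updatedAvatarPose(from face: FacePose) -> AvatarPose {
    AvatarPose(headYaw: face.headYaw,
               browRaise: face.browRaise,
               mouthOpen: face.mouthOpen)
}
```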
While FIG. 11B is shown from the perspective of electronic device 700, as seen by user 703, it is understood that similar features may be described from the perspective of a remote user (represented by avatar 1110) and an electronic device used and/or operated by the remote user. The electronic device of the remote user optionally displays a coexistence user interface similar to coexistence user interface 1108, and optionally displays two avatars 1110, 1112 corresponding to the remote user and the first registered user (user 703).
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 11A-11B depict a user interface (e.g., 1102, 1108) via a touch screen display 702 of the electronic device 700. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a headset system (e.g., headphones). In some implementations, the head-mounted system displays a user interface substantially similar to the user interface 1102 of fig. 11A and receives one or more user inputs corresponding to selection of the selectable object 1104. The user interface may differ from the user interface 1102 in one or more respects. For example, in some embodiments, the user interface displayed by the head-mounted system is a virtual three-dimensional user interface, while the user interface 1102 is displayed as a two-dimensional user interface. The one or more user inputs may include, for example, eye/gaze movement, hand movement, and one or more gestures corresponding to selection of the selectable object 1104. In some embodiments, in response to one or more user inputs corresponding to selection of selectable object 1104, the headset displays a coexistence user interface substantially similar to user interface 1108. The coexistence user interface may differ from the user interface 1108 in one or more respects. For example, in some embodiments, the coexistence user interface is a virtual three-dimensional user interface having an avatar (e.g., three-dimensional representation) of each user participating in the coexistence session (similar to avatars 1110 and 1112 of fig. 11B). Similar to that described above with reference to fig. 11B, in some embodiments, each user's avatar representation has one or more visual features selected and/or specified by the respective user of the avatar representation. Further, in some embodiments, each user's avatar representation has one or more visual features that move within the three-dimensional virtual environment based on the movement of the corresponding user of the avatar representation.
In fig. 11C, the user 703 has removed the electronic device 700 from his body. The electronic device 700 detects that the electronic device 700 is no longer positioned on the body of the user. In some embodiments, in response to detecting that the electronic device 700 is no longer positioned on the user's body, the electronic device 700 ceases to display the avatar 1112 within the coexistence user interface 1108 (similarly, a remote electronic device operated by a remote user may also cease to display an avatar representing and corresponding to the first registered user 703).
In fig. 11D, a second user 1103 (different from the first user and different from the remote user) places the electronic device 700 on her wrist. The electronic device 700 detects that the electronic device 700 has been placed on the body of the user.
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 11A-11B depict a user interface (e.g., 1102, 1108) via a touch screen display 702 of the electronic device 700. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a headset system (e.g., headphones). In some such implementations, in fig. 11C, the headset detects that the headset is no longer positioned on the head of the user 703. In response to detecting that the head-mounted system is no longer positioned on the head of the user 703, the head-mounted system ceases to display a representation of the user 703 within the virtual (e.g., three-dimensional) coexistence environment (e.g., ceases to display a (e.g., three-dimensional) avatar associated with the user 703). In fig. 11D, the user 1103 places the head-mounted system on her head, and the head-mounted system detects that the head-mounted system has been placed on the user's head.
In fig. 11D, in response to detecting that the electronic device 700 has been placed on the user's body, the electronic device 700 attempts to automatically identify the user 1103. In some embodiments, the electronic device 700 optionally attempts to identify the user 1103 in various ways. For example, the electronic device 700 optionally attempts to automatically identify the user based on biometric information such as facial recognition, eye (e.g., iris) recognition, and/or fingerprint recognition. In fig. 11D, the electronic device 700 collects biometric information (e.g., facial scan and/or eye (e.g., iris) scan using the camera 708) from the user 1103 in response to detecting that the electronic device 700 has been placed on the user's body and determines that the biometric information corresponds to a second registered user Sarah.
As discussed above, the first registered user 703 is associated with a particular set of avatar appearance information and the avatar 1112 is displayed using the avatar appearance information corresponding to the first registered user 703. Similarly, the second registered user Sarah is associated with a second set of avatar appearance information (different from the first set) such that the second registered user is represented by an avatar having a different visual appearance than the avatar of the first registered user.
In fig. 11E, in response to determining that the user 1103 is a second registered user (e.g., in response to determining that the biometric information corresponds to the second registered user), the electronic device 700 displays a new avatar 1114 having a visual appearance corresponding to the second registered user (e.g., having one or more visual features that have been selected and/or specified by the second registered user). Similar to avatar 1112, in some embodiments, one or more visual features of avatar 1114 optionally move within user interface 1108 in response to movement of user 1103. For example, if user 1103 moves her head, eyebrows, eyes, nose, and/or mouth, avatar 1114 moves within coexistence user interface 1108 in a corresponding manner (although avatar 1114 does not move based on the movement of user 703). It may be appreciated that based on the automatic user identification (e.g., based on the biometric identification), the electronic device 700 automatically displays the avatar 1112 corresponding to the first registered user 703, and then when the user of the electronic device 700 switches from the first registered user 703 to the second registered user 1103, the electronic device 700 automatically replaces the display of the avatar 1112 with the display of the avatar 1114 corresponding to the second registered user.
Fig. 11F depicts a different exemplary scenario in which the second user 1103 is an unregistered guest user. In this exemplary scenario, the electronic device 700 has determined that the second user 1103 is an unregistered user (e.g., based on biometric information). In response to determining that the second user 1103 is an unregistered user, the electronic device 700 displays an avatar 1116 having a placeholder appearance with a set of default visual features within the coexistence environment 1108. For example, in FIG. 11F, the placeholder appearance is an abstract circular representation. In some embodiments, while the registered user may select and/or specify one or more visual characteristics of his or her representative avatar, the unregistered guest user is optionally not given the option of selecting and/or specifying visual characteristics of his or her avatar.
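A sketch of the appearance selection implied by figs. 11E and 11F, using hypothetical types: a registered profile carries the appearance its user chose, and any unidentified wearer falls back to the placeholder:

```swift
import Foundation

struct AvatarAppearance {
    var description: String

    // Abstract default shown for unregistered guests (e.g., a simple circular shape).
    static let placeholder = AvatarAppearance(description: "abstract circular representation")
}

struct RegisteredProfile {
    var name: String
    var chosenAppearance: AvatarAppearance   // features selected and/or specified by the user
}

// Registered users get the appearance they selected; guests get the placeholder.
func avatarAppearance(for profile: RegisteredProfile?) -> AvatarAppearance {
    profile?.chosenAppearance ?? .placeholder
}
```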
In the illustrated embodiment, the electronic device 700 is a smart watch, and fig. 11A-11F depict the users 703, 1103 wearing the smart watch on their wrists or removing the smart watch from their wrists. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a headset system (e.g., headphones). In such embodiments, the electronic device 700 optionally attempts to automatically identify the user upon determining that the electronic device 700 has been placed on the user's head (e.g., based on iris recognition and/or facial recognition when the device is placed on the user's head). Additionally, in such embodiments, content such as user interface 1108 and avatars 1110, 1112, 1114, 1116 are optionally displayed via the head-mounted system, and one or more user inputs may be received via one or more input devices in communication with the head-mounted system. For example, in fig. 11E, in response to detecting that the head-mounted system has been placed on the head of the user 1103, the head-mounted system identifies the user 1103 (e.g., via iris scan authentication) as a second registered user. In some implementations, in response to identifying the user 1103 as the second registered user, the head-mounted system replaces the display of the avatar corresponding to the user 703 within the three-dimensional virtual coexistence environment with the (e.g., three-dimensional) avatar corresponding to the user 1103. In some embodiments, in fig. 11F, if the head-mounted system identifies the user 1103 as an unregistered and/or guest user, the head-mounted system replaces the display of the avatar corresponding to the user 703 within the three-dimensional coexistence environment with the (e.g., three-dimensional) avatar corresponding to the unregistered and/or guest user.
Fig. 12A-12B are flowcharts illustrating methods for automatically applying and displaying a user avatar based on an identification of a user using an electronic device, according to some embodiments. The method 1200 is performed at a first computing system (e.g., 700, 101) in communication with a display generation component and one or more input devices. Some operations in method 1200 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1200 provides an intuitive way for automatically applying and displaying a user avatar based on the user's identity. The method reduces the cognitive burden on the user when the user avatar is displayed, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to more quickly and efficiently display a user avatar saves power and increases the time between battery charges.
In some embodiments, a first computer system (e.g., device 700, computer system 101) (e.g., a smart phone, a smart watch, a tablet, a head-mounted system, a wearable device) in communication with a display generation component (e.g., display 702) (e.g., a display controller, a touch-sensitive display system, a display (e.g., integrated or connected), a 3D display, a transparent display, a projector, or a heads-up display) and one or more input devices (e.g., 702, 704, 706, 708) (e.g., a touch-sensitive surface (e.g., a touch-sensitive display), a mouse, a keyboard, a remote control, a visual input device (e.g., a camera), an audio input device (e.g., a microphone), and/or a biometric sensor (e.g., a fingerprint sensor, a facial recognition sensor, an iris recognition sensor)) detects a request (e.g., 1106) to display an avatar (e.g., a virtual avatar; an avatar displayed within an extended reality (XR) environment (e.g., a virtual environment that includes a representation of the user) (e.g., Virtual Reality (VR), Augmented Reality (AR), and/or Mixed Reality (MR))) of a user (e.g., user 703 and/or a remote user represented by avatar 1110) of a respective computer system (e.g., the first computer system, or a second computer system (e.g., a smart phone, a smart watch, a tablet, or a head-mounted system) different from the first computer system) (1202). In some embodiments, the request to display the avatar of the user of the respective computer system corresponds to a request to enter a communication session that includes the user of the respective computer system (e.g., and one or more other users of other computer systems). In some embodiments, the request to display the avatar of the user of the respective computer system occurs during a communication session that includes the user of the respective computer system (e.g., and one or more other users of other computer systems).
In response to detecting a request to display an avatar (1204) (e.g., user input 1106), the first computer system displays an avatar (1206) of a user of the respective computer system (e.g., 1110, 1112, 1114, 1116). In accordance with a determination that the user of the respective computer system is a registered user of the respective computer system (1208) (e.g., based on an option selected by the user and/or one or more user inputs indicating that the user is a registered user, and/or based on identifying the user as being a registered user), the first computer system displays an avatar having an appearance selected by the user of the respective computer system (e.g., based on information provided by the user during a registration process such as a biometric scan or avatar creation process) (e.g., avatars 1110, 1112, 1114), wherein the avatar moves based on movement of the user detected by one or more sensors of the respective computer system (1210). In some embodiments, facial features of the avatar move based on movements of the user's face detected by one or more sensors of the respective computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed in response to a determination that at least a portion of the respective computer system has been placed on the body of the user of the respective computer system. In some embodiments, biometric information (e.g., corresponding to a user of a respective computer system) is received (e.g., by and/or at the respective computer system) in response to detecting that the respective computer system has been placed on the body of the user of the respective computer system. In some implementations, the avatar is displayed within an XR environment (e.g., within the coexistence user interface 1108) (e.g., Virtual Reality (VR), Augmented Reality (AR), and/or Mixed Reality (MR)).
In accordance with a determination that the user of the respective computer system is not a registered user of the respective computer system (1212) (e.g., based on an option selected by the user and/or one or more user inputs indicating that the user is not a registered user, and/or based on identifying the user automatic biometric as not being a registered user), the first computer system displays an avatar (e.g., avatar 1116 of fig. 11F) having a placeholder appearance that does not represent the appearance of the user of the respective computer system (in some embodiments, and is not selected by the user of the respective computer system), wherein the avatar moves based on movement of the user detected by one or more sensors of the respective computer system (1214). In some embodiments, the avatar's features move based on movements of the user's body detected by one or more sensors of the respective computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is performed in response to a determination that at least a portion of the respective computer system has been placed on the body of the user of the respective computer system. In some embodiments, biometric information (e.g., corresponding to a user of a respective computer system) is received (e.g., by and/or at the respective computer system) in response to detecting that the respective computer system has been placed on the body of the user of the respective computer system. In some embodiments, the respective computer system is a first computer system and the user of the respective computer system is a user of the first computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is a determination that the user of the first computer system is a registered user of the first computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed at and/or by the first computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is performed at and/or by the first computer system. In some embodiments, the respective computer system is a second computer system different from the first computer system, and the user of the respective computer system is a user of the second computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is a determination that the user of the second computer system is a registered user of the second computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed at and/or by the second computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is a determination that the user of the second computer system is not a registered user of the second computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is performed at and/or by the second computer system.
Displaying an avatar having a particular appearance based on a determination that the user of the respective computer system is a registered user of the respective computer system provides feedback to the user regarding the current state of the device (e.g., the user of the respective computer system is a registered user (e.g., a particular registered user)). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
Automatically displaying an avatar having a particular appearance based on a determination that the user of the respective computer system is a registered user of the respective computer system provides the device with the ability to switch between different avatars associated with different users without requiring complex and/or extensive user input. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
Displaying an avatar having a placeholder appearance when it is determined that the user is not a registered user provides security. Displaying an avatar with a placeholder appearance when it is determined that the user is not a registered user also enhances operability of the device and makes the user-device interface more efficient and/or secure (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, an avatar (e.g., 1112) visually represents the user (1216) of the first computer system (e.g., the respective computer system is the first computer system) (e.g., avatar 1112 visually represents user 703, avatar 1114 visually represents user 1103). Displaying an avatar representing the user of the first computer system provides feedback to the user regarding the current state of the device (e.g., the first computer system has identified the user of the first computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the avatar (e.g., 1110) visually represents a user (1218) of a second computer system different from the first computer system (e.g., the respective computer system is a second computer system different from the first computer system) (e.g., avatar 1110 visually represents a user of a second computer system different from electronic device 700). Displaying an avatar representing the user of the second computer system provides feedback to the user regarding the current state of the device (e.g., the second computer system has identified the user of the second computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, an avatar (1220) (e.g., at a remote computer system being used by a remote user represented by avatar 1110) is displayed (e.g., to a user of the one or more computer systems) at the one or more computer systems of the one or more users that are interacting with the user of the respective computer systems (e.g., one or more users in a co-existence communication session with the user of the respective computer systems and/or one or more users that are virtually in the same virtual environment as the user of the respective computer systems). Displaying avatars at one or more computer systems of one or more users interacting with users of the respective computer systems provides feedback to those users regarding the current state of the device (e.g., the respective computer systems have identified users of the respective computer systems). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, at least a portion of an avatar (e.g., the avatar's hands, head, and/or body) is displayed via a display generation component in communication with a respective computer system (1222). In some embodiments, the respective computer system is a first computer system (e.g., device 700), and the display generating component in communication with the respective computer system is a display generating component (e.g., display 702) in communication with the first computer system. In some embodiments, the respective computer system is a second computer system different from the first computer system, and the display generating component in communication with the respective computer system is a second display generating component different from the display generating component in communication with the first computer system. In some embodiments, the avatar represents a user of the first computer system (e.g., user 703, user 1103) and the first computer system displays at least a portion of the avatar (e.g., the avatar's hands and/or body are displayed for viewing by the user of the first computer system) via the display generating component (e.g., avatar 1112, avatar 1114). Displaying an avatar representing the user of the first computer system provides feedback to the user regarding the current state of the device (e.g., the first computer system has identified the user of the first computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the registered user is a first registered user and the appearance is a first appearance. In accordance with a determination that the user of the respective computer system is a second registered user (e.g., user 1103) of the respective computer system that is different from the first registered user (e.g., user 703) (e.g., based on an option selected by the user and/or one or more user inputs indicating that the user is the second registered user, and/or based on automatic biometric identification of the user as the second registered user), an avatar (e.g., avatar 1114) having a second appearance that is different from the first appearance (e.g., avatar 1112) is displayed, wherein the second appearance is selected by the second registered user (e.g., based on information provided by the second registered user during a registration process such as a biometric scan or avatar creation process). In some implementations, the avatar is displayed within an XR environment (e.g., Virtual Reality (VR), Augmented Reality (AR), and/or Mixed Reality (MR)). In some embodiments, the avatar moves based on movements of the user detected by one or more sensors of the respective computer system. In some embodiments, facial features of the avatar move based on movements of the user's face detected by one or more sensors of the respective computer system. Displaying an avatar having a second appearance selected by a second registered user provides feedback to the user regarding the current state of the device (e.g., the respective computer system has identified the user of the respective computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed based on automatic biometric identification of the user as the registered user, wherein the automatic biometric identification comprises an eye-based identification (e.g., an iris-based identification). In some embodiments, the respective computer system is a first computer system (e.g., device 700), and the user of the respective computer system is a user of the first computer system (e.g., users 703, 1103). In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is a determination that the user of the first computer system is a registered user of the first computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed at and/or by the first computer system. In some embodiments, the automatic biometric identification of the user as a registered user is performed at and/or by the first computer system. In some embodiments, the method further comprises: after detecting that at least a portion of the computer system has been placed on the body of the respective user, the respective user is identified as a registered user of the respective computer system (e.g., a registered user of the first computer system) based on an automatic biometric identification, wherein the automatic biometric identification includes an eye-based identification.
In some embodiments, the respective computer system is a second computer system (e.g., a remote computer system in communication with device 700) that is different from the first computer system, and the user of the respective computer system is a user of the second computer system (e.g., a remote user represented by avatar 1110). In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is a determination that the user of the second computer system is a registered user of the second computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is a determination that the user of the second computer system is not a registered user of the second computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed at and/or by the second computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is performed at and/or by the second computer system. In some embodiments, the automatic biometric identification of the user as a registered user and/or as an unregistered user is performed at and/or by the second computer system (e.g., not performed by the first computer system).
In some implementations, the respective computer system is a headset system (e.g., a headset). In some embodiments, automatic biometric identification of the user is performed in response to a determination that the respective computer system has been placed on the user's head. In some embodiments, the eye-based identification is performed by one or more eye-tracking devices in communication with (e.g., incorporated in) the respective computer system. In some embodiments, iris scan information is collected by a respective computer system in response to a determination that the respective computer system has been placed on the user's head.
Automatically identifying the user based on the biometric identification provides the device with the ability to perform various actions without explicit user input (e.g., automatically identifying the user and applying an appropriate (e.g., user-selected avatar)). Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, automatic biometric identification is performed automatically in response to a determination that at least a portion of the respective computer system has been placed on the body of the user (e.g., in response to a determination that the respective computer system has been worn by the user) (e.g., in FIG. 11A, automatic biometric identification is performed automatically in response to a determination that the electronic device 700 has been placed on the body of the user 703; in FIG. 11D, automatic biometric identification is performed automatically in response to a determination that the electronic device 700 has been placed on the body of the user 1103).
In some implementations, the respective computer system is a headset system (e.g., a headset). In some embodiments, the automatic biometric identification of the user is performed automatically in response to a determination that the respective computer system has been placed on the user's head. In some embodiments, biometric information (e.g., iris scan information, facial scan information) is automatically collected by the respective computer system in response to a determination that the respective computer system has been placed on the user's head.
In some embodiments, the respective computer system is a first computer system (e.g., device 700), and the user of the respective computer system is a user of the first computer system (e.g., users 703, 1103). In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is a determination that the user of the first computer system is a registered user of the first computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed at and/or by the first computer system. In some embodiments, the automatic biometric identification of the user as a registered user is performed at and/or by the first computer system. In some embodiments, the method further comprises: in response to detecting that at least a portion of the computer system has been placed on the body of the respective user, the respective user is identified as a registered user of the respective computer system (e.g., a registered user of the first computer system) based on the automatic biometric identification, wherein the automatic biometric identification includes an eye-based identification.
In some embodiments, the respective computer system is a second computer system (e.g., a remote computer system in communication with device 700) that is different from the first computer system, and the user of the respective computer system is a user of the second computer system (e.g., a remote user represented by avatar 1110). In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is a determination that the user of the second computer system is a registered user of the second computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is a determination that the user of the second computer system is not a registered user of the second computer system. In some embodiments, the determination that the user of the respective computer system is a registered user of the respective computer system is performed at and/or by the second computer system. In some embodiments, the determination that the user of the respective computer system is not a registered user of the respective computer system is performed at and/or by the second computer system. In some embodiments, the automatic biometric identification of the user as a registered user and/or as an unregistered user is performed at and/or by the second computer system (e.g., not performed by the first computer system).
Automatically identifying the user based on biometric identification when the computer system has been placed on the user's body provides the device with the ability to perform various actions without explicit user input (e.g., automatically identifying the user and applying an appropriate (e.g., user-selected avatar)). Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the registered user is a first user and the appearance is a first appearance selected by the first user. The computer system detects, via one or more input devices, that the computer system has been removed from the first user's body (e.g., the user has stopped wearing the computer system) (e.g., fig. 11C). After detecting that the computer system has been removed from the body of the first user, the computer system detects, via one or more input devices, that the computer system has been placed on the body of the respective user (e.g., the respective user has worn the computer system) (e.g., fig. 11D). In response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the first user is no longer a user of the respective computer system (e.g., in accordance with a determination that at least a portion of the respective computer system has been removed from the body of the first registered user (in some embodiments, in accordance with a determination that the respective computer system is no longer being worn by the first registered user)), the computer system ceases to display an avatar having the first appearance selected by the first user (e.g., FIG. 11E, avatar 1112 is no longer displayed). In response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the user of the respective computer system is a second user (e.g., user 1103) of the respective computer system that is different from the first user (e.g., user 703) (e.g., in accordance with a determination that at least a portion of the respective computer system has been placed on the body of the second registered user (in some embodiments, in accordance with a determination that the second registered user has worn the computer system)), the computer system displays an avatar (e.g., avatar 1114) having a second appearance (e.g., selected by the second registered user and that is different from the first appearance). In some implementations, the avatar is displayed within an XR environment (e.g., virtual Reality (VR), augmented Reality (AR), and/or Mixed Reality (MR)). In some embodiments, the avatar moves based on movements of the user detected by one or more sensors of the respective computer system. In some embodiments, facial features of the avatar move based on movements of the user's face detected by one or more sensors of the respective computer system. In some embodiments, displaying the avatar having the second appearance includes replacing the display of the avatar having the first appearance with the display of the avatar having the second appearance. In some embodiments, the method further comprises: in response to detecting that the computer system has been placed on the body of the respective user, biometric information (e.g., corresponding to the respective user) is received via the one or more input devices. Techniques for handling devices between users (and/or removing/replacing devices between different users/the same user) are further described with reference to fig. 13A-13K and corresponding descriptions. The techniques described with reference to fig. 13A through 13K may be implemented with reference to the techniques described with reference to fig. 11A through 11F.
Displaying an avatar having a second appearance based on a determination that the user of the respective computer system is a second user provides feedback to the user regarding the current state of the device (e.g., the respective computer system has identified the user of the respective computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
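The wearer-to-wearer transition could be sketched as a small controller; the type and its states are assumptions for illustration only:

```swift
import Foundation

enum Wearer: Equatable {
    case registered(id: String)
    case guest
}

// Tracks whose avatar is currently shown in the shared coexistence environment
// and swaps it when the device changes hands.
final class CoexistenceAvatarController {
    private(set) var displayedAvatarOwner: Wearer?

    func deviceRemoved() {
        displayedAvatarOwner = nil       // stop showing the previous wearer's avatar
    }

    func devicePlacedOn(_ wearer: Wearer) {
        // Identification (e.g., biometric) has already produced `wearer`; the avatar
        // shown to other participants is replaced with the new wearer's avatar
        // (or a placeholder for a guest).
        displayedAvatarOwner = wearer
    }
}
```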
In some implementations, the placeholder appearance is an abstract representation (e.g., avatar 1116) (e.g., geometry, point cloud, blurred figure, non-humanoid shape). In some embodiments, one or more visual characteristics of the avatar move based on movements of the user's face detected by one or more sensors of the respective computer system. Displaying an avatar having a placeholder appearance that is an abstract representation based on a determination that the user of the respective computer system is not a registered user of the respective computer system provides feedback to the user regarding the current state of the device (e.g., the user of the respective computer system is not a registered user). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
It is noted that the details of the process described above with reference to method 1200 (e.g., fig. 12A-12B) also apply in a similar manner to the methods described elsewhere herein. For example, methods 800, 1000, and/or 1400 optionally include one or more features of the various methods described above with reference to method 1200. For example, a user-specific avatar may be automatically applied as part of one or more settings associated with the user, as recited in method 800, and/or a user-specific avatar may be automatically applied with a set of device calibration settings specific to the user, as recited in method 1000. As another example, when a device is handed over between users, user-specific avatars may be applied and/or not applied based on automatic user identification, as recited in method 1400. For the sake of brevity, these details are not repeated hereinafter.
Fig. 13A-13K illustrate exemplary user interfaces for displaying content based on handover criteria according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 14A-14B.
Fig. 13A depicts an electronic device 700 that is a smart watch that includes a touch-sensitive display 702, a rotatable and depressible input mechanism 704 (e.g., rotatable and depressible relative to a housing or frame of the device), buttons 706, and a camera 708. In some embodiments described below, the electronic device 700 is a wearable smart watch device. In some embodiments, the electronic device 700 is a smart phone, tablet, headset system (e.g., a headset), or other computer system that includes and/or communicates with a display device (e.g., a display screen, a projection device, etc.). Electronic device 700 is a computer system (e.g., computer system 101 in fig. 1).
In fig. 13A, a user 703 is wearing an electronic device 700. The electronic device 700 has identified the user 703 as a first registered user (e.g., via login credentials and/or password input, and/or via automated biometric identification, as discussed above). The electronic device 700 displays a user avatar 1302 to indicate that the electronic device 700 is being used by a first registered user. The electronic device 700 also displays a video player user interface 1304 that includes video content 1306a, a main desktop affordance 1306b, a multi-tasking affordance 1306c, a sharing affordance 1306d, and a play/pause affordance 1306e. The main desktop affordance 1306b may be selected by a user to navigate to a main desktop user interface (e.g., replace the display of video player user interface 1304 with a main desktop user interface). The multi-tasking affordance 1306c may be selected by a user to navigate to the multi-tasking user interface (e.g., replace the display of the video player user interface 1304 with the multi-tasking user interface). The sharing affordance 1306d may be selected by a user to share content (e.g., to share video content 1306a) (e.g., to display a content sharing user interface). The play/pause affordance 1306e is selectable by a user to pause and/or play video content 1306a.
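For illustration only, a rough SwiftUI sketch of a screen with the affordances just described; the layout, labels, and actions are assumptions rather than the actual interface:

```swift
import SwiftUI

struct VideoPlayerScreen: View {
    @State private var isPlaying = true

    var body: some View {
        VStack {
            // Stand-in for the video content area.
            Text("Video content")
                .frame(maxWidth: .infinity, maxHeight: 200)
                .background(Color.black.opacity(0.8))
                .foregroundColor(.white)

            HStack {
                Button("Home") { /* navigate to the main desktop user interface */ }
                Button("Multitask") { /* navigate to the multitasking user interface */ }
                Button("Share") { /* present a content sharing user interface */ }
                Button(isPlaying ? "Pause" : "Play") { isPlaying.toggle() }
            }
        }
    }
}
```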
In fig. 13B, while the user 703 is still wearing the electronic device 700, the electronic device 700 receives an indication of a new message for the first registered user. The electronic device 700 displays a notification 1308 corresponding to the new message overlaid on the video player user interface 1304.
Fig. 13C depicts the user 703 removing the electronic device 700 from his body. The electronic device 700 detects that the electronic device 700 is no longer positioned on the body of the user. In response to detecting that the electronic device 700 is no longer positioned on the user's body, the electronic device 700 optionally ceases to display the video player user interface 1304.
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 13C depicts the user 703 removing the smart watch from his wrist. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In some such embodiments, the head-mounted system detects that it has been removed from the user's head.
Fig. 13D-13K depict various exemplary scenarios that optionally occur after removal of the electronic device 700 from the body of the user 703.
Fig. 13D depicts a first exemplary scenario in which after the electronic device 700 is removed from the body of the user 703 (and without any intervening users), the same user 703 wears the electronic device 700 back on his body. The electronic device 700 detects and/or determines that the electronic device 700 has been placed on the user's body and identifies the user as a first registered user 703 (e.g., via automated biometric identification and/or login credentials).
In fig. 13D, in response to determining that the electronic device 700 has been placed on the body of the first registered user 703 (e.g., in response to determining that the electronic device 700 has been placed on the body of the first registered user 703, which is also the last previous user of the electronic device 700), the electronic device 700 redisplays the same user interface and/or content (e.g., video player user interface 1304) that was displayed immediately prior to the removal of the electronic device 700 from the body of the user.
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 13D depicts the user 703 placing the smart watch on his wrist. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In some such embodiments, the user 703 places the head-mounted system on his head, and the head-mounted system detects that the head-mounted system has been placed on the user's head. In some embodiments, in response to determining that the head-mounted system has been placed on the user's head, the head-mounted system automatically identifies the user (e.g., based on automatic biometric identification). In the scenario depicted in fig. 13D, the head-mounted system determines that the user is the first registered user 703 and redisplays the same user interface and/or content that was displayed immediately prior to the head-mounted system being removed from the body of the user 703. In this way, when the user removes the head-mounted system and then re-wears the head-mounted system (e.g., without any intervening users), the user may return to his or her previous viewing experience.
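As a minimal sketch (and not the disclosed implementation), the behavior of figs. 13C-13D, in which the same user re-wears the device and the previous user interface is redisplayed, could be organized along the following lines; the UserIdentity and SessionStore types and the string-based interface identifier are assumptions introduced for illustration.

```swift
import Foundation

// Illustrative sketch: remember what was displayed when the device was
// removed, and restore it only if the same user puts the device back on.
enum UserIdentity: Equatable {
    case registered(id: String)
    case unregistered
}

struct SessionStore {
    private(set) var lastUser: UserIdentity? = nil
    private(set) var lastInterface: String? = nil   // e.g. "videoPlayer" for user interface 1304

    // Called when the device detects it has been removed from a user's body.
    mutating func deviceRemoved(from user: UserIdentity, showing interface: String) {
        lastUser = user
        lastInterface = interface
    }

    // Called when on-body detection fires and biometric identification completes.
    // Returns the interface to restore, or nil if a different flow should run.
    func interfaceToRestore(for wearer: UserIdentity) -> String? {
        guard wearer == lastUser else { return nil }
        return lastInterface
    }
}
```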
Fig. 13E depicts a second exemplary scenario in which a different user 1103 wears the electronic device 700 on her body after the electronic device 700 is removed from the body of the user 703 (e.g., without any intervening user wearing the electronic device 700). The electronic device 700 detects and/or determines that the electronic device 700 has been placed on the body of a user, and determines that the user (e.g., user 1103) is a different user than the last previous user (e.g., user 703). In addition, in the scenario of fig. 13E, the electronic device 700 detects and/or determines that the handover criteria have not been met.
In some implementations, the handover criteria optionally include, for example, criteria that are met when the electronic device 700 does not receive: user input corresponding to a request to lock the electronic device 700, user input corresponding to a request to turn off the electronic device 700, and/or user input corresponding to a request to put the electronic device 700 to sleep (e.g., when such user input is not received by the electronic device 700 during a predefined period of time, such as in a period of time between the user 703 removing the electronic device 700 and the user 1103 wearing the electronic device 700). For example, if the user 703 provides user input corresponding to a request to lock the electronic device 700 before the user 1103 wears the electronic device 700, the handover criteria will not be met. Such user input is optionally provided, for example, within a digital user interface and/or via physical buttons (e.g., the button 706, the rotatable and depressible input mechanism 704, etc.). In some implementations, the handover criteria are met when all criteria of the handover criteria are met. In embodiments where the electronic device 700 is a different device, such as a head-mounted system, similar handoff criteria may be applied. For example, in some implementations, the handover criteria may include criteria that are met when the electronic device 700 does not receive: user input corresponding to a request to lock the head-mounted system, user input corresponding to a request to turn off the head-mounted system, and/or user input corresponding to a request to put the head-mounted system to sleep (e.g., when such user input is not received by the head-mounted system during a predefined period of time, such as in a period of time between the user 703 removing the head-mounted system and the user 1103 wearing the head-mounted system). Such user input is optionally provided, for example, within a digital user interface (e.g., a virtual environment displayed by the head-mounted system) and/or via physical buttons (e.g., physical buttons on the head-mounted system).
In some embodiments, the handover criteria optionally include criteria that are met when the time elapsed between detecting that the electronic device 700 has been removed from the body of a first user (e.g., user 703) and detecting that the electronic device 700 has been placed on the body of a subsequent user (e.g., user 1103) is less than a threshold period of time. For example, if the time elapsed between the user 703 removing the electronic device 700 and the user 1103 wearing the electronic device 700 is greater than the threshold period of time, the handover criteria will not be met. In embodiments where the electronic device 700 is a different device, such as a head-mounted system, similar handoff criteria may be applied. For example, in some implementations, the handover criteria may include criteria that are met when the time elapsed between detecting that the head-mounted system has been removed from the body of the first user (e.g., user 703) (e.g., removed from the head of the first user) and detecting that the head-mounted system has been placed on the body of a subsequent user (e.g., user 1103) (e.g., placed on the head of the subsequent user) is less than a threshold period of time.
In some implementations, the handover criteria optionally include criteria that are met when the previous user (e.g., the first registered user 703) is a registered user. For example, if the previous user (e.g., user 703) was an unregistered user using the electronic device 700 in a guest mode, then the handoff criteria will not be met, and the subsequent user 1103 will not be able to view the user interface 1304, even with restricted capabilities. In embodiments where the electronic device 700 is a different device, such as a head-mounted system, similar handoff criteria may be applied.
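Taken together, the example criteria above amount to a simple check. The following Swift sketch is an assumption-laden illustration rather than the disclosed implementation: the HandoffContext fields and the five-minute threshold are invented for the example, and an actual system could define and combine the criteria differently.

```swift
import Foundation

// Illustrative handover-criteria check combining the three example criteria
// described above; field names and the default threshold are assumptions.
struct HandoffContext {
    var previousUserIsRegistered: Bool
    var removedAt: Date
    var wornAgainAt: Date
    var lockRequestedWhileOff: Bool              // request to lock the device
    var powerOffOrSleepRequestedWhileOff: Bool   // request to turn off or sleep
}

func handoffCriteriaMet(_ ctx: HandoffContext,
                        threshold: TimeInterval = 5 * 60) -> Bool {
    // Criterion: no lock, power-off, or sleep request between removal and re-wear.
    guard !ctx.lockRequestedWhileOff,
          !ctx.powerOffOrSleepRequestedWhileOff else { return false }
    // Criterion: the device was re-worn within the threshold period after removal.
    guard ctx.wornAgainAt.timeIntervalSince(ctx.removedAt) < threshold else { return false }
    // Criterion: the previous wearer was a registered user.
    return ctx.previousUserIsRegistered
}
```

In this sketch the handover criteria are met only when every individual criterion is met, mirroring the statement above that the handover criteria are met when all criteria of the handover criteria are met.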
In fig. 13E, in response to determining that the electronic device 700 has been placed on a different user's body than the first registered user 703, and in response to determining that the handoff criteria have not been met, the electronic device 700 foregoes displaying the video player user interface 1304 and displays different content (e.g., a different user interface). In the depicted example, the electronic device 700 has identified the user 1103 as a second registered user Sarah. In response to the determination, the electronic device 700 displays a personalized user interface 714 corresponding to the second registered user (personalized user interface 714 is discussed in more detail above with reference to fig. 7C). In an alternative scenario in which the electronic device 700 identifies the user 1103 as an unregistered user, the electronic device 700 displays, for example, a guest user interface (e.g., guest user interface 718 of fig. 7D) and/or a user selector user interface (e.g., user selector user interface 722 of fig. 7E).
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 13E depicts the user 1103 placing the smart watch on her wrist. However, as discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In some such embodiments, the user 1103 places the head-mounted system on her head, and the head-mounted system detects that the head-mounted system has been placed on the user's head. In some embodiments, in response to determining that the head-mounted system has been placed on the user's head, the head-mounted system automatically identifies the user (e.g., based on automatic biometric identification). In the scenario depicted in fig. 13E, the head-mounted system determines that the user is a second registered user 1103. The head-mounted system further determines that the handover criteria have not been met. In some embodiments, the head-mounted system displays content (e.g., a personalized user interface) corresponding to the second registered user 1103 in accordance with a determination that the user is a second registered user 1103 different from the previous user (user 703) and that the handover criteria have not been satisfied. In some implementations, content corresponding to the second registered user 1103 is presented by the head-mounted system within the three-dimensional virtual environment.
Fig. 13F depicts a third exemplary scenario in which, after the electronic device 700 is removed from the body of the user 703 (e.g., without any intervening user), the second user 1103 wears the electronic device 700 on her body and the handoff criteria have been met. The electronic device 700 detects that a user other than the first registered user (user 703) has worn the electronic device 700 and determines that the handover criteria are satisfied.
In fig. 13F, in response to determining that a user other than the first registered user 703 has donned the electronic device 700 and that the handoff criteria have been met, the electronic device 700 displays the user interface 1304 in a constrained mode. In some implementations, the electronic device 700 remains logged into the user account corresponding to the previous user/first registered user 703 while the user interface 1304 is displayed in the constrained mode.
In the constrained mode, the subsequent user 1103 is able to view content (e.g., the video player user interface 1304 and video content 1306a) that was previously being viewed by the user 703. However, access to this content is subject to some constraints. For example, in fig. 13F, the user 1103 is able to view the user interface 1304 and video content 1306a, but is not allowed to navigate away from the content. This is optionally done, for example, to prevent the user 1103 from accessing secure content or other private information belonging to the user 703. Thus, the main desktop affordance 1306b, the multi-tasking affordance 1306c, and the sharing affordance 1306d are disabled, and the user 1103 cannot access and/or select these affordances.
In fig. 13F, the electronic device 700 has identified the subsequent user 1103 as the second registered user Sarah. While the electronic device 700 remains logged into the user account associated with the previous user/first registered user, the electronic device 700 displays an avatar 1302 corresponding to the second registered user to indicate that the electronic device 700 is being operated by the second registered user. The electronic device 700 also displays an indicator 1310 to indicate that the electronic device 700 is operating in a constrained mode (e.g., the video player user interface 1304 is displayed in a constrained mode).
In some embodiments, when the electronic device 700 is operating in the constrained mode, the electronic device 700 ceases to apply one or more user settings associated with a previous user (e.g., the first registered user 703). For example, when the electronic device 700 is operating in the constrained mode, the electronic device 700 ceases to apply device calibration settings (e.g., eye movement calibration settings, hand movement calibration settings, head movement calibration settings) associated with (e.g., specific to) the first registered user. In some embodiments, when the electronic device 700 is operating in a constrained mode, the electronic device 700 applies a set of generic (e.g., default) device calibration settings. In some embodiments, rather than (or in addition to) applying the generic settings, the electronic device 700 stops applying user-specific settings for a previous user and may enable and/or begin applying user-specific settings for a subsequent user (e.g., user 1103). For example, in fig. 13F, the electronic device 700 optionally ceases to apply device calibration settings associated with the first registered user 703 and the electronic device 700 applies device calibration settings associated with and/or specific to the second registered user 1103.
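A minimal sketch of this calibration handling, assuming a hypothetical CalibrationSettings type and a per-user profile lookup that are not specified by the embodiments themselves, might look like the following.

```swift
import Foundation

// Illustrative calibration handling for the constrained mode: the previous
// user's calibration is no longer applied, and either the current wearer's
// own profile or a generic default profile is used instead.
struct CalibrationSettings {
    var eyeGaze: Double
    var handMovement: Double
    var headMovement: Double

    static let genericDefaults = CalibrationSettings(eyeGaze: 0, handMovement: 0, headMovement: 0)
}

func calibrationForConstrainedMode(currentWearer: String?,
                                   profiles: [String: CalibrationSettings]) -> CalibrationSettings {
    if let wearer = currentWearer, let profile = profiles[wearer] {
        return profile   // subsequent user's own calibration, if on file
    }
    return .genericDefaults
}
```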
In some embodiments, when the electronic device 700 is operating in the constrained mode, the electronic device 700 optionally maintains one or more user settings associated with and/or applied by a previous user (e.g., the first registered user 703). For example, when the second user 1103 operates the electronic device 700 in the constrained mode, one or more accessibility settings (e.g., font size settings, display size settings, accessibility zoom settings, accessibility gesture settings, and/or audio accessibility settings) applied by the first registered user 703 prior to handing over the electronic device 700 to the second user 1103 are maintained. This allows, for example, a previous user (e.g., user 703) to apply one or more accessibility settings appropriate to an intended subsequent user (e.g., user 1103) to make the viewing experience of the subsequent user more pleasant. In some implementations, the previous user (e.g., user 703) is provided with an option of whether to maintain the applied accessibility settings when the electronic device 700 is operating in the constrained mode. In some embodiments, the accessibility settings remain available and/or accessible to subsequent users 1103 when the electronic device 700 is operating in a constrained mode.
In fig. 13G, the user 1103 presses the rotatable and depressible input mechanism 704 (user input 1314), and the electronic device 700 detects a user input 1314 corresponding to a press and/or activation of the rotatable and depressible input mechanism 704. In an unconstrained experience, as presented in fig. 13A, 13B, and 13D, user input 1314 would typically cause the electronic device 700 to navigate away from the video player user interface 1304 to the main desktop user interface. However, because the electronic device 700 is operating in the constrained mode, the user 1103 is prohibited from navigating away from the user interface 1304. Thus, the electronic device 700 foregoes navigating away from the video player user interface 1304 despite detecting the user input 1314.
In fig. 13H, the user 1103 rotates the input mechanism 704 (user input 1316), and the electronic device 700 detects the user input 1316 corresponding to the rotation of the input mechanism 704. While certain functions are constrained when the electronic device 700 is operating in a constrained mode, other operations are optionally unconstrained. For example, certain system controls (such as volume controls and/or display brightness controls) remain accessible to the user 1103 because these controls do not provide access to sensitive or private information. Thus, in fig. 13H, in response to detecting the user input 1316, the electronic device 700 increases the volume setting and displays the volume slider 1318. In some embodiments, one or more accessibility settings are also accessible to the user 1103 when the electronic device 700 is operating in the constrained mode.
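The input handling of figs. 13G-13H, in which navigation is blocked while system controls such as volume remain available, could be sketched as follows; the event and action enumerations are illustrative assumptions rather than an actual input API.

```swift
import Foundation

// Illustrative routing of crown input while the device shows content in the
// constrained mode: presses that would navigate away are ignored, while
// rotation still adjusts the volume because it exposes no private content.
enum CrownEvent {
    case press                    // e.g. user input 1314
    case rotate(delta: Double)    // e.g. user input 1316
}

enum DeviceAction {
    case navigateToMainDesktop
    case adjustVolume(by: Double)
    case none
}

func handle(_ event: CrownEvent, constrainedMode: Bool) -> DeviceAction {
    switch event {
    case .press:
        return constrainedMode ? .none : .navigateToMainDesktop
    case .rotate(let delta):
        return .adjustVolume(by: delta)
    }
}
```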
Fig. 13F-13G depict an exemplary embodiment in which the electronic device 700 is a smart watch, the handoff criteria are met for a subsequent user (e.g., user 1103), and the electronic device 700 accordingly operates in a constrained mode. As discussed above, in some embodiments, the electronic device 700 is a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In some such embodiments, the user 1103 places the head-mounted system on her head, and the head-mounted system detects that the head-mounted system has been placed on the user's head. In some embodiments, in response to determining that the head-mounted system has been placed on the user's head, the head-mounted system automatically identifies the user (e.g., based on automatic biometric identification). In the scenario depicted in fig. 13F, the head-mounted system determines that the user is a second registered user 1103. The head-mounted system further determines that the handover criteria have been met. In some embodiments, the head-mounted system displays content previously accessed by the previous user (user 703) in a constrained mode based on a determination that the user is a second registered user 1103 different from the previous user (user 703) and that the handover criteria have been met. For example, if a previous user is viewing video content within the three-dimensional virtual environment (e.g., in an unconstrained mode that enables access to one or more features), the head-mounted system displays the same video content for subsequent users within the three-dimensional virtual environment, but in a constrained mode (e.g., constrained access to at least some of the one or more features). In some implementations, the features of the constrained mode described above and the additional features described below with reference to the electronic device 700 are also applicable to the head-mounted system. For example, in some embodiments, one or more system controls and accessibility settings are accessible to subsequent users while the head-mounted system is operating in a constrained mode, but certain functions, such as navigating away from the displayed application and/or user interface, are constrained. In this way, a user using the head-mounted system (e.g., user 703) may remove the head-mounted system and pass the head-mounted system to another subsequent user (e.g., user 1103) in order to share the content he or she is viewing without the risk of the subsequent user accessing personal or sensitive information.
In addition to the electronic device 700, fig. 13I also depicts an electronic device 1350. The electronic device 1350 is a smart phone with a touch screen display 1352. In the embodiments described below, the electronic device 1350 is a smart phone. In some embodiments, the electronic device 1350 is a tablet, smart watch, laptop, or other computer system that includes and/or communicates with a display device (e.g., a display screen, a projection device, etc.).
In fig. 13I, an electronic device 1350 is associated with the first registered user 703. For example, the electronic device 1350 is logged into a user account associated with the first registered user 703. As discussed above, the electronic device 700 is also logged into a user account associated with the first registered user 703 (e.g., the same user account as logged into the electronic device 1350), while the second registered user 1103 operates the electronic device 700 in a constrained mode. In fig. 13I, in response to determining that electronic device 700 is operating in the constrained mode, electronic device 700 transmits (e.g., via a direct connection and/or via a network) a notification to electronic device 1350 that electronic device 700 is operating in the constrained mode. The electronic device 1350 displays a notification 1354 that the electronic device 700 is operating in a constrained mode. The electronic device 1350 detects a user input 1356 corresponding to a selection of the notification 1354. As discussed above, in some embodiments, the electronic device 700 is a different device, such as a head-mounted system. In some implementations, when the head-mounted system is operating in the constrained mode, the head-mounted system is associated with the first registered user 703 (e.g., logs into a user account associated with the first registered user). In some implementations, in response to determining that the head-mounted system is operating in the constrained mode, the head-mounted system transmits a notification to another computer system (e.g., device 1350) associated with the first registered user 703 that the head-mounted system is operating in the constrained mode.
In fig. 13J, in response to the user input 1356, the electronic device 1350 displays a device mirror user interface 1358. The device mirror user interface 1358 displays content that is being displayed on the electronic device 700 while the electronic device 700 is operating in the constrained mode. The electronic device 700 transmits content information to the electronic device 1350 (e.g., via a direct connection and/or via a network) so that the electronic device 1350 can display the content in the device mirror user interface 1358. In this way, the first registered user 703 may monitor (on the electronic device 1350) what content the second user 1103 is viewing on the electronic device 700 while the electronic device 700 is logged into the user account of the first registered user and operating in the constrained mode.
As discussed above, in some embodiments, the electronic device 700 is a different device, such as a head-mounted system. In some implementations, the head-mounted system transmits content information to a second device associated with a previous user (e.g., the first registered user 703) so that the second device (e.g., the electronic device 1350) can display the content in the device mirrored user interface. In this way, a previous user (e.g., the first registered user 703) may monitor (e.g., on the electronic device 1350) what content is being viewed on the head-mounted system by a subsequent user (e.g., the second registered user 1103) when the head-mounted system logs into the user account of the first registered user and operates in a constrained mode.
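The companion-device behavior of figs. 13I-13J could be supported by a status message of roughly the following shape; the Codable struct and the transport closure are assumptions for illustration, and no particular device-to-device framework is implied.

```swift
import Foundation

// Illustrative status payload a wearable might send to the owner's phone when
// it enters the constrained mode, enabling a notification (e.g., 1354) and a
// device mirror user interface (e.g., 1358) on the phone.
struct ConstrainedModeStatus: Codable {
    let ownerAccountID: String          // account the wearable remains logged into
    let currentWearerName: String?      // e.g. "Sarah", if identified as registered
    let displayedContentTitle: String?  // what is being shown in the constrained mode
    let enteredConstrainedModeAt: Date
}

func notifyCompanionDevice(_ status: ConstrainedModeStatus,
                           send: (Data) -> Void) throws {
    // Encode the status and hand it to whatever device-to-device channel is in use.
    let payload = try JSONEncoder().encode(status)
    send(payload)
}
```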
In fig. 13K, both the electronic device 700 and the electronic device 1350 receive information indicating a new message for the first registered user. The electronic device 1350 displays a notification 1360 corresponding to the new message for the first registered user. However, the electronic device 700 foregoes displaying a notification corresponding to the new message for the first registered user because the electronic device 700 is operating in the constrained mode. As discussed above, in some embodiments, the electronic device 700 is a different device, such as a head-mounted system. In some implementations, similar to the electronic device 700, the head-mounted system may forgo displaying the notification when the head-mounted system is operating in a constrained mode.
In the depicted embodiment, the electronic device 700 is a smart watch, and fig. 13A-13K depict the users 703, 1103 wearing the smart watch on their wrists or removing the smart watch from their wrists, and the content being displayed on the smart watch. However, as discussed above, in some embodiments, the electronic device 700 is optionally a different device designed to be worn on the head of a user, such as a head-mounted system (e.g., a headset). In such embodiments, the electronic device 700 optionally attempts to automatically identify the user wearing the device upon determining that the electronic device 700 has been placed on the user's head (e.g., based on iris recognition and/or facial recognition when the device is placed on the user's head). Additionally, in such embodiments, content such as user interface 1304 is optionally displayed via the head-mounted system, and one or more user inputs are optionally received via one or more input devices in communication with the head-mounted system. Similarly, the head-mounted system may operate in a normal, unconstrained mode (e.g., fig. 13A-13B, 13D), or in a constrained mode (e.g., fig. 13F-13K). In some implementations, an external portion of the head-mounted system (e.g., an external display separate from an internal display visible only to the operating user) may display an indication of when the head-mounted system is operating in a constrained mode, and may also display an indication of who (e.g., a user name) is operating the head-mounted system in a constrained mode and/or what content the head-mounted system is displaying in a constrained mode. In some implementations, device calibration settings are applied for a head-mounted system and one or more input devices in communication with the head-mounted system. For example, the device calibration settings include an eye gaze calibration setting, a head movement calibration setting, a hand and/or arm movement calibration setting, a torso calibration setting, and/or a foot and/or leg calibration setting. In some embodiments, the electronic device 700 applies the device calibration settings associated with the first user when the device is being used by the first user (e.g., user 703), and the electronic device 700 stops applying the device calibration settings associated with the first user when the electronic device is handed over from a previous user (e.g., user 703) to a subsequent user (e.g., user 1103). In some implementations, device calibration settings associated with a subsequent user (e.g., user 1103) are applied when the subsequent user operates the electronic device 700 in a constrained mode. In some embodiments, generic and/or default device calibration settings are applied when the electronic device 700 is operating in a constrained mode.
Fig. 14A-14B are flowcharts illustrating methods for displaying content using an electronic device based on handover criteria according to some embodiments. Method 1400 is performed at a computer system (e.g., 700, 101) in communication with a display generation component and one or more input devices. Some operations in method 1400 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, method 1400 provides an intuitive way for displaying content based on handover criteria. The method reduces the cognitive burden on the user in retrieving and displaying content, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling users to retrieve and display content more quickly and efficiently saves power and increases the time between battery charges.
In some embodiments, a computer system (e.g., device 700) (e.g., a smart phone, a smart watch, a tablet, a head-mounted system, and/or a wearable device) in communication with a display generating component (e.g., display 702) (e.g., a display controller, a touch-sensitive display system, a display (e.g., integrated and/or connected), a 3D display, a transparent display, a projector, and/or a heads-up display) and one or more input devices (e.g., 702, 704, 706, 708) (e.g., a touch-sensitive surface (e.g., a touch-sensitive display), a mouse, a keyboard, a remote control, a visual input device (e.g., a camera), an audio input device (e.g., a microphone), and/or a biometric sensor (e.g., a fingerprint sensor, facial recognition sensor, and/or iris recognition sensor)), while the computer system is placed on the body of a first user (e.g., user 703, fig. 13A) (1402) (e.g., while the computer system is worn by the first user) (in some embodiments, and while the computer system is logged into a first user account associated with the first user), displays, via the display generating component, a first user interface (e.g., user interface 1304) corresponding to a first application (e.g., a video player application displaying video content (e.g., a video program)) in a first mode with allowable access to a plurality of features associated with the first user (e.g., associated with a logged-in user experience) (e.g., fig. 13A-13B) (1404). When the first user interface is displayed in a first mode with allowable access to a plurality of features associated with the first user (1406), the computer system detects via one or more input devices that the computer system has been removed from the body of the first user (e.g., the user has stopped wearing the computer system) (e.g., fig. 13C) (1408). After detecting that the computer system has been removed from the first user's body (1410), the computer system detects, via one or more input devices, that the computer system has been placed on the respective user's body (e.g., the respective user has worn the computer system) (1412).
In response to detecting that the computer system has been placed on the body of the respective user (1414) (in some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, biometric information (e.g., corresponding to the respective user) is received via the one or more input devices), and in accordance with a determination that the biometric information received via the one or more input devices (e.g., a fingerprint, an image (e.g., photograph and/or scan) representing the face of the respective user, and/or iris identifying information (e.g., iris scan information)) corresponds to the first user (e.g., in accordance with a determination that the respective user is the first user, whether or not the set of handoff criteria has been met) (in some embodiments, the biometric information is received while the computer system is being worn by the respective user), the computer system displays, via the display generating component, the first user interface (e.g., 1304) in the first mode having allowable access to the plurality of features associated with the first user (e.g., user 703) (e.g., fig. 13D) (1418).
In response to detecting that the computer system has been placed on the body of the respective user (1414) (in some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, receiving biometric information (corresponding to the respective user) via the one or more input devices), and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user (e.g., in accordance with a determination that the respective user is not the first user) (e.g., the respective user is not the first user 703) and that a set of handoff criteria has been met (1420), the computer system displays the first user interface (e.g., the user interface 1304) (e.g., fig. 13F-13K) via the display generating component in a second mode (e.g., a constrained mode, a guest mode, and/or a handoff mode) having restricted access to one or more features of the plurality of features associated with the first user (1422). In some implementations, the second mode having restricted access to one or more features of the plurality of features associated with the first user prohibits access to a subset of the content accessible (e.g., accessible in the first mode) to the first user. In some implementations, the second mode with restricted access provides access only to the first user interface corresponding to the first application. In some embodiments, the second mode with restricted access provides access only to the first application (e.g., inhibits access to other applications).
Displaying the first user interface in the second mode with restricted access based on a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has been met enhances security and may prevent an unauthorized user from initiating sensitive operations (e.g., by allowing the user to view the user interface only within restricted access modes with fewer permissions). Displaying the first user interface in the second mode with restricted access based on a determination that the biometric information received via the one or more input devices does not correspond to the first user and has met the set of handoff criteria also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
Displaying the first user interface in the first mode with access permission based on a determination that the biometric information received via the one or more input devices corresponds to the first user provides the user with the ability to resume his or her viewing experience without additional user input. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the set of handoff criteria includes a first criterion that is met when the computer system does not receive user input corresponding to a request to lock the computer system before detecting that the computer system has been placed on the body of the respective user (e.g., the electronic device 700 does not receive user input corresponding to a request to lock the electronic device 700 during the time between the removal of the electronic device 700 by the user 703 in fig. 13C and the wearing of the electronic device 700 by a subsequent user 1103 in fig. 13E) (and optionally after detecting that the computer system has been removed from the body of the first user (e.g., during the time period between when the computer system was removed from the body of the first user and when the computer system was placed on the body of the respective user)). In some implementations, the first criterion is met when no user input corresponding to a request to lock the computer system is received within a period of time between two predefined events (e.g., a period of time after the computer system is removed from the body of the first user and before the computer system is placed on the body of the respective user). In some embodiments, the method further comprises: in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that biometric information received via the one or more input devices does not correspond to the first user and that user input corresponding to a request to lock the computer system has been received before detecting that the computer system has been placed on the body of the respective user (and optionally after detecting that the computer system has been removed from the body of the first user (e.g., during a period of time between the removal of the computer system from the body of the first user and the placement of the computer system on the body of the respective user)), the display of the first user interface (e.g., the display of the first user interface in the first mode or the second mode) is forgone (in some embodiments, the log-out user interface is displayed).
The handover criteria that are met when the computer system does not receive user input corresponding to a request to lock the computer system enhance security and may prevent an unauthorized user from initiating sensitive operations (e.g., by preventing unauthorized access when the user does provide input corresponding to a request to lock the computer system). The handover criteria met when the computer system does not receive user input corresponding to a request to lock the computer system also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, the set of handoff criteria includes a second criterion that is met when an elapsed time since the computer system was detected to have been removed from the body of the first user is less than a threshold period of time (e.g., the elapsed time between the user 703 in fig. 13C removing the electronic device 700 and the subsequent user 1103 wearing the electronic device 700 in fig. 13E is less than a threshold period of time) (e.g., the elapsed time since the computer system had been removed from the body of the first user is less than x seconds). In some embodiments, the method further comprises: in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the elapsed time since the detection of the computer system having been removed from the body of the first user is equal to or greater than the threshold period of time, the first user interface is forgone being displayed (e.g., the first user interface is forgone being displayed in the first mode or the second mode) (in some embodiments, the log-out user interface is displayed). The handover criteria that are met when the elapsed time since the computer system was detected to have been removed from the body of the first user is less than a threshold period of time enhance security and may prevent an unauthorized user from initiating a sensitive operation (e.g., by preventing an unauthorized user from accessing privileged information after the threshold period of time has elapsed since the computer system has been removed from the body of the first user). The handover criteria that are met when the elapsed time since the computer system was detected to have been removed from the body of the first user is less than a threshold period of time also enhance the operability of the device and make the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, the set of handoff criteria includes a third criterion that is met when the computer system is not turned off or placed in sleep mode after detecting that the computer system has been removed from the body of the first user and before detecting that the computer system has been placed on the body of the respective user (e.g., between the removal of the computer system from the body of the first user and the placement of the computer system on the body of the respective user) (e.g., the electronic device 700 is not turned off or placed in sleep mode during the time between the removal of the electronic device 700 by the user 703 in fig. 13C and the donning of the electronic device 700 by the subsequent user 1103 in fig. 13E). In some embodiments, the third criterion is met when no user input corresponding to a request to put the computer system to sleep or a user input corresponding to a request to shut down the computer system is received within a predefined period of time (e.g., a period of time after the computer system is removed from the body of the first user and the computer system is placed on the body of the respective user). In some embodiments, the method further comprises: in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that biometric information received via the one or more input devices does not correspond to the first user and that the computer system has been turned off or put in a sleep mode after detecting that the computer system has been removed from the body of the first user and before detecting that the computer system has been placed on the body of the respective user, the first user interface is forgone being displayed (e.g., the first user interface is forgone being displayed in the first mode or the second mode) (in some embodiments, the log-out user interface is displayed).
The handover criteria that are met when the computer system is not turned off or placed in sleep mode after removal from the body of the first user enhance security and may prevent unauthorized users from initiating sensitive operations (e.g., by preventing unauthorized access when the user does turn off the computer system or place the computer system in sleep mode). The handover criteria that are met when the computer system is not turned off or placed in sleep mode after removal from the first user's body also enhance operability of the device and make the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, the set of handover criteria includes a fourth criterion that is met when the first user (e.g., user 703) is a registered user (e.g., a user registered on a computer system and/or a user registered on a service). In some embodiments, the fourth criterion is not met if the first user is an unregistered user (e.g., a guest, a user not registered on the computer system, and/or a user not registered on the service). In some embodiments, moving the computer system from being worn by a registered user to being worn by an unregistered user (e.g., a guest user) will cause the first user interface to continue to be displayed in the first mode. In some implementations, subsequently moving the computer system from being worn by the unregistered user to being worn by another unregistered user causes the first user interface to cease to be displayed in the first mode (e.g., causes the first user interface to be displayed in the second mode). In some embodiments, the method further comprises: in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the first user is not a registered user, the first user interface is forgone being displayed (e.g., the first user interface is forgone being displayed in the first mode or the second mode) (in some embodiments, a log-out user interface is displayed).
The handover criteria that are met when the first user is a registered user enhance security and may prevent an unauthorized user from initiating sensitive operations (e.g., by preventing a guest/unauthorized user from providing access to another guest/unauthorized user). The handover criteria that are met when the first user is a registered user also enhance operability of the device and make the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user (1414), and in accordance with a determination (1424) that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has not been met, the computer system relinquishes displaying the first user interface (e.g., fig. 13E, displays user interface 714 instead of user interface 1304) (e.g., relinquishes displaying the first user interface in the first mode or the second mode) (in some embodiments, displays a log-out user interface). Discarding displaying the first user interface when the biometric information does not correspond to the first user and the set of handover criteria has not been met enhances security and may prevent an unauthorized user from initiating a sensitive operation. Discarding displaying the first user interface when the biometric information does not correspond to the first user and the set of handoff criteria has not been met also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has not been met, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to a previously registered user (e.g., a user that has previously been registered on the computer system) (e.g., in accordance with a determination that the respective user is not a registered user), the computer system displays a user interface (e.g., user interface 718 of fig. 7D) for the unregistered user that indicates that the respective user is not a registered user.
Displaying a user interface for an unregistered user based on a determination that the biometric information does not correspond to a previously registered user provides feedback for the user regarding a current state of the device (e.g., the computer system has determined that the user is an unregistered user). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has not been met, and in accordance with a determination that the biometric information received via the one or more input devices corresponds to a second registered user (e.g., user 1103) that is different from the first user (e.g., user 703) (e.g., a user that has previously been registered on the computer system) (e.g., in accordance with a determination that the respective user is a second registered user), a second user interface (e.g., personalized user interface 714 in fig. 13E) that is different from the first user interface (e.g., user interface 1304) is displayed, wherein the second user interface corresponds to the second registered user (e.g., a main desktop user interface of the second user and/or a previously displayed user interface of the second user). Displaying the second user interface based on a determination that the biometric information corresponds to the second registered user provides feedback to the user regarding the current state of the device (e.g., the computer system has identified the user as the second registered user). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the set of handoff criteria has not been met and that the biometric information received via the one or more input devices corresponds to the first user (e.g., user 703) (e.g., in accordance with a determination that the respective user is the first user), the computer system displays the first user interface (e.g., user interface 1304) via the display generating component in a first mode having allowable access to a plurality of features associated with the first user (e.g., fig. 13D). Displaying the first user interface in the first mode with access permission based on a determination that the biometric information received via the one or more input devices corresponds to the first user provides the user with the ability to resume his or her viewing experience without additional user input. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
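The branching described above for method 1400 can be condensed into a short sketch. The enumerations below are illustrative assumptions rather than claim language; the branch for an unregistered subsequent user when the handoff criteria are not met follows the unregistered-user interface described with reference to fig. 7D.

```swift
import Foundation

// Illustrative summary of the display decision after the device is placed on
// a wearer's body, based on biometric identification and the handoff criteria.
enum BiometricMatch {
    case firstUser              // biometric information corresponds to the first user
    case otherRegisteredUser    // corresponds to a different registered user
    case unregisteredUser       // does not correspond to any registered user
}

enum DisplayDecision {
    case firstInterfaceFullAccess     // first mode with allowable access
    case firstInterfaceConstrained    // second mode with restricted access
    case otherUsersPersonalInterface  // e.g. personalized user interface 714
    case unregisteredUserInterface    // e.g. user interface 718
}

func displayDecision(for match: BiometricMatch, handoffCriteriaMet: Bool) -> DisplayDecision {
    switch match {
    case .firstUser:
        // Same user: first mode regardless of whether the handoff criteria are met.
        return .firstInterfaceFullAccess
    case .otherRegisteredUser:
        return handoffCriteriaMet ? .firstInterfaceConstrained : .otherUsersPersonalInterface
    case .unregisteredUser:
        return handoffCriteriaMet ? .firstInterfaceConstrained : .unregisteredUserInterface
    }
}
```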
In some implementations, the second mode having restricted access to one or more of the plurality of features associated with the first user further includes maintaining one or more user settings associated with the first user (e.g., an avatar associated with the first user, and/or device calibration settings associated with the first user (e.g., a hand calibration setting, an eye calibration setting, a body calibration setting)). Maintaining one or more user settings associated with the first user allows the user to apply the one or more settings with the computer system without providing additional user input. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, when the computer system is placed on the body of the respective user and when the first user interface (e.g., user interface 1304) is displayed, the computer system receives navigational user input (e.g., user input corresponding to a request to navigate within and/or from the first user interface) (e.g., user input to navigate to (e.g., display) an application different from the first application, user input to navigate from the first user interface to a different user interface (e.g., display a different user interface), user input to navigate to a particular portion of the first user interface (e.g., display a particular portion of the first user interface), user input to access a particular feature within the first user interface (e.g., display a particular feature within the first user interface), and/or user input to access particular content within the first user interface (e.g., display particular content within the first user interface)) (e.g., user input 1314 and/or user input on any of icons 1306b, 1306c, 1306d). In response to receiving the navigational user input, and in accordance with a determination that the first user interface is displayed in a first mode having permitted access to a plurality of features associated with the first user, the computer system navigates through the user interface in accordance with the navigational user input (e.g., displaying an application different from the first application, displaying a particular portion of the first user interface, displaying a particular feature of the first user interface, and/or displaying particular content within the first user interface) (e.g., when the first user interface is displayed in the first mode having permitted access, the user input 1314 causes the electronic device 700 to replace the display of the user interface 1304 with a main desktop user interface). In response to receiving the navigational user input (e.g., user input 1314), and in accordance with a determination that the first user interface is displayed in a second mode having restricted access to one or more of the plurality of features associated with the first user, the computer system forgoes navigating through the user interface in accordance with the navigational user input (e.g., fig. 13G).
Allowing navigation within and/or from the first user interface when the first user interface is displayed in the first mode and disabling navigation when the first user interface is displayed in the second mode enhances security. For example, disabling navigation when the first user interface is displayed in the second mode may prevent an unauthorized user from initiating a sensitive operation or accessing sensitive information. Allowing navigation within and/or from the first user interface when the first user interface is displayed in the first mode and disabling navigation when the first user interface is displayed in the second mode also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, the computer system receives user input (e.g., user input 1314, user input 1316) when the computer system is placed on the body of the respective user and when the first user interface is displayed. In response to receiving the user input, and in accordance with a determination that the user input corresponds to a request to access a system control (e.g., user input 1316) (e.g., a volume control and/or a display brightness control) (e.g., user input to display a system control user interface (e.g., a volume control user interface and/or a display brightness user interface) and/or to modify a system control setting (e.g., a volume setting and/or a display brightness setting)), the computer system performs an operation associated with the system control (e.g., regardless of whether the first user interface is displayed in the first mode or the second mode). In response to receiving the user input, and in accordance with a determination that the user input corresponds to a request to access a non-system control and that the first user interface is displayed in a first mode having allowable access to a plurality of features associated with the first user (e.g., fig. 13A-13B), the computer system performs an operation associated with the non-system control. In response to receiving the user input, and in accordance with a determination that the user input corresponds to a request to access a non-system control and that the first user interface is displayed in a second mode having restricted access to one or more of the plurality of features associated with the first user (e.g., user input 1314, fig. 13G), the computer system relinquishes performing the operation associated with the non-system control.
In some embodiments, the method further comprises: in response to receiving the user input: in accordance with a determination that the user input is a navigational user input (e.g., a user input corresponding to a request to navigate within and/or from the first user interface) (e.g., a user input to navigate to (e.g., display) an application different from the first application, a user input to navigate from the first user interface to a different user interface (e.g., display a different user interface), a user input to navigate to a particular portion of the first user interface (e.g., display a particular portion of the first user interface), a user input to access a particular feature within the first user interface (e.g., display a particular feature within the first user interface), and/or a user input to access particular content within the first user interface (e.g., display particular content within the first user interface)): in accordance with a determination that the first user interface is displayed in a first mode having permitted access to a plurality of features associated with the first user, displaying a navigation effect corresponding to navigating the user interface (e.g., displaying an application different from the first application, displaying a user interface different from the first user interface, displaying a particular portion of the first user interface, displaying a particular feature of the first user interface, and/or displaying particular content within the first user interface); and in accordance with a determination that the first user interface is displayed in a second mode having restricted access to one or more of the plurality of features associated with the first user, forgoing displaying a navigation effect corresponding to the navigational user input.
Enabling a user to access one or more system controls while restricting access to other (e.g., more sensitive or user confidential) aspects of the system enhances the operability of the device, and makes the user-device interface more efficient (e.g., by allowing the user to configure the system for use by itself, by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
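The gating described in the preceding paragraphs can be summarized as a small decision table: system controls are honored regardless of mode, while navigation and other non-system requests are honored only in the first mode. The following Swift sketch is illustrative only and is not drawn from the disclosure; the names InterfaceMode, UserRequest, and route are hypothetical.

```swift
// Hypothetical sketch: system controls are honored in either mode,
// while navigation and other non-system requests require the full-access mode.
enum InterfaceMode {
    case fullAccess      // first mode: all features of the first user available
    case restrictedGuest // second mode: limited feature set for a non-owner
}

enum UserRequest {
    case systemControl(name: String)   // e.g., volume or brightness
    case navigation(target: String)    // e.g., open another app or view
    case nonSystemControl(name: String)
}

func route(_ request: UserRequest, mode: InterfaceMode) -> String {
    switch (request, mode) {
    case (.systemControl(let name), _):
        return "Perform system control: \(name)"        // allowed in both modes
    case (.navigation(let target), .fullAccess),
         (.nonSystemControl(let target), .fullAccess):
        return "Perform: \(target)"                      // allowed in the first mode only
    case (.navigation, .restrictedGuest),
         (.nonSystemControl, .restrictedGuest):
        return "Forgo operation (restricted mode)"       // no navigation effect shown
    }
}

// Example: a guest adjusting volume succeeds, but navigating away is ignored.
print(route(.systemControl(name: "volume up"), mode: .restrictedGuest))
print(route(.navigation(target: "home screen"), mode: .restrictedGuest))
```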
In some embodiments, the computer system receives user input when the computer system is placed on the body of the respective user and when the first user interface (e.g., user interface 1304) is displayed. In response to receiving the user input, and in accordance with a determination that the user input corresponds to a request to access one or more accessibility settings (e.g., display size setting, accessibility scaling setting, accessibility gesture setting, and/or audio accessibility setting) (e.g., a request to display an accessibility settings user interface and/or a request to modify one or more accessibility settings), the computer system performs an operation associated with the one or more accessibility settings (e.g., whether the first user interface is displayed in the first mode or the second mode). In response to receiving the user input, and in accordance with a determination that the user input corresponds to a request to access a non-accessibility setting and that the first user interface is displayed in a first mode having allowable access to a plurality of features associated with the first user (e.g., fig. 13A-13B), the computer system performs an operation associated with the non-accessibility setting. In response to receiving the user input, and in accordance with a determination that the user input corresponds to a request to access a non-accessibility setting and that the first user interface is displayed in a second mode having restricted access to one or more of the plurality of features associated with the first user (e.g., fig. 13F-13K), the computer system forgoes performing the operation associated with the non-accessibility setting.
In some embodiments, the method further comprises, in response to receiving the user input: in accordance with a determination that the user input is a navigation input (e.g., a user input corresponding to a request to navigate within and/or from the first user interface) (e.g., a user input to navigate to (e.g., display) an application different from the first application, a user input to navigate from the first user interface to a different user interface (e.g., display a different user interface), a user input to navigate to a particular portion of the first user interface (e.g., display a particular portion of the first user interface), a user input to access a particular feature within the first user interface (e.g., display a particular feature within the first user interface), and/or a user input to access particular content within the first user interface (e.g., display particular content within the first user interface)): in accordance with a determination that the first user interface is displayed in a first mode having permitted access to a plurality of features associated with the first user, displaying a navigation effect corresponding to the navigation input (e.g., displaying an application different from the first application, displaying a user interface different from the first user interface, displaying a particular portion of the first user interface, displaying a particular feature of the first user interface, and/or displaying particular content within the first user interface); and in accordance with a determination that the first user interface is displayed in a second mode having restricted access to one or more of the plurality of features associated with the first user, forgoing displaying the navigation effect corresponding to the navigation input.
Enabling a user to access one or more accessibility settings while restricting access to other (e.g., more sensitive or user confidential) aspects of the system enhances the operability of the device, and makes the user-device interface more efficient (e.g., by allowing the user to configure the system for use by itself, by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
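As an illustration of the accessibility carve-out just described, the sketch below lets accessibility settings be modified in either mode while other settings changes are forgone in the restricted mode. The names SettingsGate and apply are hypothetical and are not part of the disclosure.

```swift
// Hypothetical sketch: in the restricted (second) mode, only accessibility
// settings may be modified; other settings requests are dropped.
struct SettingsGate {
    let restricted: Bool
    let accessibilityKeys: Set<String> = [
        "displaySize", "accessibilityZoom", "accessibilityGestures", "audioAccessibility"
    ]

    /// Returns true if the change was applied, false if it was forgone.
    func apply(key: String, value: Any, to store: inout [String: Any]) -> Bool {
        if restricted && !accessibilityKeys.contains(key) {
            return false // non-accessibility setting: forgo the operation in restricted mode
        }
        store[key] = value
        return true
    }
}

var store: [String: Any] = [:]
let gate = SettingsGate(restricted: true)
print(gate.apply(key: "accessibilityZoom", value: 2.0, to: &store))     // true: allowed
print(gate.apply(key: "notificationsEnabled", value: true, to: &store)) // false: forgone
```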
In some embodiments, the computer system receives user input corresponding to a request to enable one or more accessibility settings (e.g., display size settings, accessibility zoom settings, accessibility gesture settings, and/or audio accessibility settings) when the computer system is placed on the body of the first user and when the first user interface is displayed in a first mode (e.g., fig. 13A-13B) with allowable access to a plurality of features associated with the first user. In response to the user input corresponding to the request to enable the one or more accessibility settings, the computer system enables the one or more accessibility settings. When the one or more accessibility settings are enabled, the computer system detects, via the one or more input devices, that the computer system has been removed from the body of the first user (e.g., user 703, fig. 13C) (e.g., the user has stopped wearing the computer system). After detecting that the computer system has been removed from the body of the first user, the computer system detects, via the one or more input devices, that the computer system has been placed on the body of a second respective user (e.g., fig. 13D, 13E, 13F) (e.g., the second respective user has put on the computer system). In response to detecting that the computer system has been placed on the body of the second respective user, and in accordance with a determination that biometric information (e.g., a fingerprint, an image (e.g., a photograph and/or a scan) representing the face of the respective user, and/or iris identification information (e.g., iris scan information)) received via the one or more input devices (in some embodiments, the biometric information is received while the computer system is being worn by the second respective user) corresponds to the first user (e.g., user 703, fig. 13D) (e.g., in accordance with a determination that the second respective user is the first user (e.g., whether or not the set of handoff criteria has been met)), the computer system displays, via the display generation component, the first user interface (e.g., user interface 1304) in a first mode having allowable access to a plurality of features associated with the first user while maintaining the one or more accessibility settings in an enabled state.
In response to detecting that the computer system has been placed on the body of the second respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user (e.g., in accordance with a determination that the second respective user is not the first user) and that a set of handoff criteria has been met (e.g., fig. 13F), the computer system displays, via the display generation component, the first user interface (e.g., user interface 1304) in a second mode (e.g., a constrained mode, a guest mode, and/or a handoff mode) having restricted access to one or more features of the plurality of features associated with the first user while maintaining the one or more accessibility settings in an enabled state. In some implementations, the first user can enable one or more accessibility settings (e.g., make font larger, open accessibility gesture settings) while in the first mode prior to handing over the device to the respective user. Thus, even when in the second mode, one or more accessibility settings remain enabled for the benefit of the respective user.
Maintaining the one or more accessibility settings set by the first user allows the respective user to use the computer system with those settings without providing additional user input. Performing an operation when a set of conditions has been met without requiring further user input enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
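One possible reading of the handoff behavior above, expressed as a minimal sketch only (the types Session and AccessibilitySettings and the function sessionAfterPlacement are invented for illustration), is that the session mode changes with the wearer while the owner's accessibility settings carry over unchanged.

```swift
// Hypothetical sketch: when the device is handed to a guest, the session mode
// changes but accessibility settings enabled by the owner are carried over.
struct AccessibilitySettings {
    var largeText = false
    var accessibilityGestures = false
}

struct Session {
    var restricted: Bool
    var accessibility: AccessibilitySettings
}

func sessionAfterPlacement(previous: Session, biometricMatchesOwner: Bool,
                           handoffCriteriaMet: Bool) -> Session? {
    if biometricMatchesOwner {
        // First mode: full access, settings preserved.
        return Session(restricted: false, accessibility: previous.accessibility)
    } else if handoffCriteriaMet {
        // Second mode: restricted access, but accessibility settings stay enabled.
        return Session(restricted: true, accessibility: previous.accessibility)
    }
    return nil // neither the owner nor a valid handoff: no session started
}

let owner = Session(restricted: false,
                    accessibility: AccessibilitySettings(largeText: true,
                                                         accessibilityGestures: true))
let guest = sessionAfterPlacement(previous: owner, biometricMatchesOwner: false,
                                  handoffCriteriaMet: true)
print(guest?.accessibility.largeText ?? false) // true: setting persists for the guest
```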
In some embodiments, upon displaying a first user interface (e.g., user interface 1304) (and optionally after detecting that the computer system has been placed on the body of the respective user), the computer system receives information (e.g., an email message, an SMS message, an instant message, an alert, calendar information, and/or other information). In response to receiving the information, and in accordance with a determination that the first user interface is displayed in a first mode having allowable access to a plurality of features associated with the first user (e.g., fig. 13B), the computer system provides (e.g., displays, outputs audio, and/or performs tactile output) a notification (e.g., notification 1308) corresponding to the received information (e.g., a visual indication of the received email, SMS message, instant message, alert, calendar information, and/or other information). In response to receiving the information, and in accordance with a determination that the first user interface is displayed in a second mode with restricted access to one or more of the plurality of features associated with the first user (e.g., fig. 13F-13K), the computer system forgoes providing (e.g., displaying, outputting audio, and/or performing tactile output) a notification corresponding to the received information (e.g., fig. 13K, device 700 forgoes providing a notification corresponding to a new message for first user 703). In some implementations, a notification (e.g., a visual indication of a received email) corresponding to the received information is provided (e.g., displayed) in accordance with a determination that the first user interface is displayed in the first mode. In some implementations, displaying the first user interface in a second mode having restricted access to one or more of the plurality of features associated with the first user includes forgoing displaying the one or more notifications for the first user (e.g., a user viewing the user interface in the second mode having restricted access does not see notifications that would have been displayed in the first mode for the first user).
Forgoing providing notifications when the computer system is operating in the second mode with restricted access enhances security and may prevent unauthorized users from initiating sensitive operations (e.g., by preventing users other than the first user from viewing notifications intended for the first user). Forgoing providing notifications when the computer system is operating in the second mode with restricted access also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
In some embodiments, notifications that were not provided during the display of the first user interface in the second mode with restricted access (e.g., notification 1360 on device 1350, fig. 13K) are provided on an external computer system (e.g., smart phone, smart watch, tablet, and/or wearable device) that is different from the computer system and associated with the first user. Providing notifications on an external computer system provides feedback to the user regarding the current state of the device (e.g., the computer system has received information and/or notifications). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. Providing a notification on an external computer system when a user other than the first user is using the computer system enhances security. For example, providing a notification on an external computer system may prevent an unauthorized user from viewing sensitive information. Providing notifications on an external computer system while a user other than the first user is using the computer system also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
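A minimal sketch of the notification routing described above follows; routeNotification is a hypothetical function name and this is not the disclosed implementation. In the restricted mode the notification is redirected to the owner's paired device rather than shown locally.

```swift
// Hypothetical sketch: incoming information produces a local notification in the
// first mode, and is redirected to the owner's paired device in the second mode.
enum NotificationDestination {
    case onDevice(message: String)
    case pairedExternalDevice(message: String)
}

func routeNotification(message: String, restrictedMode: Bool) -> NotificationDestination {
    if restrictedMode {
        // A guest is wearing the device: suppress locally, deliver to the owner's phone/watch.
        return .pairedExternalDevice(message: message)
    } else {
        return .onDevice(message: message)
    }
}

print(routeNotification(message: "New message from Alice", restrictedMode: true))
```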
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user (e.g., fig. 13E, the electronic device 700 determines that the biometric information of the user 1103 does not correspond to the first user (e.g., user 703)) (in some embodiments, whether or not the set of handoff criteria has been met), the computer system switches eye tracking calibration settings from a first set of eye tracking calibration settings specific to the first user (e.g., eye calibration settings specific to user 703) to a second set of eye tracking calibration settings that is different from the first set of eye tracking calibration settings (e.g., a set of generic and/or default eye tracking calibration settings, and/or a set of eye tracking calibration settings specific to the respective user). In some embodiments, the method further comprises, in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first user, maintaining the first set of eye tracking calibration settings specific to the first user.
Automatically switching eye tracking calibration settings from a first set of eye tracking calibration settings to a second set of eye tracking calibration settings based on a determination that the biometric information does not correspond to the first user provides the user with the ability to apply various settings (e.g., eye tracking calibration settings) without explicitly requesting application of those settings. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
Automatically switching the eye tracking calibration settings from the first set of eye tracking calibration settings to the second set of eye tracking calibration settings based on a determination that the biometric information does not correspond to the first user makes the calibration more accurate by removing calibration corrections that may be specific to the first user when a different user is using the computer system. Improving calibration accuracy enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user (e.g., fig. 13E, the electronic device 700 determines that the biometric information of the user 1103 does not correspond to the first user (e.g., user 703)) (in some embodiments, whether or not the set of handoff criteria has been met), the computer system switches hand tracking calibration settings from a first set of hand tracking calibration settings specific to the first user (e.g., a set of hand tracking calibration settings specific to the user 703) to a second set of hand tracking calibration settings that is different from the first set of hand tracking calibration settings (e.g., a set of generic and/or default hand tracking calibration settings, and/or a set of hand tracking calibration settings specific to the respective user). In some embodiments, the method further comprises, in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first user, maintaining the first set of hand tracking calibration settings specific to the first user. In some embodiments, the method further comprises, in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first user, maintaining the first set of eye tracking calibration settings specific to the first user.
Automatically switching hand tracking calibration settings from the first set of hand tracking calibration settings to the second set of hand tracking calibration settings based on a determination that the biometric information does not correspond to the first user provides the user with the ability to apply various settings (e.g., hand tracking calibration settings) without explicitly requesting application of those settings. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device (e.g., by performing the operation without additional user input) and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user errors in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
Automatically switching the hand tracking calibration settings from the first set of hand tracking calibration settings to the second set of hand tracking calibration settings based on a determination that the biometric information does not correspond to the first user makes the calibration more accurate by removing calibration corrections that may be specific to the first user when a different user is using the computer system. Improving calibration accuracy enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
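The calibration behavior described in the last several paragraphs can be illustrated with a short sketch; CalibrationProfile and activeCalibration are hypothetical names, and reducing a profile to plain offset arrays is purely for illustration.

```swift
// Hypothetical sketch: per-user eye and hand tracking calibration is applied only
// when biometrics match the enrolled owner; otherwise generic defaults are used.
struct CalibrationProfile {
    var eyeGazeOffsets: [Double]
    var handPoseOffsets: [Double]
    static let generic = CalibrationProfile(eyeGazeOffsets: [0, 0], handPoseOffsets: [0, 0])
}

func activeCalibration(ownerProfile: CalibrationProfile,
                       biometricMatchesOwner: Bool) -> CalibrationProfile {
    // Removing owner-specific corrections for other wearers keeps tracking
    // from being skewed by calibration data gathered for someone else.
    biometricMatchesOwner ? ownerProfile : .generic
}

let ownerProfile = CalibrationProfile(eyeGazeOffsets: [0.3, -0.1], handPoseOffsets: [1.2, 0.4])
print(activeCalibration(ownerProfile: ownerProfile, biometricMatchesOwner: false).eyeGazeOffsets) // [0.0, 0.0]
```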
In some implementations, a first user interface (e.g., user interface 1304) is displayed on a first display portion (e.g., a first display and/or an internal display) of the computer system in the second mode having restricted access. In response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user, and that the set of handoff criteria has been met, and that the computer system is operating in the second mode having restricted access to one or more of the plurality of features associated with the first user, the computer system displays, on a second display portion of the computer system (e.g., a second display separate and distinct from the first display, and/or an external display) that is distinct from the first display portion, an indication of what content is being displayed on the first display portion. In some embodiments, the method further comprises, in response to detecting that the computer system has been placed on the body of the respective user: in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first user, forgoing displaying, on the second display portion, an indication of what content is being displayed on the first display portion. Displaying an indication of what content is being displayed on the first display portion on the second display portion of the computer system provides feedback to the user regarding the current state of the device (e.g., what content is being displayed on the first display portion of the computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays an indication of the currently logged-in user (e.g., displays a name, user name, and/or avatar (e.g., avatar 1302) corresponding to the user currently logged into the computer system) on an external portion of the computer system (e.g., on a second display). In some embodiments, in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user (e.g., in accordance with a determination that the respective user is not the first user) and that the set of handoff criteria has been met, the computer system displays, on an external portion of the computer system, an indication that the first user is currently logged into the computer system even when the computer system is placed on the body of the respective user that is not the first user. In some implementations, the computer system is a head-mounted system (e.g., a headset). In some implementations, the head-mounted system has an internal display that displays a user interface (e.g., user interface 1304) and an external display (e.g., separate from the internal display) that displays an indication of a currently logged-in user. In some embodiments, the internal display is visible to (e.g., only visible to) a user of the head-mounted system. In some embodiments, the external display is visible to other individuals who are not users of the head-mounted system. In some embodiments, the external display is not visible to a user of the head-mounted system when the head-mounted system is in use. Displaying an indication of the currently logged-in user on an external portion of the computer system provides feedback to the user regarding the current state of the device (e.g., who is currently logged into the computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
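To illustrate the external-display behavior, here is a small sketch with invented names (ExternalDisplayContent, externalDisplayContent); it assumes the outward-facing display always names the logged-in owner and, in the restricted mode, may also summarize what is shown internally.

```swift
// Hypothetical sketch: the outward-facing display keeps showing the logged-in
// owner, and in guest mode it also mirrors a summary of what the guest sees.
struct ExternalDisplayContent {
    let loggedInUserName: String
    let guestModeSummary: String? // nil when the owner is wearing the device
}

func externalDisplayContent(ownerName: String, restrictedMode: Bool,
                            internalSummary: String) -> ExternalDisplayContent {
    ExternalDisplayContent(
        loggedInUserName: ownerName,
        guestModeSummary: restrictedMode ? internalSummary : nil
    )
}

let content = externalDisplayContent(ownerName: "User 703", restrictedMode: true,
                                     internalSummary: "Photos app, album view")
print(content.loggedInUserName, content.guestModeSummary ?? "-")
```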
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has been met, the computer system (e.g., electronic device 700) transmits a notification (e.g., notification 1354) to a second computer system (e.g., electronic device 1350) of the first user (e.g., a smart phone, tablet, desktop computer, laptop computer, smartwatch, and/or wearable device) that the computer system is operating in a second mode with restricted access to one or more of the plurality of features associated with the first user. In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and has met the set of handoff criteria, the computer system initiates a process of displaying, on an external computer system (e.g., a smartphone, tablet, desktop computer, laptop computer, smartwatch, and/or wearable device) that corresponds to the first user and that is different from the computer system, a notification that the computer system is operating in a second mode with restricted access to one or more of the plurality of features associated with the first user.
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first user, the computer system foregoes transmitting a notification that the computer system is operating in a second mode having restricted access to one or more of the plurality of features associated with the first user. In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has not been met, the computer system foregoes transmitting a notification that the computer system is operating in a second mode having restricted access to one or more of the plurality of features associated with the first user.
Transmitting a notification that the computer system is operating in the second mode with restricted access provides feedback to the user regarding the current state of the device (e.g., that the computer system is operating in the second mode with restricted access). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
Transmitting a notification to the second computer system that the computer system is operating in a second mode with restricted access enhances security. For example, transmitting a notification to the second computer system that the computer system is operating in the second mode may inform the user whether their computer system is being used by other users, and may prevent unauthorized users from viewing sensitive information or performing sensitive operations. Transmitting a notification to the second computer system that the computer system is operating in the second mode with restricted access also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of the restricted operation.
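A sketch of the owner-alerting behavior follows, with hypothetical names (OutboundMessage, guestModeAlert); it simply produces a message for the owner's companion device when biometrics do not match and the handoff criteria are met.

```swift
// Hypothetical sketch: when a non-owner passes the handoff check, the device
// queues a "your device is in guest mode" notification for the owner's other device.
struct OutboundMessage {
    let targetDeviceID: String
    let payload: String
}

func guestModeAlert(ownerPhoneID: String, biometricMatchesOwner: Bool,
                    handoffCriteriaMet: Bool) -> OutboundMessage? {
    guard !biometricMatchesOwner, handoffCriteriaMet else { return nil }
    return OutboundMessage(targetDeviceID: ownerPhoneID,
                           payload: "Your headset is being used in guest mode.")
}

if let alert = guestModeAlert(ownerPhoneID: "phone-1350", biometricMatchesOwner: false,
                              handoffCriteriaMet: true) {
    print("Send to \(alert.targetDeviceID): \(alert.payload)")
}
```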
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has been met, the computer system transmits, to a second computer system of the first user (e.g., electronic device 1350) (e.g., a smartphone, tablet, desktop computer, laptop computer, smartwatch, and/or wearable device), a visual indication of content being displayed by the computer system while the computer system is operating in the second mode with restricted access to one or more features associated with the first user (e.g., user interface 1358 on electronic device 1350) (e.g., copying, on an external computer system, content displayed by the computer system when the computer system is operating in the second mode with restricted access).
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and has met the set of handoff criteria, the computer system initiates a process of displaying, on an external computer system (e.g., a smartphone, tablet, desktop computer, laptop computer, smartwatch, and/or wearable device) that corresponds to the first user and that is different from the computer system, content displayed by the computer system when the computer system is operating in a second mode having restricted access to one or more of the plurality of features associated with the first user (e.g., copying, on the external computer system, content being displayed by the computer system when the computer system is operating in the second mode having restricted access).
In some embodiments, in response to detecting that a computer system has been placed on the body of a respective user, and in accordance with a determination that biometric information received via one or more input devices corresponds to a first user, the computer system relinquishes transmission of content being displayed by the computer system to a second computer system of the first user. In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has not been met, the computer system relinquishes transmission of content being displayed by the computer system to a second computer system of the first user.
Transmitting content being displayed on the computer system to the second computer system of the first user provides feedback to the user regarding the current state of the device (e.g., what content is being displayed on the computer system). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
Transmitting content being displayed on the computer system to the second computer system of the first user enhances security. For example, transmitting content being displayed on a computer system to a second computer system of a first user allows the first user to know what information is being presented on the computer system and may prevent unauthorized users from viewing sensitive information or performing sensitive operations. Transmitting content being displayed on the computer system to the second computer system of the first user also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of restricted operations.
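The content-mirroring behavior can be sketched as follows; CompanionMirror and screenDidChange are invented names, and forwarding a textual description of the screen stands in for whatever representation the disclosure contemplates.

```swift
// Hypothetical sketch: while in guest mode, a description of the current screen is
// forwarded to the owner's companion device so the owner can see what the guest views.
struct ScreenUpdate {
    let summary: String
}

final class CompanionMirror {
    private(set) var forwarded: [ScreenUpdate] = []

    func screenDidChange(to summary: String, restrictedMode: Bool,
                         biometricMatchesOwner: Bool) {
        // Forward only when someone other than the owner is using the device.
        guard restrictedMode, !biometricMatchesOwner else { return }
        forwarded.append(ScreenUpdate(summary: summary))
    }
}

let mirror = CompanionMirror()
mirror.screenDidChange(to: "Guest browsing shared photo album",
                       restrictedMode: true, biometricMatchesOwner: false)
print(mirror.forwarded.map(\.summary))
```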
In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and has met the set of handoff criteria, the computer system concurrently displays a visual indication (e.g., indication 1310) that the computer system is operating in a second mode with restricted access (e.g., displays text and/or visual symbols indicating that the computer system is operating in the second mode with restricted access) with the first user interface in the second mode with restricted access (e.g., user interface 1304 in fig. 13F-13K). In some embodiments, in response to detecting that the computer system has been placed on the body of the respective user, and in accordance with a determination that the biometric information received via the one or more input devices corresponds to the first user, the computer system foregoes displaying a visual indication that the computer system is operating in a second mode with restricted access (e.g., fig. 13D).
Displaying a visual indication that the computer system is operating in a second mode with restricted access provides feedback to the user regarding the current state of the device (e.g., the computer system is operating in the second mode with restricted access). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper input and reducing user error in operating/interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
Displaying a visual indication that the computer system is operating in a second mode with restricted access enhances security. For example, displaying a visual indication that the computer system is operating in a second mode with restricted access informs the first user that the computer system is being operated by another user, and may prevent an unauthorized user from viewing sensitive information or performing sensitive operations. Displaying a visual indication that the computer system is operating in a second mode with restricted access also enhances operability of the device and makes the user-device interface more efficient (e.g., by restricting unauthorized access), which additionally reduces power usage and extends battery life of the device by restricting performance of the restricted operation.
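Finally, a short sketch of the in-view guest indicator; ViewModel and makeViewModel are hypothetical names used only to show the banner being tied to the restricted mode.

```swift
// Hypothetical sketch: a small banner is composed into the guest's view so it is
// unambiguous that the session is running with restricted access.
struct ViewModel {
    let title: String
    let showsGuestBanner: Bool
}

func makeViewModel(screenTitle: String, restrictedMode: Bool) -> ViewModel {
    ViewModel(title: screenTitle, showsGuestBanner: restrictedMode)
}

let vm = makeViewModel(screenTitle: "Photos", restrictedMode: true)
print(vm.showsGuestBanner ? "\(vm.title) — Guest mode" : vm.title)
```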
It is noted that the details of the process described above with reference to method 1400 (e.g., fig. 14A-14B) also apply in a similar manner to the methods described above. For example, methods 800, 1000, and/or 1200 optionally include one or more features of the various methods described above with reference to method 1400. For example, in method 800, enabling and/or forgoing enabling the computer system to be used with one or more settings associated with the first user account associated with the first registered user is optionally performed based on the determination as to whether the handoff criteria have been met. As another example, user-specific device calibration settings and/or avatars as recited in methods 1000 and 1200, respectively, may be selectively applied when the device is handed off between different users. For the sake of brevity, these details are not repeated herein.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Those skilled in the art will be able to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
While the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It should be understood that such variations and modifications are considered to be included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is to collect and use data from various sources to improve delivery of content to a user that may be of interest to the user. The present disclosure contemplates that in some examples, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, tweet IDs, home addresses, data or records related to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, health and fitness data may be used to provide insight into the overall health of a user, or may be used as positive feedback to individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will adhere to established privacy policies and/or privacy practices. In particular, such entities should implement and consistently apply privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible to users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses by the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps to safeguard and secure access to such personal information data and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Notwithstanding the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, with respect to user authentication, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or at any time thereafter. As another example, users can choose not to provide personal information, such as biometric information, for user authentication. As another example, users can choose to limit the length of time personal information is maintained or to prohibit the collection of personal information entirely. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed and then reminded again just before personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the system, or publicly available information.

Claims (78)

1. A method, comprising:
at a computer system in communication with a display generation component and one or more input devices:
detecting that at least a portion of the computer system has been placed on the body of a respective user; and
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to a first registered user, enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first registered user, forgoing enabling the computer system to be used with the one or more settings associated with the first user account associated with the first registered user.
2. The method of claim 1, further comprising, in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first registered user and the biometric information received via the one or more input devices corresponds to a second registered user different from the first registered user, the computer system is enabled to be used with one or more settings associated with a second user account different from the first user account and associated with the second registered user.
3. The method of any of claims 1-2, further comprising, in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to a registered user, entering a guest operation mode.
4. A method according to any one of claims 1 to 3, further comprising, in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to a registered user, forgoing logging the computer system into any user account.
5. The method of any one of claims 1 to 4, further comprising:
detecting that the at least a portion of the computer system has been removed from the body of the respective user when the computer system is enabled for use with one or more settings associated with the first user account associated with the first registered user; and
in response to detecting that the at least a portion of the computer system has been removed from the body of the respective user, ceasing to enable the computer system to be used with the one or more settings associated with the first user account associated with the first registered user.
6. The method of any one of claims 1 to 5, wherein:
the biometric information received via the one or more input devices is iris identification information;
determining that biometric information received via the one or more input devices corresponds to the first registered user includes determining that iris identification information received via the one or more input devices corresponds to the first registered user; and is also provided with
Determining that biometric information received via the one or more input devices does not correspond to the first registered user includes determining that iris identification information received via the one or more input devices does not correspond to the first registered user.
7. The method of any of claims 1-6, further comprising, in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices corresponds to a respective registered user, displaying a visual indication that the computer system has been enabled for use with one or more settings associated with the respective registered user.
8. The method of any of claims 1-7, further comprising, in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to a registered user, displaying a user selection user interface that includes a plurality of selectable options, the plurality of selectable options including:
a first selectable option corresponding to the first registered user; and
a second selectable option corresponding to a second registered user different from the first registered user.
9. The method of claim 8, further comprising:
receiving user input corresponding to a selection of a respective selectable option of the plurality of selectable options when the user selection user interface including the plurality of selectable options is displayed, the respective selectable option corresponding to a respective registered user;
receiving updated biometric information via the one or more input devices after receiving the user input corresponding to selection of the respective selectable option; and
in response to receiving the updated biometric information:
In accordance with a determination that biometric information received via the one or more input devices corresponds to the respective registered user, enabling the computer system to be used with one or more settings associated with a respective user account associated with the respective registered user; and
in accordance with a determination that biometric information received via the one or more input devices does not correspond to the respective registered user, forgoing enabling the computer system to be used with the one or more settings associated with the respective user account associated with the respective registered user.
10. The method of claim 8, further comprising:
receiving user input corresponding to a selection of a respective selectable option of the plurality of selectable options when the user selection user interface including the plurality of selectable options is displayed, the respective selectable option corresponding to a respective registered user;
after receiving the user input corresponding to the selection of the respective selectable option and in accordance with not meeting biometric criteria:
in accordance with a determination that the first setting of the respective registered user is not enabled, displaying a password input user interface via the display generating component, and
In accordance with a determination that the first setting of the respective enrolled user is enabled, performing automatic biometric authentication, comprising:
in accordance with a determination that updated biometric information received via the one or more input devices corresponds to the respective registered user, enabling the computer system to be used with one or more settings associated with a respective user account associated with the respective registered user; and
in accordance with a determination that the updated biometric information received via the one or more input devices does not correspond to the respective registered user, forgoing enabling the computer system to be used with one or more settings associated with the respective user account associated with the respective registered user.
11. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 1-10.
12. A computer system, comprising:
a display generation section;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-10.
13. A computer system, comprising:
a display generation section;
one or more input devices; and
apparatus for performing the method of any one of claims 1 to 10.
14. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
detecting that at least a portion of the computer system has been placed on the body of a respective user; and
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to a first registered user, enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first registered user, forgoing enabling the computer system to be used with the one or more settings associated with the first user account associated with the first registered user.
15. A computer system, comprising:
a display generation section;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
detecting that at least a portion of the computer system has been placed on the body of a respective user; and
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to a first registered user, enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first registered user, forgoing enabling the computer system to be used with the one or more settings associated with the first user account associated with the first registered user.
16. A computer system, comprising:
a display generation section;
one or more input devices;
means for detecting that at least a portion of the computer system has been placed on the body of a respective user; and
means for, in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to a first registered user, enabling the computer system to be used with one or more settings associated with a first user account associated with the first registered user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first registered user, forgoing enabling the computer system to be used with the one or more settings associated with the first user account associated with the first registered user.
17. A method, comprising:
at a computer system in communication with a display generation component and one or more input devices:
detecting that at least a portion of the computer system has been placed on the body of a respective user;
detecting input from the respective user based on movement or position of at least a portion of the body of the respective user after detecting that at least a portion of the computer system has been placed on the body of the respective user; and
in response to detecting the input from the respective user, responding to the input from the respective user, including:
in accordance with a determination that the respective user is a first user that has previously registered with the computer system, generating a response to the input based on the movement or location of the portion of the respective user's body and a first set of device calibration settings specific to the first user; and
in accordance with a determination that the respective user is not the first user, generating a response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings specific to the first user.
18. The method of claim 17, wherein generating the response to the input based on the movement or position of the portion of the respective user's body and without using the first set of device calibration settings specific to the first user comprises:
in accordance with a determination that the respective user is an unregistered user, a response to the input is generated based on the movement or location of the portion of the body of the respective user and a second set of device calibration settings that are different from the first set of device calibration settings and that represent a set of guest device calibration settings for the unregistered user.
19. The method of any of claims 17-18, wherein generating the response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings specific to the first user comprises:
in accordance with a determination that the respective user is a second user different from the first user that has been previously registered with the computer system, a response to the input is generated based on the movement or location of the portion of the body of the respective user and a third set of device calibration settings different from the first set of device calibration settings and specific to the second user.
20. The method of any of claims 17-19, wherein the first set of device calibration settings is determined based on a plurality of device calibration inputs received from the first user.
21. The method of claim 20, wherein generating the response to the input based on the movement or position of the portion of the respective user's body and without using the first set of device calibration settings specific to the first user comprises:
in accordance with a determination that the respective user is an unregistered user, generating a response to the input based on the movement or position of the portion of the body of the respective user and a second set of device calibration settings that are different from the first set of device calibration settings and that represent a set of guest device calibration settings for the unregistered user, wherein the second set of device calibration settings is determined based on a plurality of device calibration inputs received from the unregistered user.
23. The method of claim 21, wherein the plurality of device calibration inputs received from the unregistered user is a subset of device calibration inputs that is smaller than the plurality of device calibration inputs received from the first user.
23. The method of claim 20, wherein generating the response to the input based on the movement or position of the portion of the respective user's body and without using the first set of device calibration settings specific to the first user comprises:
in accordance with a determination that the respective user is an unregistered user, generating a response to the input based on the movement or position of the portion of the body of the respective user and a second set of device calibration settings that are different from the first set of device calibration settings and that represent a set of guest device calibration settings for the unregistered user, wherein the second set of device calibration settings is a set of default device calibration settings and is not based on user input from the unregistered user.
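Claims 18 and 21 to 23 describe where a guest calibration set can come from: either a reduced set of calibration inputs collected from the unregistered user, or defaults that require no input at all. A small, self-contained Swift sketch under those assumptions; the names are invented for illustration.

```swift
// Hypothetical guest calibration; names are illustrative only.
struct GuestCalibration {
    var handScale: Double
    static let defaults = GuestCalibration(handScale: 1.0)
}

/// Builds a guest calibration set. A handful of samples from an abbreviated
/// guest calibration flow may be supplied; with no samples, defaults are used.
func guestCalibration(fromSamples samples: [Double]) -> GuestCalibration {
    guard !samples.isEmpty else {
        return .defaults     // default settings, not based on any user input
    }
    // Derive settings from a smaller set of inputs than full enrollment collects.
    let meanScale = samples.reduce(0, +) / Double(samples.count)
    return GuestCalibration(handScale: meanScale)
}
```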
24. The method of any of claims 17 to 23, wherein the first set of device calibration settings comprises one or more eye and/or gaze movement calibration settings.
25. The method of any of claims 17 to 24, wherein the first set of device calibration settings comprises one or more hand movement calibration settings.
26. The method of any of claims 17-25, wherein generating the response to the input based on the movement or position of the portion of the respective user's body and the first set of device calibration settings specific to the first user comprises enabling the computer system to be used with the first set of device calibration settings specific to the first user, and the method further comprises:
detecting that the at least a portion of the computer system has been removed from the body of the respective user while the computer system is enabled to be used with the first set of device calibration settings specific to the first user; and
in response to detecting that the at least a portion of the computer system has been removed from the body of the respective user, ceasing to enable the computer system to be used with the first set of device calibration settings specific to the first user.
27. The method of any of claims 17 to 26, wherein the determination that the respective user is the first user is performed automatically in response to detecting that the at least a portion of the computer system has been placed on a user's body.
28. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 17-27.
29. A computer system, comprising:
a display generation component;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 17-27.
30. A computer system, comprising:
a display generation component;
one or more input devices; and
means for performing the method of any one of claims 17 to 27.
31. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
detecting that at least a portion of the computer system has been placed on the body of a respective user;
detecting input from the respective user based on movement or position of at least a portion of the body of the respective user after detecting that at least a portion of the computer system has been placed on the body of the respective user; and
in response to detecting the input from the respective user, responding to the input from the respective user, including:
in accordance with a determination that the respective user is a first user that has previously registered with the computer system, generating a response to the input based on the movement or position of the portion of the body of the respective user and a first set of device calibration settings specific to the first user; and
in accordance with a determination that the respective user is not the first user, generating a response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings specific to the first user.
32. A computer system, comprising:
a display generation component;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
detecting that at least a portion of the computer system has been placed on the body of a respective user;
detecting input from the respective user based on movement or position of at least a portion of the body of the respective user after detecting that at least a portion of the computer system has been placed on the body of the respective user; and
in response to detecting the input from the respective user, responding to the input from the respective user, including:
in accordance with a determination that the respective user is a first user that has previously registered with the computer system, generating a response to the input based on the movement or position of the portion of the body of the respective user and a first set of device calibration settings specific to the first user; and
in accordance with a determination that the respective user is not the first user, generating a response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings specific to the first user.
33. A computer system, comprising:
a display generation component;
one or more input devices;
means for detecting that at least a portion of the computer system has been placed on the body of a respective user;
means for detecting input from the respective user based on movement or position of at least a portion of the body of the respective user after detecting that at least a portion of the computer system has been placed on the body of the respective user; and
in response to detecting the input from the respective user, means for responding to the input from the respective user, comprising:
in accordance with a determination that the respective user is a first user that has previously registered with the computer system, generating a response to the input based on the movement or position of the portion of the body of the respective user and a first set of device calibration settings specific to the first user; and
in accordance with a determination that the respective user is not the first user, generating a response to the input based on the movement or position of the portion of the body of the respective user and without using the first set of device calibration settings specific to the first user.
34. A method, comprising:
at a first computer system in communication with a display generation component and one or more input devices:
detecting a request to display an avatar of a user of the respective computer system; and
in response to detecting the request to display the avatar, displaying an avatar of the user of the respective computer system, comprising:
in accordance with a determination that the user of the respective computer system is a registered user of the respective computer system, displaying the avatar having the appearance selected by the user of the respective computer system, wherein the avatar moves based on movements of the user detected by one or more sensors of the respective computer system; and
in accordance with a determination that the user of the respective computer system is not a registered user of the respective computer system, displaying the avatar with a placeholder appearance that does not represent the appearance of the user of the respective computer system, wherein the avatar moves based on movement of the user detected by one or more sensors of the respective computer system.
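Claim 34 selects between a user-chosen avatar appearance and a placeholder appearance, while both branches keep animating the avatar from the wearer's sensed movement. The hypothetical Swift below sketches that selection; the appearance, pose, and avatar types are invented for illustration.

```swift
// Hypothetical avatar model; all names invented for illustration.
enum AvatarAppearance {
    case selected(name: String)   // appearance previously chosen by a registered user
    case placeholder              // abstract appearance for an unrecognized wearer
}

struct BodyPose {
    var headYaw: Double
    var handHeight: Double
}

struct Avatar {
    var appearance: AvatarAppearance
    var pose: BodyPose
}

/// Builds the avatar for whoever is currently wearing the device. In both
/// branches the avatar is animated from the wearer's sensed movement.
func avatar(selectedAppearanceName: String?, sensedPose: BodyPose) -> Avatar {
    let appearance: AvatarAppearance
    if let name = selectedAppearanceName {
        appearance = .selected(name: name)   // registered user: use chosen appearance
    } else {
        appearance = .placeholder            // unregistered user: placeholder appearance
    }
    return Avatar(appearance: appearance, pose: sensedPose)
}
```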
35. The method of claim 34, wherein the avatar visually represents a user of the first computer system.
36. The method of any of claims 34 to 35, wherein the avatar visually represents a user of a second computer system different from the first computer system.
37. The method of any of claims 34 to 36, wherein the avatar is displayed at one or more computer systems of one or more users interacting with the user of the respective computer system.
38. The method of any of claims 34 to 37, wherein at least a portion of the avatar is displayed via a display generation component in communication with the respective computer system.
39. The method of any of claims 34-38, wherein the registered user is a first registered user and the appearance is a first appearance, and the method further comprises:
in accordance with a determination that the user of the respective computer system is a second registered user of the respective computer system that is different from the first registered user, displaying the avatar having a second appearance that is different from the first appearance, wherein the second appearance is selected by the second registered user.
40. The method of any of claims 34-39, wherein the determination that the user of the respective computer system is a registered user of the respective computer system is performed based on an automatic biometric identification of the user as a registered user, and wherein the automatic biometric identification comprises an eye-based identification.
41. The method of claim 40, wherein the automatic biometric identification is performed automatically in response to determining that at least a portion of the respective computer system has been placed on the body of the user.
42. The method of any of claims 34-41, wherein the registered user is a first user and the appearance is a first appearance selected by the first user, and the method further comprises:
detecting, via the one or more input devices, that the computer system has been removed from the body of the first user;
after detecting that the computer system has been removed from the body of the first user, detecting, via the one or more input devices, that the computer system has been placed on the body of the respective user; and
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the first user is no longer the user of the respective computer system, ceasing to display the avatar having the first appearance selected by the first user; and
in accordance with a determination that the user of the respective computer system is a second user of the respective computer system that is different from the first user, displaying the avatar having a second appearance.
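Claim 42 adds the handoff case for avatars: once the device leaves the first user's body and is put on by someone else, the first user's chosen appearance stops being shown and, if the new wearer is a different registered user, that user's appearance is shown instead. A hypothetical state transition, with invented names:

```swift
/// Tracks whose selected appearance the avatar currently shows; nil means placeholder.
final class AvatarPresenter {
    private(set) var displayedAppearanceOwner: String?

    /// Called after the device, previously worn by `previousOwner`, is detected
    /// on a body again. `identifiedUser` is nil when no registered user matches.
    func devicePlacedOnBody(identifiedUser: String?, previousOwner: String) {
        guard identifiedUser != previousOwner else {
            return                               // same user: keep the same appearance
        }
        // The first user is no longer the wearer: stop showing their appearance.
        // If a different registered user was identified, show that user's instead.
        displayedAppearanceOwner = identifiedUser
    }
}
```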
43. The method of any of claims 34 to 42, wherein the placeholder appearance is an abstract representation.
44. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 34-43.
45. A computer system, comprising:
a display generation component;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 34-43.
46. A computer system, comprising:
a display generation component;
one or more input devices; and
means for performing the method of any one of claims 34 to 43.
47. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a first computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
detecting a request to display an avatar of a user of the respective computer system; and
in response to detecting the request to display the avatar, displaying an avatar of the user of the respective computer system, comprising:
in accordance with a determination that the user of the respective computer system is a registered user of the respective computer system, displaying the avatar having the appearance selected by the user of the respective computer system, wherein the avatar moves based on movements of the user detected by one or more sensors of the respective computer system; and
in accordance with a determination that the user of the respective computer system is not a registered user of the respective computer system, displaying the avatar with a placeholder appearance that does not represent the appearance of the user of the respective computer system, wherein the avatar moves based on movement of the user detected by one or more sensors of the respective computer system.
48. A computer system, comprising:
a display generation component;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
detecting a request to display an avatar of a user of the respective computer system; and
in response to detecting the request to display the avatar, displaying an avatar of the user of the respective computer system, comprising:
in accordance with a determination that the user of the respective computer system is a registered user of the respective computer system, displaying the avatar having the appearance selected by the user of the respective computer system, wherein the avatar moves based on movements of the user detected by one or more sensors of the respective computer system; and
in accordance with a determination that the user of the respective computer system is not a registered user of the respective computer system, displaying the avatar with a placeholder appearance that does not represent the appearance of the user of the respective computer system, wherein the avatar moves based on movement of the user detected by one or more sensors of the respective computer system.
49. A computer system, comprising:
a display generation component;
one or more input devices;
means for detecting a request to display an avatar of a user of the respective computer system; and
in response to detecting the request to display the avatar, means for displaying an avatar of the user of the respective computer system, comprising:
in accordance with a determination that the user of the respective computer system is a registered user of the respective computer system, means for displaying the avatar having the appearance selected by the user of the respective computer system, wherein the avatar moves based on movements of the user detected by one or more sensors of the respective computer system; and
in accordance with a determination that the user of the respective computer system is not a registered user of the respective computer system, means for displaying the avatar with a placeholder appearance that does not represent an appearance of the user of the respective computer system, wherein the avatar moves based on movements of the user detected by one or more sensors of the respective computer system.
50. A method, comprising:
at a computer system in communication with a display generation component and one or more input devices:
displaying, via the display generation component, a first user interface corresponding to a first application program when the computer system is placed on a body of a first user, wherein the first user interface is displayed in a first mode with permitted access to a plurality of features associated with the first user;
detecting, via the one or more input devices, that the computer system has been removed from the body of the first user while the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user;
after detecting that the computer system has been removed from the body of the first user, detecting, via the one or more input devices, that the computer system has been placed on the body of the respective user; and
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to the first user, displaying, via the display generation component, the first user interface in the first mode with permitted access to the plurality of features associated with the first user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that a set of handoff criteria has been met, displaying, via the display generation component, the first user interface in a second mode with restricted access to one or more of the plurality of features associated with the first user.
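Claim 50 resolves a re-placement of the device into one of two presentations of the same interface: the first user gets it back with full access, while a different wearer who satisfies the handoff criteria gets it with some features restricted. A hypothetical Swift sketch of that decision; the enum and parameter names are illustrative only.

```swift
enum InterfaceMode {
    case fullAccess        // first mode: all features associated with the first user
    case restrictedAccess  // second mode: some of those features withheld
    case notShown          // the first user interface is not displayed at all
}

/// Decides how to present the first user's interface after the device is placed
/// on someone's body. Both inputs are assumed to be computed elsewhere.
func interfaceMode(biometricMatchesFirstUser: Bool, handoffCriteriaMet: Bool) -> InterfaceMode {
    if biometricMatchesFirstUser {
        return .fullAccess
    } else if handoffCriteriaMet {
        return .restrictedAccess
    } else {
        return .notShown   // e.g. show a different user's or a guest interface instead
    }
}
```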
51. The method of claim 50, wherein the set of handoff criteria comprises a first criterion that is met when the computer system does not receive a user input corresponding to a request to lock the computer system before detecting that the computer system has been placed on the body of the respective user.
52. The method of any of claims 50 to 51, wherein the set of handoff criteria includes a second criterion that is met when less than a threshold period of time has elapsed since detecting that the computer system was removed from the body of the first user.
53. The method of any of claims 50-52, wherein the set of handoff criteria includes a third criterion that is met when the computer system is not turned off or placed in sleep mode after detecting that the computer system has been removed from the body of the first user and before detecting that the computer system has been placed on the body of the respective user.
54. The method of any of claims 50-53, wherein the set of handoff criteria includes a fourth criterion that is met when the first user is a registered user.
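Claims 51 to 54 enumerate the individual handoff criteria: the owner did not ask to lock the device, it is put back on within a threshold period, it was neither turned off nor put to sleep in between, and the first user is a registered user. Read together they form one conjunction, sketched below with hypothetical field names and an assumed threshold value.

```swift
import Foundation

/// Facts recorded between removal and re-placement of the device; illustrative only.
struct HandoffContext {
    var lockRequestedByOwner: Bool
    var timeSinceRemoval: TimeInterval
    var wasTurnedOffOrSlept: Bool
    var priorWearerIsRegistered: Bool
}

/// One possible reading of the handoff criteria as a single conjunction.
/// The 300-second threshold is an assumed placeholder, not a claimed value.
func handoffCriteriaMet(_ ctx: HandoffContext, threshold: TimeInterval = 300) -> Bool {
    !ctx.lockRequestedByOwner &&            // claim 51: no request to lock the system
    ctx.timeSinceRemoval < threshold &&     // claim 52: within the threshold period
    !ctx.wasTurnedOffOrSlept &&             // claim 53: not turned off or put to sleep
    ctx.priorWearerIsRegistered             // claim 54: the first user is registered
}
```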
55. The method of any one of claims 50 to 54, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has not been met, forgoing displaying the first user interface.
56. The method of claim 55, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has not been met:
in accordance with a determination that biometric information received via the one or more input devices does not correspond to a previously registered user, displaying a user interface for an unregistered user that indicates that the respective user is not a registered user.
57. The method of any one of claims 50 to 56, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has not been met:
in accordance with a determination that biometric information received via the one or more input devices corresponds to a second registered user different from the first user, displaying a second user interface different from the first user interface, wherein the second user interface corresponds to the second registered user.
58. The method of any one of claims 50 to 57, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the set of handoff criteria has not been met and the biometric information received via the one or more input devices corresponds to the first user, displaying, via the display generation component, the first user interface in the first mode with permitted access to the plurality of features associated with the first user.
59. The method of any of claims 50-58, wherein the second mode with restricted access to one or more of the plurality of features associated with the first user further comprises maintaining one or more user settings associated with the first user.
60. The method of any one of claims 50 to 59, further comprising:
receiving navigational user input when the computer system is placed on the body of the respective user and when the first user interface is displayed; and
in response to receiving the navigational user input:
in accordance with a determination that the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user, navigating through the user interface in accordance with the navigational user input; and
in accordance with a determination that the first user interface is displayed in the second mode with restricted access to one or more of the plurality of features associated with the first user, forgoing navigating through the user interface in accordance with the navigational user input.
61. The method of any one of claims 50 to 60, further comprising:
receiving user input when the computer system is placed on the body of the respective user and when the first user interface is displayed; and
in response to receiving the user input:
in accordance with a determination that the user input corresponds to a request to access a system control, performing an operation associated with the system control;
in accordance with a determination that the user input corresponds to a request to access non-system controls and the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user, performing an operation associated with the non-system controls; and
in accordance with a determination that the user input corresponds to a request to access non-system controls and the first user interface is displayed in the second mode with restricted access to one or more features of the plurality of features associated with the first user, forgoing performing the operation associated with the non-system controls.
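Claim 61 keeps system controls reachable in both modes and gates everything else on whether access is restricted. A minimal sketch under that reading; the control categories and function name are invented.

```swift
enum ControlKind { case system, nonSystem }

/// Whether to perform the operation a requested control maps to, given that the
/// first user interface may currently be shown with restricted access.
func shouldPerform(_ kind: ControlKind, restrictedAccess: Bool) -> Bool {
    switch kind {
    case .system:
        return true                 // system controls remain available in either mode
    case .nonSystem:
        return !restrictedAccess    // otherwise only when access is not restricted
    }
}
```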
62. The method of any one of claims 50 to 61, further comprising:
receiving user input when the computer system is placed on the body of the respective user and when the first user interface is displayed; and
in response to receiving the user input:
in accordance with a determination that the user input corresponds to a request to access one or more accessibility settings, performing an operation associated with the one or more accessibility settings;
in accordance with a determination that the user input corresponds to a request to access a non-accessibility setting and the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user, performing an operation associated with the non-accessibility setting; and
in accordance with a determination that the user input corresponds to a request to access a non-accessibility setting and the first user interface is displayed in the second mode with restricted access to one or more features of the plurality of features associated with the first user, forgoing performing the operation associated with the non-accessibility setting.
63. The method of any one of claims 50 to 62, further comprising:
receiving user input corresponding to a request to enable one or more accessibility settings when the computer system is placed on the body of the first user and when the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user;
enabling one or more accessibility settings in response to the user input corresponding to a request to enable the one or more accessibility settings;
detecting, via the one or more input devices, that the computer system has been removed from the body of the first user when the one or more accessibility settings are enabled;
detecting, via the one or more input devices, that the computer system has been placed on the body of a second respective user after detecting that the computer system has been removed from the body of the first user; and
in response to detecting that the computer system has been placed on the body of the second respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to the first user, displaying, via the display generation component, the first user interface in the first mode with permitted access to the plurality of features associated with the first user while maintaining the one or more accessibility settings in an enabled state, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has been met, displaying, via the display generation component, the first user interface in the second mode with restricted access to one or more of the plurality of features associated with the first user while maintaining the one or more accessibility settings in the enabled state.
64. The method of any one of claims 50 to 63, further comprising:
receiving information when the first user interface is displayed; and
in response to receiving the information:
in accordance with a determination that the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user, providing a notification corresponding to the received information; and
in accordance with a determination that the first user interface is displayed in the second mode with restricted access to one or more of the plurality of features associated with the first user, forgoing providing the notification corresponding to the received information.
65. The method of claim 64, wherein the notification not provided during display of the first user interface in the second mode with restricted access is provided on an external computer system different from the computer system and associated with the first user.
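Claims 64 and 65 withhold notifications on the device while it is in the restricted-access mode and let them surface instead on an external device of the first user. A hypothetical routing sketch; the destination names are invented.

```swift
enum NotificationDestination {
    case thisDevice           // present the notification on the device being worn
    case ownersOtherDevice    // surface it on an external device of the first user
}

/// Routes an incoming notification depending on whether the first user interface
/// is currently shown with restricted access.
func route(restrictedAccess: Bool) -> NotificationDestination {
    restrictedAccess ? .ownersOtherDevice : .thisDevice
}
```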
66. The method of any one of claims 50 to 65, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user:
switching eye tracking calibration settings from a first set of eye tracking calibration settings specific to the first user to a second set of eye tracking calibration settings different from the first set of eye tracking calibration settings.
67. The method of any one of claims 50 to 66, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user:
switching hand tracking calibration settings from a first set of hand tracking calibration settings specific to the first user to a second set of hand tracking calibration settings different from the first set of hand tracking calibration settings.
68. The method of any of claims 50-67, wherein the first user interface is displayed in the second mode with restricted access on a first display portion of the computer system, and the method further comprises:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has been met, and the computer system is operating in the second mode with restricted access to one or more of the plurality of features associated with the first user:
displaying, on a second display portion of the computer system that is different from the first display portion, an indication of what content is being displayed on the first display portion.
69. The method of any one of claims 50 to 68, further comprising:
displaying, on an external portion of the computer system, an indication of the currently logged-in user.
70. The method of any one of claims 50 to 69, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has been met, transmitting, to a second computer system of the first user, a notification that the computer system is operating in the second mode with restricted access to one or more of the plurality of features associated with the first user.
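Claim 70 has the device inform the first user's second computer system that it is now operating in the restricted mode. Sketched below as a hypothetical message send; the payload fields, protocol, and JSON encoding are assumptions, not anything the claim specifies.

```swift
import Foundation

/// Illustrative payload sent to the first user's second computer system.
struct HandoffNotice: Codable {
    let deviceName: String
    let operatingMode: String      // e.g. "restricted"
    let timestamp: Date
}

/// Stand-in for whatever channel actually reaches the other device.
protocol CompanionChannel {
    func send(_ data: Data) throws
}

/// Tells the first user's other computer that this device entered restricted mode.
func notifyOwnersSecondDevice(over channel: CompanionChannel, deviceName: String) throws {
    let notice = HandoffNotice(deviceName: deviceName,
                               operatingMode: "restricted",
                               timestamp: Date())
    let payload = try JSONEncoder().encode(notice)
    try channel.send(payload)
}
```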
71. The method of any one of claims 50 to 70, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and the set of handoff criteria has been met, transmitting, to a second computer system of the first user, a visual indication of content being displayed by the computer system while the computer system is operating in the second mode with restricted access to one or more of the plurality of features associated with the first user.
72. The method of any one of claims 50 to 71, further comprising:
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that the set of handoff criteria has been met, displaying a visual indication that the computer system is operating in the second mode with restricted access concurrently with the first user interface in the second mode with restricted access.
73. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for performing the method of any of claims 50-72.
74. A computer system, comprising:
a display generation component;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 50-72.
75. A computer system, comprising:
a display generation component;
one or more input devices; and
means for performing the method of any one of claims 50 to 72.
76. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more input devices, the one or more programs comprising instructions for:
displaying, via the display generation component, a first user interface corresponding to a first application program when the computer system is placed on a body of a first user, wherein the first user interface is displayed in a first mode with permitted access to a plurality of features associated with the first user;
detecting, via the one or more input devices, that the computer system has been removed from the body of the first user while the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user;
after detecting that the computer system has been removed from the body of the first user, detecting, via the one or more input devices, that the computer system has been placed on the body of the respective user;
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to the first user, displaying, via the display generation component, the first user interface in the first mode with permitted access to the plurality of features associated with the first user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that a set of handoff criteria has been met, displaying, via the display generation component, the first user interface in a second mode with restricted access to one or more of the plurality of features associated with the first user.
77. A computer system, comprising:
a display generation component;
one or more input devices;
one or more processors; and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying, via the display generation component, a first user interface corresponding to a first application program when the computer system is placed on a body of a first user, wherein the first user interface is displayed in a first mode with permitted access to a plurality of features associated with the first user;
detecting, via the one or more input devices, that the computer system has been removed from the body of the first user while the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user;
after detecting that the computer system has been removed from the body of the first user, detecting, via the one or more input devices, that the computer system has been placed on the body of the respective user;
in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to the first user, displaying, via the display generation component, the first user interface in the first mode with permitted access to the plurality of features associated with the first user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that a set of handoff criteria has been met, displaying, via the display generation component, the first user interface in a second mode with restricted access to one or more of the plurality of features associated with the first user.
78. A computer system, comprising:
a display generation component;
one or more input devices;
means for displaying, via the display generation component, a first user interface corresponding to a first application program when the computer system is placed on a body of a first user, wherein the first user interface is displayed in a first mode with permitted access to a plurality of features associated with the first user;
means for detecting, via the one or more input devices, that the computer system has been removed from the body of the first user while the first user interface is displayed in the first mode with permitted access to the plurality of features associated with the first user;
means for detecting, via the one or more input devices, that the computer system has been placed on the body of the respective user after detecting that the computer system has been removed from the body of the first user;
means for, in response to detecting that the computer system has been placed on the body of the respective user:
in accordance with a determination that biometric information received via the one or more input devices corresponds to the first user, displaying, via the display generation component, the first user interface in the first mode with permitted access to the plurality of features associated with the first user, and
in accordance with a determination that the biometric information received via the one or more input devices does not correspond to the first user and that a set of handoff criteria has been met, displaying, via the display generation component, the first user interface in a second mode with restricted access to one or more of the plurality of features associated with the first user.
CN202280015964.6A 2021-02-19 2022-02-17 User interface and device settings based on user identification Pending CN116868191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311225588.6A CN117032465A (en) 2021-02-19 2022-02-17 User interface and device settings based on user identification

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/151,597 2021-02-19
US17/582,902 US20220269333A1 (en) 2021-02-19 2022-01-24 User interfaces and device settings based on user identification
US17/582,902 2022-01-24
PCT/US2022/016804 WO2022178132A1 (en) 2021-02-19 2022-02-17 User interfaces and device settings based on user identification

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311225588.6A Division CN117032465A (en) 2021-02-19 2022-02-17 User interface and device settings based on user identification

Publications (1)

Publication Number Publication Date
CN116868191A (en) 2023-10-10

Family

ID=88223866

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202280015964.6A Pending CN116868191A (en) 2021-02-19 2022-02-17 User interface and device settings based on user identification
CN202311225588.6A Pending CN117032465A (en) 2021-02-19 2022-02-17 User interface and device settings based on user identification

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311225588.6A Pending CN117032465A (en) 2021-02-19 2022-02-17 User interface and device settings based on user identification

Country Status (1)

Country Link
CN (2) CN116868191A (en)

Also Published As

Publication number Publication date
CN117032465A (en) 2023-11-10

Similar Documents

Publication Publication Date Title
US20220269333A1 (en) User interfaces and device settings based on user identification
US11995230B2 (en) Methods for presenting and sharing content in an environment
US20220262080A1 (en) Interfaces for presenting avatars in three-dimensional environments
US20230336865A1 (en) Device, methods, and graphical user interfaces for capturing and displaying media
US20230384907A1 (en) Methods for relative manipulation of a three-dimensional environment
US20240020371A1 (en) Devices, methods, and graphical user interfaces for user authentication and device management
US20240094882A1 (en) Gestures for selection refinement in a three-dimensional environment
US20240077937A1 (en) Devices, methods, and graphical user interfaces for controlling avatars within three-dimensional environments
US20230343049A1 (en) Obstructed objects in a three-dimensional environment
US20230316674A1 (en) Devices, methods, and graphical user interfaces for modifying avatars in three-dimensional environments
KR20230043749A (en) Adaptive user enrollment for electronic devices
CN116868191A (en) User interface and device settings based on user identification
EP4295251A1 (en) User interfaces and device settings based on user identification
US20240104859A1 (en) User interfaces for managing live communication sessions
US20240103678A1 (en) Devices, methods, and graphical user interfaces for interacting with extended reality experiences
US20240104861A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
US20240152244A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
US20240104819A1 (en) Representations of participants in real-time communication sessions
US20240103677A1 (en) User interfaces for managing sharing of content in three-dimensional environments
US20240103686A1 (en) Methods for controlling and interacting with a three-dimensional environment
US20240036699A1 (en) Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment
US20240103617A1 (en) User interfaces for gaze tracking enrollment
US20240118746A1 (en) User interfaces for gaze tracking enrollment
US20230171484A1 (en) Devices, methods, and graphical user interfaces for generating and displaying a representation of a user
US20240104871A1 (en) User interfaces for capturing media and manipulating virtual objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination