CN117120956A - System, method, and graphical user interface for automated measurement in an augmented reality environment

Info

Publication number
CN117120956A
CN117120956A (application CN202280015105.7A)
Authority
CN
China
Prior art keywords
user
cameras
body part
representation
displaying
Prior art date
Legal status
Pending
Application number
CN202280015105.7A
Other languages
Chinese (zh)
Inventor
A·W·德赖尔
G·耶基斯
L·K·福赛尔
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from US 17/576,735 (published as US 2022/0261066 A1)
Application filed by Apple Inc
Priority to CN202311417788.1A (published as CN 117472182 A)
Priority claimed from PCT/US2022/012856 (published as WO 2022/173561 A1)
Publication of CN117120956A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Abstract

A computer system displays a visual cue to move a body part into the field of view of one or more cameras. The computer system detects a portion of the user's body that is in the field of view of the one or more cameras and corresponds to the body part. The computer system displays a representation of the portion of the user's body, including: in accordance with a determination that the portion of the user's body meets a first criterion, displaying, via the display device, the representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency; and in accordance with a determination that the portion of the user's body fails to meet the first criterion, displaying the representation of the portion of the user's body with a second transparency.

Description

System, method, and graphical user interface for automated measurement in an augmented reality environment
Related patent application
This patent application claims priority to U.S. Provisional Patent Application No. 63/149,553, filed in February 2021, and to U.S. Patent Application No. 17/576,735, filed in January 2022, each of which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates generally to computer systems for augmented reality, including but not limited to electronic devices for making measurements using virtual objects displayed in an augmented reality environment.
Background
In recent years, the development of computer systems for augmented reality has increased significantly. However, methods and interfaces for interacting with environments that include at least some virtual elements (e.g., augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome and inefficient.
Conventional methods of making measurements using augmented reality do not provide guidance to help the user move into the correct position for measuring a body part, and do not provide dynamic positioning guidance as the user moves into position. In some cases, conventional methods of storing measurements obtained using augmented reality do not easily allow a device to share measurement information with another device. In addition, these methods take longer than necessary, wasting energy. This latter consideration is particularly important in battery-powered devices.
Disclosure of Invention
Accordingly, there is a need for computer systems with faster, more efficient methods and interfaces for making measurements using an augmented reality environment. Such methods and interfaces optionally supplement or replace conventional methods for making measurements using an augmented reality environment. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user and produce a more efficient human-machine interface. For battery-operated devices, such methods and interfaces conserve power and increase the time between battery charges.
The above-described drawbacks and other problems associated with user interfaces for virtual/augmented reality are reduced or eliminated by the disclosed computer systems. In some embodiments, the computer system includes a desktop computer. In some embodiments, the computer system is portable (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system includes a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the computer system has (and/or is in communication with) a touch pad. In some embodiments, the computer system has (and/or is in communication with) a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the computer system has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some implementations, the user interacts with the GUI in part through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, these functions optionally include game playing, image editing, drawing, presenting, word processing, spreadsheet creation, making phone calls, video conferencing, sending and receiving e-mail, instant messaging, workout support, digital photography, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
According to some embodiments, a method is performed at a computer system in communication with a display device and one or more cameras. The method includes displaying, in a first area of a first user interface, a visual cue to move a body part into a field of view of the one or more cameras. The method includes, while displaying the visual cue to move the body part into the field of view of the one or more cameras: detecting, using the one or more cameras, a portion of the user's body that is in the field of view of the one or more cameras and corresponds to the body part; and in response to detecting the portion of the user's body, displaying a representation of the portion of the user's body. The method further includes: in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras meets a first location criterion, displaying, via the display device, the representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency; and in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras fails to meet the first criterion, displaying the representation of the portion of the user's body with a second transparency indicating that the first criterion has not been met.
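To make the transparency behavior concrete, the following is a minimal Swift sketch of the kind of check involved; the type names, the centering/coverage reading of the first location criterion, and the threshold values are assumptions for illustration and are not taken from the patent.

```swift
// Hypothetical types; the patent text does not define these names or thresholds.
struct BodyPartObservation {
    var centerX: Double       // normalized horizontal position of the detected body part (0...1)
    var centerY: Double       // normalized vertical position of the detected body part (0...1)
    var coverage: Double      // fraction of the target region that the body part fills (0...1)
}

enum RepresentationTransparency {
    case first    // e.g., more opaque: the positioning criterion is met
    case second   // e.g., more transparent: the criterion has not yet been met
}

// One possible reading of the first location criterion: the body part must be roughly
// centered in the camera frame and fill enough of the target region.
func transparency(for observation: BodyPartObservation,
                  maxOffset: Double = 0.1,
                  minCoverage: Double = 0.3) -> RepresentationTransparency {
    let dx = observation.centerX - 0.5
    let dy = observation.centerY - 0.5
    let offset = (dx * dx + dy * dy).squareRoot()
    let meetsCriterion = offset <= maxOffset && observation.coverage >= minCoverage
    return meetsCriterion ? .first : .second
}
```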
According to some embodiments, a method is performed at a computer system in communication with a display device and one or more cameras. The method includes displaying, in a user interface, a first representation of a body part in a field of view of the one or more cameras. The method includes detecting movement of the body part using the one or more cameras, wherein the displayed first representation of the body part is updated in accordance with the movement of the body part. The method includes, when displaying the first representation of the body part, displaying an indicator at a fixed position relative to the first representation of the body part. The indicator is displayed at a first location in the user interface overlaying at least a portion of the representation of the body part. The indicator is updated in accordance with the movement of the body part. The indicator includes an indication of a suggested direction of movement of the body part.
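A rough Swift sketch of how an indicator fixed relative to the body-part representation could track it and suggest a direction of movement follows; the fixed offset, the tolerance, and the axis-based direction choice are illustrative assumptions rather than the described method.

```swift
enum SuggestedDirection { case up, down, left, right, hold }

// The indicator is drawn at a fixed offset from the body-part representation,
// so it moves whenever the detected body part moves.
func indicatorPosition(bodyPartX: Double, bodyPartY: Double,
                       offsetX: Double = 0.0, offsetY: Double = -0.15) -> (x: Double, y: Double) {
    (x: bodyPartX + offsetX, y: bodyPartY + offsetY)
}

// Suggested direction of movement: toward the target, along the axis with the larger remaining error.
func suggestedDirection(currentX: Double, currentY: Double,
                        targetX: Double, targetY: Double,
                        tolerance: Double = 0.05) -> SuggestedDirection {
    let dx = targetX - currentX
    let dy = targetY - currentY
    if abs(dx) <= tolerance && abs(dy) <= tolerance { return .hold }
    if abs(dx) > abs(dy) { return dx > 0 ? .right : .left }
    return dy > 0 ? .down : .up
}
```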
According to some embodiments, a method is performed at a computer system in communication with a display device and one or more cameras. The method includes detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras. The method includes scanning the portion of the user's body in the field of view of the one or more cameras to determine a measurement of the portion of the user's body in the field of view of the one or more cameras. The method also includes, after scanning the portion of the user's body, generating, based on the measurement of the portion of the user's body, a machine-readable code that includes information identifying one or more sizing parameters of a wearable object or describing the measurement of the portion of the user's body.
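One way such a machine-readable payload could be assembled is sketched below in Swift; the wrist-circumference field, the JSON keys, and the size-breakpoint table are hypothetical, and encoding the resulting string into an actual QR code or other symbology is outside the scope of the sketch.

```swift
import Foundation

// Hypothetical measurement result; the patent does not specify this structure.
struct WristMeasurement {
    var circumferenceMM: Double
}

// Assumed mapping from wrist circumference to a band-size index (breakpoints are invented).
func bandSize(forCircumferenceMM c: Double) -> Int {
    let breakpoints: [Double] = [140, 150, 160, 170, 180, 190, 200]
    return breakpoints.filter { c > $0 }.count + 1
}

// Builds a text payload that could then be encoded into a machine-readable code (e.g., a QR code).
func codePayload(for measurement: WristMeasurement) throws -> String {
    let payload: [String: String] = [
        "wrist_circumference_mm": String(format: "%.1f", measurement.circumferenceMM),
        "suggested_band_size": String(bandSize(forCircumferenceMM: measurement.circumferenceMM))
    ]
    let data = try JSONSerialization.data(withJSONObject: payload, options: [.sortedKeys])
    return String(data: data, encoding: .utf8) ?? "{}"
}
```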
According to some embodiments, a method is performed at a computer system in communication with a display device and one or more cameras. The method includes displaying a first visual cue at a first fixed location within a first user interface indicating a location for moving a body part into a field of view of the one or more cameras. The method includes, while displaying the first visual cue indicating the location for moving the body part into the field of view of the one or more cameras, detecting a portion of the user's body in the field of view of the one or more cameras and corresponding to the body part using the one or more cameras. The method includes, in response to detecting the portion of the user's body in the field of view of the one or more cameras: displaying a representation of the portion of the user's body; and displaying a second visual cue fixed at a predefined location relative to the representation of the portion of the user's body, wherein a location of the second visual cue relative to a location of the first visual cue indicates movement of the body part required to meet a body part positioning precondition.
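The relationship between the two visual cues reduces to a simple offset computation, as in this minimal Swift sketch; the tolerance value and the function names are assumptions made for illustration.

```swift
// The first cue is fixed in the user interface; the second cue tracks the detected body part.
// The vector between them is the movement still needed to satisfy the positioning requirement.
typealias CuePoint = (x: Double, y: Double)

func requiredMovement(firstCue: CuePoint, secondCue: CuePoint) -> CuePoint {
    (x: firstCue.x - secondCue.x, y: firstCue.y - secondCue.y)
}

func positioningSatisfied(firstCue: CuePoint, secondCue: CuePoint, tolerance: Double = 0.03) -> Bool {
    let delta = requiredMovement(firstCue: firstCue, secondCue: secondCue)
    return (delta.x * delta.x + delta.y * delta.y).squareRoot() <= tolerance
}
```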
According to some embodiments, a computer system includes a display generating component (e.g., a display, projector, head-mounted display, head-up display, etc.), one or more cameras (e.g., video cameras that continuously, or repeatedly at regular intervals, provide a live preview of at least a portion of the content within the field of view of the cameras and optionally generate video output including one or more streams of image frames capturing content within the field of view of the cameras), and one or more input devices (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch screen display that also serves as a display generating component, a mouse, a joystick, a wand controller, and/or a camera that tracks the position of one or more features of the user, such as the user's hand), optionally one or more gesture sensors, optionally one or more sensors to detect the intensity of contacts with the touch-sensitive surface, optionally one or more tactile output generators, one or more processors, and memory storing one or more programs (and/or is in communication with these components); the one or more programs are configured to be executed by the one or more processors, and the one or more programs include instructions for performing, or causing performance of, the operations of any of the methods described herein. According to some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by a computer system that includes a display generating component, one or more cameras, one or more input devices, optionally one or more gesture sensors, optionally one or more sensors to detect the intensity of contacts with a touch-sensitive surface, and optionally one or more tactile output generators (and/or is in communication with these components), cause the computer system to perform, or cause performance of, the operations of any of the methods described herein. According to some embodiments, a graphical user interface on a computer system that includes a display generating component, one or more cameras, one or more input devices, optionally one or more gesture sensors, optionally one or more sensors to detect the intensity of contacts with a touch-sensitive surface, optionally one or more tactile output generators, memory, and one or more processors to execute one or more programs stored in the memory (and/or is in communication with these components) includes one or more of the elements displayed in any of the methods described herein, which are updated in response to inputs, as described in any of the methods described herein. According to some embodiments, a computer system includes (and/or is in communication with) the following components: a display generating component, one or more cameras, one or more input devices, optionally one or more gesture sensors, optionally one or more sensors to detect the intensity of contacts with a touch-sensitive surface, optionally one or more tactile output generators, and means for performing, or causing performance of, the operations of any of the methods described herein.
According to some embodiments, an information processing apparatus for use in a computer system comprising a display generating component, one or more cameras, one or more input devices, optionally one or more gesture sensors, optionally one or more sensors for detecting intensity of contact with a touch-sensitive surface, and optionally one or more tactile output generators (and/or in communication with these components) comprises means for performing or causing to be performed the operations of any of the methods described herein.
Accordingly, a computer system having (and/or in communication with) a display generating component, one or more cameras, one or more input devices, optionally one or more gesture sensors, optionally one or more sensors to detect intensity of contacts with a touch-sensitive surface, and optionally one or more tactile output generators is provided with improved methods and interfaces for making measurements using an augmented reality environment, thereby improving the effectiveness, efficiency, and user satisfaction of such a computer system. Such methods and interfaces may supplement or replace conventional methods for making measurements using an augmented reality environment.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
Fig. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments.
Fig. 2A illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 2B illustrates a portable multifunction device with an optical sensor and a depth sensor (e.g., a time-of-flight sensor) according to some embodiments.
Fig. 3A is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 3B-3C are block diagrams of exemplary computer systems according to some embodiments.
Fig. 4A illustrates an exemplary user interface for an application menu on a portable multifunction device in accordance with some embodiments.
Fig. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 5A-5P illustrate an exemplary user interface for initiating a process for measuring a body part (e.g., hand or wrist) of a user and prompting the user to position the body part in the field of view of one or more cameras of a computer system, according to some embodiments.
Fig. 6A-6N illustrate an exemplary user interface for prompting a user to move a body part to capture one or more images, according to some embodiments.
Fig. 7A-7T illustrate an exemplary user interface for determining a measurement of a body part of a user, according to some embodiments.
Fig. 8A-8F illustrate an exemplary user interface for storing measurements of a body part of a user, according to some embodiments.
Fig. 9A-9C are flowcharts of a process for providing visual feedback to a user to indicate a correct position for measurement, according to some embodiments.
Fig. 10A-10D are flowcharts of a process for providing a virtualized progress indicator for measuring a portion of a user's body, according to some embodiments.
Fig. 11A-11C are flowcharts of processes for generating machine readable code to store information about measurements, according to some embodiments.
Fig. 12A-12D are flowcharts of a process for prompting a user to adjust the position of a body part of the user into a correct position for measurement, according to some embodiments.
Detailed Description
As described above, an augmented reality environment is useful for measuring objects in the physical environment (including parts of a user's body) by enabling visual indicators to be superimposed on the physical environment, where the visual indicators help the user place the part of the body to be measured in the correct position and visually indicate progress toward completing the measurement of that part of the user's body. Conventional methods of making measurements using augmented reality do not provide guidance that helps the user move to the correct location for the measurement, nor do they provide dynamic guidance as the user moves into position. In some cases, conventional methods of storing measurements obtained using augmented reality do not easily allow a device to share measurement information with another device.
The systems, methods, and GUIs described herein improve user interface interactions with virtual/augmented reality environments in a variety of ways. For example, they make it easier to measure a portion of the user's body by providing automatic detection of characteristics of the portion of the user's body, providing a visual indication of the proper position of the portion of the user's body to be measured (e.g., relative to the device or system making the measurement), and improved guidance for showing the user how the measurement process progresses as the user moves.
Next, Figs. 1A-1B, 2A-2B, and 3A-3C illustrate exemplary devices. Figs. 4A-4B, 5A-5P, 6A-6N, 7A-7T, and 8A-8F illustrate exemplary user interfaces for measuring a portion of a user's body in an augmented reality environment. Figs. 9A-9C are a flowchart of a method of providing visual feedback to a user to indicate a correct position for measurement. Figs. 10A-10D are a flowchart of a method of providing a virtualized progress indicator for measuring a portion of a user's body. Figs. 11A-11C are a flowchart of a method of generating machine-readable code storing information about a measurement. Figs. 12A-12D are a flowchart of a method of prompting a user to adjust the position of a body part of the user into the correct position for measurement. The user interfaces in Figs. 5A-5P, 6A-6N, 7A-7T, and 8A-8F are used to illustrate the processes in Figs. 9A-9C, 10A-10D, 11A-11C, and 12A-12D.
Exemplary apparatus
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. Numerous specific details are set forth in the following detailed description in order to provide a thorough understanding of the various described embodiments. It will be apparent, however, to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements in some cases, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first contact may be named a second contact, and similarly, a second contact may be named a first contact without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact unless the context clearly indicates otherwise.
The terminology used in the description of the various illustrated embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
A computer system for virtual/augmented reality includes electronic devices that generate a virtual/augmented reality environment. Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communications device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptop or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads), are optionally used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touch pad) that also includes, or is in communication with, one or more cameras.
In the following discussion, a computer system is described that includes an electronic device having a display and a touch-sensitive surface (and/or in communication with these components). However, it should be understood that the computer system may alternatively include one or more other physical user interface devices, such as a physical keyboard, a mouse, a joystick, a stylus controller, and/or a camera that tracks the position of one or more features of the user, such as the user's hand.
The device typically supports various applications such as one or more of the following: a game application, a note taking application, a drawing application, a presentation application, a word processing application, a spreadsheet application, a telephone application, a video conferencing application, an email application, an instant messaging application, a workout support application, a photograph management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications executing on the device optionally use at least one generic physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed by the device are optionally adjusted and/or changed for different applications and/or within the respective applications. In this way, the common physical architecture of the devices (such as the touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and transparent to the user.
Attention is now directed to embodiments of a portable device having a touch-sensitive display. Fig. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes referred to as a "touch screen" for convenience and is sometimes referred to simply as a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external ports 124. The device 100 optionally includes one or more optical sensors 164 (e.g., as part of one or more cameras). The device 100 optionally includes one or more intensity sensors 165 for detecting the intensity of contacts on the device 100 (e.g., on a touch-sensitive surface, such as the touch-sensitive display system 112 of the device 100). The device 100 optionally includes one or more tactile output generators 163 for generating tactile outputs on the device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as the touch-sensitive display system 112 of the device 100 or the touch pad 355 of the device 300). These components optionally communicate via one or more communication buses or signal lines 103.
As used in this specification and in the claims, the term "haptic output" refers to a physical displacement of a device relative to a previous location of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a centroid of the device, which will be detected by the user with the user's sense of touch. For example, in situations where the device or a component of the device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other portion of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or touch pad) is optionally interpreted by a user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation, such as a "down click" or "up click," even when the physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement does not move. As another example, movement of the touch-sensitive surface may optionally be interpreted or sensed by a user as "roughness" of the touch-sensitive surface, even when the smoothness of the touch-sensitive surface is unchanged. While such interpretation of touches by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touches are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., a "down click," an "up click," "roughness"), unless stated otherwise, the haptic output generated corresponds to a physical displacement of the device or component thereof that would generate that sensory perception for a typical (or average) user. Using haptic output to provide haptic feedback to a user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which further reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the device 100 is merely one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 1A are implemented in hardware, software, firmware, or any combination thereof (including one or more signal processing circuits and/or application specific integrated circuits).
Memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 102 by other components of the device 100, such as the CPU 120 and the peripheral interface 118, is optionally controlled by a memory controller 122.
Peripheral interface 118 may be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data.
In some embodiments, peripheral interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
The RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. The RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuitry 108 optionally communicates via wireless communication with networks such as the Internet (also known as the World Wide Web (WWW)), intranets, and/or wireless networks such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), and with other devices. The wireless communication optionally uses any of a variety of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to speaker 111. The speaker 111 converts electrical signals into sound waves that are audible to humans. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2A). The headset jack provides an interface between the audio circuit 110 and removable audio input/output peripherals such as output-only headphones or a headset having both an output (e.g., a monaural or binaural) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as the touch-sensitive display system 112 and other input or control devices 116, to the peripheral device interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. One or more input controllers 160 receive electrical signals from/transmit electrical signals to other input or control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and the like. In some alternative implementations, one or more input controllers 160 are optionally coupled to (or not coupled to) any of the following: a keyboard, an infrared port, a USB port, a stylus, and/or a pointing device such as a mouse. One or more buttons (e.g., 208 in fig. 2A) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2A).
The touch sensitive display system 112 provides an input interface and an output interface between the device and the user. The display controller 156 receives electrical signals from and/or transmits electrical signals to the touch sensitive display system 112. The touch sensitive display system 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object. As used herein, the term "affordance" refers to a user-interactive graphical user interface object (e.g., a graphical user interface object configured to respond to input directed to the graphical user interface object). Examples of user interactive graphical user interface objects include, but are not limited to, buttons, sliders, icons, selectable menu items, switches, hyperlinks, or other user interface controls.
The touch-sensitive display system 112 has a touch-sensitive surface, sensor, or set of sensors that receives input from a user based on haptic and/or tactile contact. The touch-sensitive display system 112 and the display controller 156 (along with any associated modules and/or sets of instructions in the memory 102) detect contact (and any movement or interruption of the contact) on the touch-sensitive display system 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on the touch-sensitive display system 112. In some implementations, the point of contact between the touch-sensitive display system 112 and the user corresponds to a user's finger or stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or interruption thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, California.
The touch sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some implementations, the touch screen video resolution exceeds 400dpi (e.g., 500dpi, 800dpi, or greater). The user optionally uses any suitable object or appendage, such as a stylus, finger, or the like, to contact the touch sensitive display system 112. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor location or command for performing the action desired by the user.
In some embodiments, the device 100 optionally includes a touch pad for activating or deactivating a particular function in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike the touch screen, does not display visual output. The touch pad is optionally a touch-sensitive surface separate from the touch-sensitive display system 112 or an extension of the touch-sensitive surface formed by the touch screen.
The apparatus 100 also includes a power system 162 for powering the various components. The power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The device 100 optionally also includes one or more optical sensors 164 (e.g., as part of one or more cameras). FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The one or more optical sensors 164 optionally include a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The one or more optical sensors 164 receive light projected through the one or more lenses from the environment and convert the light into data representing an image. In conjunction with imaging module 143 (also referred to as a camera module), one or more optical sensors 164 optionally capture still images and/or video. In some embodiments, the optical sensor is located on the back of the device 100 opposite the touch sensitive display system 112 on the front of the device, enabling the touch screen to be used as a viewfinder for still image and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device to acquire an image of the user (e.g., for self-timer shooting, for video conferencing while the user views other video conference participants on a touch screen, etc.). All references to images captured by one or more cameras of device 100 should be understood to optionally include depth information from one or more depth sensors (e.g., one or more time-of-flight sensors, structured light sensors (also referred to as structured light scanners), etc.) of device 100 to facilitate measurement of objects in the field of view of the one or more cameras.
The apparatus 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The one or more contact strength sensors 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other strength sensors (e.g., sensors for measuring force (or pressure) of a contact on a touch-sensitive surface). One or more contact strength sensors 165 receive contact strength information (e.g., pressure information or a surrogate for pressure information) from the environment. In some implementations, at least one contact intensity sensor is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on a rear of the device 100 opposite the touch sensitive display system 112 located on a front of the device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled to the input controller 160 in the I/O subsystem 106. In some implementations, the proximity sensor turns off and disables the touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
The device 100 optionally further comprises one or more tactile output generators 163. FIG. 1A shows a haptic output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. In some embodiments, the one or more tactile output generators 163 include one or more electroacoustic devices such as speakers or other audio components; and/or electromechanical devices for converting energy into linear motion such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating means (e.g., means for converting an electrical signal into a tactile output on a device). The one or more haptic output generators 163 receive haptic feedback generation instructions from the haptic feedback module 133 and generate haptic output on the device 100 that can be perceived by a user of the device 100. In some embodiments, at least one tactile output generator is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., inward/outward of the surface of device 100) or laterally (e.g., backward and forward in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on a rear of the device 100 opposite the touch sensitive display system 112 located on a front of the device 100.
The device 100 optionally further includes one or more accelerometers 167, gyroscopes 168, and/or magnetometers 169 (e.g., as part of an Inertial Measurement Unit (IMU)) for obtaining information regarding the pose (e.g., position and orientation or posture) of the device. Fig. 1A shows sensors 167, 168, and 169 coupled to peripheral interface 118. Alternatively, sensors 167, 168, and 169 are optionally coupled to input controller 160 in I/O subsystem 106. In some implementations, information is displayed in a portrait view or a landscape view on a touch screen display based on analysis of data received from the one or more accelerometers. The device 100 optionally includes a GPS (or GLONASS or other global navigation system) receiver for obtaining information about the location of the device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a haptic feedback module (or instruction set) 133, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application program (or instruction set) 136. Further, in some embodiments, memory 102 stores device/global internal state 157, as shown in fig. 1A and 3. The device/global internal state 157 includes one or more of the following: an active application state indicating which applications (if any) are currently active; display status, which indicates what applications, views, or other information occupy various regions of the touch-sensitive display system 112; sensor status, which includes information obtained from various sensors of the device and other input or control devices 116; and location and/or position information regarding a pose (e.g., position and/or posture) of the device.
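As a rough illustration only, the device/global internal state 157 described above could be modeled as a small data structure like the following Swift sketch; the field names and types are assumptions, not the patent's definitions.

```swift
// Hypothetical model of device/global internal state 157.
struct DeviceGlobalState {
    var activeApplications: [String]                 // which applications, if any, are currently active
    var displayState: [String: String]               // which application, view, or other info occupies each display region
    var sensorState: [String: Double]                // latest readings from sensors and other input or control devices
    var devicePose: (roll: Double, pitch: Double, yaw: Double)   // location and/or attitude information about the device
}
```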
Operating system 126 (e.g., iOS, Android, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between the various hardware and software components.
The communication module 128 facilitates communication with other devices via one or more external ports 124, and also includes various software components for handling data received by the RF circuitry 108 and/or the external port 124. The external port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly via a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a USB Type-C connector that is the same as, or similar to and/or compatible with, the USB Type-C connector used in some electronic devices from Apple Inc. of Cupertino, California.
The contact/motion module 130 optionally detects contact with the touch-sensitive display system 112 (in conjunction with the display controller 156) and other touch-sensitive devices (e.g., a touch pad or physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to contact detection (e.g., by a finger or stylus), such as determining whether a contact has occurred (e.g., detecting a finger press event), determining the strength of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking movement across the touch-sensitive surface (e.g., detecting one or more finger drag events), and determining whether the contact has stopped (e.g., detecting a finger lift event or contact break). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining the speed (magnitude), velocity (magnitude and direction), and/or acceleration (change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single point contacts (e.g., single-finger contacts or stylus contacts) or simultaneous multi-point contacts (e.g., "multi-touch"/multi-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on the touch pad.
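The speed, velocity, and acceleration of a contact point can be estimated from a series of timestamped contact samples, as in the following minimal Swift sketch; the sample type, the finite-difference estimates, and the guard against zero time deltas are illustrative assumptions, not the module's actual implementation.

```swift
import Foundation

// Hypothetical contact sample produced by the touch-sensitive surface.
struct ContactSample {
    var x: Double
    var y: Double
    var time: TimeInterval
}

// Velocity has magnitude and direction.
func velocity(from a: ContactSample, to b: ContactSample) -> (vx: Double, vy: Double) {
    let dt = max(b.time - a.time, 1e-6)              // guard against a zero time delta
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
}

// Speed is the magnitude of the velocity.
func speed(from a: ContactSample, to b: ContactSample) -> Double {
    let v = velocity(from: a, to: b)
    return (v.vx * v.vx + v.vy * v.vy).squareRoot()
}

// Acceleration is the change in velocity over time, estimated from the last three samples.
func acceleration(samples: [ContactSample]) -> (ax: Double, ay: Double)? {
    guard samples.count >= 3 else { return nil }
    let v1 = velocity(from: samples[samples.count - 3], to: samples[samples.count - 2])
    let v2 = velocity(from: samples[samples.count - 2], to: samples[samples.count - 1])
    let dt = max(samples[samples.count - 1].time - samples[samples.count - 2].time, 1e-6)
    return ((v2.vx - v1.vx) / dt, (v2.vy - v1.vy) / dt)
}
```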
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, gestures are optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at an icon location). As another example, detecting a finger swipe gesture on a touch-sensitive surface includes detecting a finger press event, then detecting one or more finger drag events, and then detecting a finger lift (lift off) event. Similarly, taps, swipes, drags, and other gestures of the stylus are optionally detected by detecting a particular contact pattern of the stylus.
In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger press event and the finger lift event, but is independent of the intensity of the finger contact between detecting the finger press event and the finger lift event. In some embodiments, a tap gesture is detected in accordance with a determination that the length of time between the finger press event and the finger lift event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), regardless of whether the intensity of the finger contact during the tap reaches a given intensity threshold (greater than a nominal contact detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require the characteristic intensity of the contact to satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact detection intensity threshold, below which no contact is detected, in order for the finger press event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. In cases where the device is capable of detecting a finger or stylus contact hovering over a touch-sensitive surface, the nominal contact detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
The same concepts apply in a similar manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a spread gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of the intensities of the contacts included in the gesture or do not require that the one or more contacts performing the gesture reach an intensity threshold in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of the one or more contacts; a pinch gesture is detected based on movement of two or more contacts toward each other; a spread gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of the contact on the touch-sensitive surface with less than a threshold amount of movement. Thus, the statement that particular gesture recognition criteria do not require that the contact intensity meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria can be met when a contact in the gesture does not meet the respective intensity threshold, and can also be met if one or more contacts in the gesture do meet or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on determining that a finger press event and a finger lift event are detected within a predefined time period, regardless of whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on determining that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where the detection of a gesture is influenced by the intensity of the contact performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or delays the detection of a tap input when the intensity of the contact is higher), the detection of these gestures does not require the contact to reach a particular intensity threshold as long as the criteria for recognizing the gesture can be met in circumstances where the contact does not reach that intensity threshold (e.g., even if the amount of time required to recognize the gesture changes).
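A simplified classifier along these intensity-independent lines might look like the following Swift sketch; the 0.3-second tap window falls within the range quoted above, while the movement and long-press thresholds are assumptions made for illustration.

```swift
import Foundation

enum RecognizedGesture { case tap, swipe, longPress, none }

// Intensity-independent criteria: only duration and total movement are considered.
func classify(duration: TimeInterval, totalMovement: Double) -> RecognizedGesture {
    let tapMaxDuration: TimeInterval = 0.3       // within the 0.1-0.5 s range described above
    let movementThreshold: Double = 10.0         // assumed amount of movement that distinguishes a swipe
    let longPressMinDuration: TimeInterval = 0.5 // assumed minimum hold time for a long press

    if totalMovement >= movementThreshold {
        return .swipe                            // swipe: enough movement, regardless of contact intensity
    }
    if duration < tapMaxDuration {
        return .tap                              // tap: press and lift within the time window
    }
    if duration >= longPressMinDuration {
        return .longPress                        // long press: little movement, held long enough
    }
    return .none
}
```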
In some cases, contact intensity thresholds, duration thresholds, and movement thresholds are combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element can provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria does not require the intensity of one or more contacts to meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the simultaneous evaluation of other intensity-dependent gesture recognition criteria to identify other gestures whose criteria are met when the gesture includes a contact with an intensity above the respective intensity threshold. For example, in some cases, first gesture recognition criteria for a first gesture (which do not require the intensity of a contact to meet a respective intensity threshold in order for the first gesture recognition criteria to be met) are in competition with second gesture recognition criteria for a second gesture (which depend on the contact reaching the respective intensity threshold). In such competitions, the gesture is, optionally, not recognized as meeting the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are met first. For example, if a contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such cases, the first gesture recognition criteria for the first gesture still do not require the intensity of the contact to meet the respective intensity threshold in order for the first gesture recognition criteria to be met, because if the contact remains below the respective intensity threshold until the gesture ends (e.g., a swipe gesture with a contact that does not increase to an intensity above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. As such, particular gesture recognition criteria that do not require the intensity of the contact to meet a respective intensity threshold in order for the particular gesture recognition criteria to be met will (A) in some cases ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some cases still be dependent on the intensity of the contact with respect to the intensity threshold, in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognize the input as corresponding to the intensity-dependent gesture before the particular gesture recognition criteria recognize the gesture (e.g., for a long press gesture that competes with a deep press gesture for recognition).
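The deep-press-versus-swipe race described above can be sketched as a check of which threshold a contact crosses first; in this Swift sketch the threshold values and sample representation are assumptions for illustration.

```swift
// Whichever criterion is satisfied first wins the competition between the two recognizers.
enum CompetingGesture { case deepPress, swipe, undecided }

func firstRecognized(samples: [(intensity: Double, movement: Double)],
                     intensityThreshold: Double = 0.8,
                     movementThreshold: Double = 10.0) -> CompetingGesture {
    for sample in samples {
        if sample.intensity >= intensityThreshold { return .deepPress } // intensity threshold reached before enough movement
        if sample.movement >= movementThreshold { return .swipe }       // movement threshold reached before the intensity threshold
    }
    return .undecided
}
```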
In conjunction with accelerometer 167, gyroscope 168, and/or magnetometer 169, pose module 131 optionally detects pose information about the device, such as the pose (e.g., roll, pitch, yaw, and/or position) of the device in a particular frame of reference. Pose module 131 includes software components for performing various operations related to detecting the position of the device and detecting changes in the pose of the device.
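For illustration only, device pose of this kind can be read on an iOS-like platform from the fused accelerometer/gyroscope/magnetometer data via Core Motion; the sketch below is one plausible way to obtain such pose values and is not a description of pose module 131 itself.

```swift
import CoreMotion

// A minimal sketch of reading device pose (roll, pitch, yaw) from fused
// motion-sensor data, assuming Core Motion is available on the device.
let motionManager = CMMotionManager()

func startPoseUpdates() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        // Roll, pitch, and yaw are reported in radians relative to a reference frame.
        print("roll: \(attitude.roll), pitch: \(attitude.pitch), yaw: \(attitude.yaw)")
    }
}
```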
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including means for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attribute) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for designating graphics to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
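A minimal sketch of the code-to-graphic lookup described above is given below; the types and method names are hypothetical stand-ins, since the disclosure does not specify the module's interfaces.

```swift
import CoreGraphics

// Hypothetical sketch: map assigned graphic codes to stored graphics and pair
// them with per-frame coordinate/attribute data before rendering.
struct GraphicAttributes {
    var origin: CGPoint
    var opacity: CGFloat
    var brightness: CGFloat
}

final class GraphicsStore {
    private var graphicsByCode: [Int: CGImage] = [:]

    func register(_ image: CGImage, forCode code: Int) {
        graphicsByCode[code] = image
    }

    /// Resolves codes received from an application into drawable graphics
    /// paired with their coordinate and attribute data.
    func resolve(codes: [Int], attributes: [Int: GraphicAttributes]) -> [(CGImage, GraphicAttributes)] {
        codes.compactMap { code in
            guard let image = graphicsByCode[code], let attrs = attributes[code] else { return nil }
            return (image, attrs)
        }
    }
}
```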
Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) to generate haptic output at one or more locations on device 100 using one or more haptic output generators 163 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services, such as weather desktop applets, local yellow-page desktop applets, and map/navigation desktop applets).
The virtual/augmented reality module 145 provides virtual and/or augmented reality logic components to the application 136 implementing the augmented reality feature, and in some embodiments the virtual reality feature. The virtual/augmented reality module 145 facilitates the superposition of virtual content, such as virtual user interface objects, over a representation of at least a portion of the field of view of one or more cameras. For example, with the aid of the virtual/augmented reality module 145, a representation of at least a portion of the field of view of one or more cameras may include a respective physical object, and the virtual user interface object may be displayed in the displayed augmented reality environment at a location determined based on the respective physical object in the field of view of the one or more cameras or in a virtual reality environment determined based on a pose of at least a portion of the computer system (e.g., a pose of a display device used to display a user interface to a user of the computer system).
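As a rough illustration of placing a virtual user interface object at a location determined from a physical object in the camera field of view, the following Swift sketch uses hypothetical stand-in types; it does not describe the virtual/augmented reality module's actual interfaces.

```swift
import simd

// Hypothetical stand-in types for a detected physical object and a virtual
// user interface object, each carrying a 4x4 world-space transform.
struct DetectedPhysicalObject {
    var worldTransform: simd_float4x4   // pose of the physical object in world space
}

struct VirtualUIObject {
    var transform: simd_float4x4
}

/// Positions the virtual object a fixed offset above the detected physical object.
func place(_ object: inout VirtualUIObject,
           on physical: DetectedPhysicalObject,
           verticalOffset: Float = 0.05) {
    var transform = physical.worldTransform
    transform.columns.3.y += verticalOffset     // raise the virtual object slightly
    object.transform = transform
}
```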
The application 136 optionally includes the following modules (or sets of instructions) or a subset or superset thereof:
contact module 137 (sometimes referred to as an address book or contact list);
a telephone module 138;
video conferencing module 139;
email client module 140;
an Instant Messaging (IM) module 141;
a fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
browser module 147;
calendar module 148;
a desktop applet module 149, optionally including one or more of: weather desktop applet 149-1, stock desktop applet 149-2, calculator desktop applet 149-3, alarm desktop applet 149-4, dictionary desktop applet 149-5 and other desktop applets obtained by the user, and user created desktop applet 149-6;
a desktop applet creator module 150 for forming a user-created desktop applet 149-6;
search module 151;
a video and music player module 152, optionally consisting of a video player module and a music player module;
the notes module 153;
map module 154; and/or
An online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, and the text input module 134, the contact module 137 includes executable instructions for managing an address book or contact list (e.g., in the application internal state 192 of the contact module 137 stored in the memory 102 or the memory 370), including: adding names to the address book; deleting names from the address book; associating a telephone number, email address, physical address, or other information with a name; associating an image with a name; sorting and classifying names; providing a telephone number and/or email address to initiate and/or facilitate communication through telephone 138, video conference 139, email 140, or IM 141; etc.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, phone module 138 includes executable instructions for: inputting a character sequence corresponding to the telephone numbers, accessing one or more telephone numbers in the address book 137, modifying the inputted telephone numbers, dialing the corresponding telephone numbers, conducting a conversation, and disconnecting or hanging up when the conversation is completed. As described above, wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured by the camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant message module 141 includes executable instructions for: inputting a character sequence corresponding to an instant message, modifying previously entered characters, transmitting the corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for phone-based instant messages, or using XMPP, SIMPLE, Apple Push Notification service (APNs), or IMPS for internet-based instant messages), receiving instant messages, and viewing received instant messages. In some implementations, the transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephone-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and video and music player module 152, workout support module 142 includes executable instructions for creating a workout (e.g., with time, distance, and/or calorie burn targets); communication with fitness sensors (in sports equipment and smart watches); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for exercise; and displaying, storing and transmitting the fitness data.
In conjunction with touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or videos (including video streams) and storing them in the memory 102, modifying features of the still images or videos, and/or deleting the still images or videos from the memory 102.
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, the text input module 134, and the camera module 143, the image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still images and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet (including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, the text input module 134, and the browser module 147, the desktop applet module 149 is a mini-application that is optionally downloaded and used by the user (e.g., weather desktop applet 149-1, stock desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) or created by the user (e.g., user-created desktop applet 149-6). In some embodiments, a desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheets) file, and a JavaScript file. In some embodiments, a desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applets).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, desktop applet creator module 150 includes executable instructions for creating an applet (e.g., turning a user-specified portion of a web page into the applet).
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, and the text input module 134, the search module 151 includes executable instructions for searching text, music, sound, images, video, and/or other files in the memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, the audio circuit 110, the speaker 111, the RF circuit 108, and the browser module 147, the video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats (such as MP3 or AAC files), as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on the touch-sensitive display system 112, or on an external display that is wirelessly connected or connected via the external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player such as an iPod (trademark of Apple inc.).
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions for creating and managing notes, to-do lists, and the like in accordance with user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, the text input module 134, the GPS module 135, and the browser module 147, the map module 154 includes executable instructions for receiving, displaying, modifying, and storing maps and data associated with maps (e.g., driving directions; data of stores and other points of interest at or near a particular location; and other location-based data) according to user instructions.
In conjunction with the touch sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, the audio circuit 110, the speaker 111, the RF circuit 108, the text input module 134, the email client module 140, and the browser module 147, the online video module 155 includes executable instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen 112, or on an external display connected wirelessly or via the external port 124), send emails with links to particular online videos, and otherwise manage online videos in one or more file formats such as h.264. In some embodiments, the instant messaging module 141 is used to send links to particular online videos instead of the email client module 140.
Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in the present disclosure (e.g., computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented in separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which the operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
A predefined set of functions performed solely by the touch screen and/or touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates the device 100 from any user interface displayed on the device 100 to a main menu, home menu, or root menu. In such embodiments, a "menu button" is implemented using the touch-sensitive surface. In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touch-sensitive surface.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 102 (in fig. 1A) or memory 370 (fig. 3A) includes event sorter 170 (e.g., in operating system 126) and corresponding applications 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
The event classifier 170 receives event information and determines the application 136-1, and the application view 191 of application 136-1, to which the event information is to be delivered. The event classifier 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, the application 136-1 includes an application internal state 192 that indicates the current application view(s) displayed on the touch-sensitive display system 112 when the application is active or executing. In some embodiments, the device/global internal state 157 is used by the event classifier 170 to determine which application(s) are currently active, and the application internal state 192 is used by the event classifier 170 to determine the application view 191 to which to deliver event information.
In some implementations, the application internal state 192 includes additional information, such as one or more of the following: restoration information to be used when the application 136-1 resumes execution, user interface state information indicating that the information is being displayed or ready for display by the application 136-1, a state queue for enabling the user to return to a previous state or view of the application 136-1, and a repeat/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on the touch sensitive display system 112 as part of a multi-touch gesture). Peripheral interface 118 transmits information it receives from I/O subsystem 106 or sensors, such as proximity sensor 166, one or more accelerometers 167, and/or microphone 113 (via audio circuitry 110). The information received by the peripheral interface 118 from the I/O subsystem 106 includes information from the touch-sensitive display system 112 or touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, the peripheral interface 118 transmits event information. In other embodiments, the peripheral interface 118 transmits event information only if there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or receiving an input exceeding a predetermined duration).
In some implementations, the event classifier 170 also includes a hit view determination module 172 and/or an active event identifier determination module 173.
When the touch sensitive display system 112 displays more than one view, the hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view is made up of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a level of programming within the application's programming or view hierarchy. For example, the lowest horizontal view in which a touch is detected is optionally referred to as a hit view, and the set of events that are recognized as correct inputs is optionally determined based at least in part on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should process sub-events. In most cases, the hit view is the lowest level view in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that form the event or potential event) occurs. Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
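A simplified sketch of this hit-view search is shown below; to keep it short, all frames are assumed to be expressed in a single window coordinate space, and the View type is a stand-in for the platform view class rather than the module's actual implementation.

```swift
import CoreGraphics

// Return the lowest (deepest) view in the hierarchy whose frame contains the
// point at which the initial sub-event occurred; that view is the hit view.
final class View {
    let frame: CGRect            // expressed in window coordinates, for simplicity
    var subviews: [View] = []
    init(frame: CGRect) { self.frame = frame }
}

func hitView(in root: View, at point: CGPoint) -> View? {
    guard root.frame.contains(point) else { return nil }
    // Search front-to-back so the topmost subview containing the point wins.
    for child in root.subviews.reversed() {
        if let deeper = hitView(in: child, at: point) {
            return deeper        // the deepest containing view is the hit view
        }
    }
    return root                  // no subview contains the point: this view is the hit view
}
```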
The activity event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event identifier determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, the activity event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively engaged views, and thus determines that all actively engaged views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely localized to an area associated with one particular view, the higher view in the hierarchy will remain the actively engaged view.
The event dispatcher module 174 dispatches event information to an event recognizer (e.g., event recognizer 180). In embodiments that include an active event recognizer determination module 173, the event dispatcher module 174 delivers event information to the event recognizers determined by the active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue that is retrieved by the corresponding event receiver module 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, the application 136-1 includes an event classifier 170. In yet another embodiment, the event classifier 170 is a stand-alone module or part of another module stored in the memory 102, such as the contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for processing touch events that occur within a respective view of the user interface of the application. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module that is a higher level object from which methods and other properties are inherited, such as the user interface toolkit or application 136-1. In some implementations, the respective event handlers 190 include one or more of the following: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or invokes data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of application views 191 include one or more corresponding event handlers 190. Additionally, in some implementations, one or more of the data updater 176, the object updater 177, and the GUI updater 178 are included in a respective application view 191.
The corresponding event identifier 180 receives event information (e.g., event data 179) from the event classifier 170 and identifies events from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 further includes at least a subset of metadata 183 and event transfer instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about sub-events such as touches or touch movements. The event information also includes additional information, such as the location of the sub-event, according to the sub-event. When a sub-event relates to movement of a touch, the event information optionally also includes the rate and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another orientation (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about a current pose (e.g., position and orientation) of the device.
The event comparator 184 compares the event information with predefined event or sub-event definitions and determines an event or sub-event or determines or updates the state of the event or sub-event based on the comparison. In some embodiments, event comparator 184 includes event definition 186. Event definition 186 includes definitions of events (e.g., a predefined sequence of sub-events), such as event 1 (187-1), event 2 (187-2), and others. In some implementations, sub-events in event 187 include, for example, touch start, touch end, touch movement, touch cancellation, and multi-touch. In one example, the definition of event 1 (187-1) is a double click on the displayed object. For example, the double click includes a first touch (touch start) for a predetermined period of time on the displayed object, a first lift-off (touch end) for a predetermined period of time, a second touch (touch start) for a predetermined period of time on the displayed object, and a second lift-off (touch end) for a predetermined period of time. In another example, the definition of event 2 (187-2) is a drag on the displayed object. For example, dragging includes touching (or contacting) on the displayed object for a predetermined period of time, movement of the touch on the touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
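The sub-event sequences described above can be illustrated with a minimal recognizer sketch; the sub-event names, the example definition for the double click of event 1 (touch start, lift-off, touch start, lift-off, each within a predetermined period), and the timing value are illustrative assumptions, not the disclosed event definitions 186.

```swift
import Foundation

enum SubEvent { case touchBegan, touchEnded, touchMoved, touchCancelled }

struct EventDefinition {
    let sequence: [SubEvent]
    let maxPhaseDuration: TimeInterval
}

// Example: two touch-begin/lift-off pairs, each phase within 0.3 s of the last.
let doubleClick = EventDefinition(
    sequence: [.touchBegan, .touchEnded, .touchBegan, .touchEnded],
    maxPhaseDuration: 0.3
)

final class SequenceRecognizer {
    private let definition: EventDefinition
    private var matchedCount = 0
    private var lastTimestamp: TimeInterval?

    init(definition: EventDefinition) { self.definition = definition }

    /// Returns true when the full sub-event sequence has been matched.
    func consume(_ subEvent: SubEvent, at timestamp: TimeInterval) -> Bool {
        // Too much time between phases: restart the sequence (event failed).
        if let last = lastTimestamp, timestamp - last > definition.maxPhaseDuration {
            matchedCount = 0
        }
        if subEvent == definition.sequence[matchedCount] {
            matchedCount += 1
            lastTimestamp = timestamp
        } else {
            // Unexpected sub-event: allow it to begin a new sequence if it matches the start.
            matchedCount = (subEvent == definition.sequence[0]) ? 1 : 0
            lastTimestamp = (matchedCount == 1) ? timestamp : nil
        }
        if matchedCount == definition.sequence.count {
            matchedCount = 0
            return true
        }
        return false
    }
}
```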
In some implementations, the event definitions 187 include definitions of events for respective user interface objects. In some implementations, the event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object that triggered the hit test.
In some implementations, the definition of the respective event 187 also includes delay actions that delay delivery of event information until after it has been determined that the sequence of sub-events does or does not correspond to an event type of the event recognizer.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any of the events in the event definition 186, the respective event recognizer 180 enters an event impossible, event failed, or event end state after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively engaged event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact or are able to interact with each other. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to different levels in a view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are identified, the respective event recognizer 180 activates an event handler 190 associated with the event. In some embodiments, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating an event handler 190 is distinct from sending (and deferring the sending of) sub-events to a respective hit view. In some embodiments, the event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some implementations, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about the sub-event without activating the event handler. Instead, the sub-event delivery instructions deliver the event information to an event handler associated with the sub-event sequence or to an actively engaged view. Event handlers associated with the sequence of sub-events or with the actively engaged views receive the event information and perform a predetermined process.
In some embodiments, the data updater 176 creates and updates data used in the application 136-1. For example, the data updater 176 updates telephone numbers used in the contacts module 137 or stores video files used in the video or music player module 152. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, the object updater 177 creates a new user interface object or updates a portion of a user interface object. GUI updater 178 updates the GUI. For example, the GUI updater 178 prepares the display information and sends the display information to the graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, the data updater 176, the object updater 177, and the GUI updater 178 are included in a single module of the respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be appreciated that the above discussion regarding event handling of user touches on a touch-sensitive display also applies to other forms of user input for operating the multifunction device 100 with an input device, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements on a touchpad, such as taps, drags, scrolls, etc.; stylus inputs; inputs based on real-time analysis of video images obtained by one or more cameras; movements of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally used as inputs corresponding to sub-events that define an event to be recognized.
Fig. 2A illustrates a portable multifunction device 100 with a touch screen (e.g., touch-sensitive display system 112, fig. 1A) according to some embodiments. The touch screen optionally displays one or more graphics within a User Interface (UI) 200. In these embodiments, as well as other embodiments described below, a user can select one or more of these graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figures) or one or more styluses 203 (not drawn to scale in the figures). In some embodiments, selection of one or more graphics will occur when a user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up and/or down), and/or scrolling of a finger that has been in contact with the device 100 (right to left, left to right, up and/or down). In some implementations or in some cases, inadvertent contact with the graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that swipes over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home desktop" or menu button 204. As previously described, menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on a touch screen display.
In some embodiments, the device 100 includes a touch screen display, a menu button 204 (sometimes referred to as home screen button 204), a push button 206 for powering the device on/off and for locking the device, volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. Push button 206 is optionally used to turn the device on/off by pressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by pressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In some implementations, the device 100 also accepts voice input through the microphone 113 for activating or deactivating certain functions. The device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch-sensitive display system 112, and/or one or more tactile output generators 163 for generating tactile outputs for a user of the device 100.
Fig. 2B shows a portable multifunction device 100 (e.g., a view of the back of the device 100) optionally including optical sensors 164-1 and 164-2 and a depth sensor 220 (e.g., one or more time-of-flight ("ToF") sensors, a structured light sensor (also referred to as a structured light scanner), etc.). When the optical sensors (e.g., cameras) 164-1 and 164-2 simultaneously capture representations (e.g., images or video) of the physical environment, the portable multifunction device may determine depth information from differences between information simultaneously captured by the optical sensors (e.g., differences between captured images). Depth information provided by the (e.g., image) differences determined using optical sensors 164-1 and 164-2 may lack accuracy, but generally provide high resolution. To improve the accuracy of the depth information provided by the differences between the images, a depth sensor 220 is optionally used in conjunction with the optical sensors 164-1 and 164-2. In some embodiments, depth sensor 220 emits a waveform (e.g., light from a Light Emitting Diode (LED) or laser) and measures the time it takes for the reflection of the waveform (e.g., light) to return to ToF sensor 220. Depth information is determined from the measurement time taken for light to return to the depth sensor 220. Depth sensors typically provide high accuracy (e.g., accuracy of 1cm or better relative to the distance or depth measured), but may lack high resolution (e.g., the resolution of depth sensor 220 is optionally one-fourth of the resolution of optical sensor 164, or less than one-fourth of the resolution of optical sensor 164, or one-sixteenth of the resolution of optical sensor 164, or less than one-sixteenth of the resolution of optical sensor 164). Thus, combining depth information from a depth sensor (e.g., depth sensor 220, such as a ToF sensor) with depth information provided by a (e.g., image) difference determined using an optical sensor (e.g., a camera) provides a depth map that is both accurate and high resolution.
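One way such a combination could work, purely as an illustration, is a per-pixel correction of the high-resolution disparity-based depth map toward the upsampled, more accurate depth-sensor values; the data layout, the nearest-neighbor upsampling, and the blend factor below are assumptions, not the device's actual fusion method.

```swift
// Hypothetical sketch of fusing a low-resolution, high-accuracy ToF depth map
// with a high-resolution, lower-accuracy depth map derived from stereo disparity.
struct DepthMap {
    var width: Int
    var height: Int
    var values: [Float]            // row-major depth values in meters

    func value(atX x: Int, y: Int) -> Float { values[y * width + x] }
}

/// Upsample the ToF map with nearest-neighbor sampling, then pull the
/// disparity-based depths toward the more accurate ToF measurements.
func fuse(disparityDepth: DepthMap, tofDepth: DepthMap, blend: Float = 0.5) -> DepthMap {
    var fused = disparityDepth
    for y in 0..<disparityDepth.height {
        for x in 0..<disparityDepth.width {
            let tx = min(tofDepth.width - 1, x * tofDepth.width / disparityDepth.width)
            let ty = min(tofDepth.height - 1, y * tofDepth.height / disparityDepth.height)
            let accurate = tofDepth.value(atX: tx, y: ty)
            let detailed = disparityDepth.value(atX: x, y: y)
            fused.values[y * disparityDepth.width + x] =
                detailed + blend * (accurate - detailed)   // correct toward the ToF measurement
        }
    }
    return fused
}
```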
Fig. 3A is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). The device 300 generally includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication bus 320 optionally includes circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 with a display 340, which may optionally be a touch screen display. The I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touch pad 355, a tactile output generator 357 (e.g., similar to the one or more tactile output generators 163 described above with reference to fig. 1A) for generating tactile outputs on the device 300, sensors 359 (e.g., optical sensors, depth sensors, acceleration sensors, proximity sensors, touch-sensitive sensors, and/or contact intensity sensors similar to the one or more contact intensity sensors 165 described above with reference to fig. 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (fig. 1A) optionally does not store these modules.
Each of the above identified elements of fig. 3A are optionally stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing the above described functions. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Fig. 3B-3C are block diagrams of exemplary computer systems 301, according to some embodiments.
In some embodiments, computer system 301 includes and/or communicates with the following components:
input devices (302 and/or 307, e.g., a touch-sensitive surface such as a touch-sensitive remote control, or a touch screen display that also serves as a display generating component, a mouse, a joystick, a stylus controller, and/or a camera that tracks the position of one or more features of a user such as the user's hand);
Virtual/augmented reality logic 303 (e.g., virtual/augmented reality module 145);
display generation means (304 and/or 308, e.g., a display, projector, head-mounted display, heads-up display, etc.) for displaying virtual user interface elements to a user;
a camera (e.g., 305 and/or 311) for capturing an image of the field of view of the device, e.g., for determining placement of virtual user interface elements, determining a pose of the device, and/or displaying an image of a portion of the physical environment in which the camera is located; all references to images captured by one or more cameras of a computer system (e.g., 301-a, 301-b, or 301-c) should be understood to optionally include one or more depth sensors (e.g., one or more time-of-flight sensors, structured light sensors (also referred to as structured light scanners), etc.) to facilitate measurement of objects in the field of view of the one or more cameras; and
a pose sensor (e.g., 306 and/or 311) for determining a pose of the device relative to the physical environment and/or changes in the pose of the device.
In some computer systems (e.g., 301-a in fig. 3B), the input device 302, the virtual/augmented reality logic component 303, the display generation component 304, the camera 305, and the pose sensor 306 are all integrated into the computer system (e.g., the portable multifunction device 100 in fig. 1A-1B or the device 300 in fig. 3A, such as a smartphone or tablet).
In some computer systems (e.g., 301-b), in addition to the integrated input device 302, virtual/augmented reality logic component 303, display generation component 304, camera 305, and pose sensor 306, the computer system communicates with additional devices that are separate from the computer system, such as a separate input device 307 (e.g., a touch-sensitive surface, a stylus, a remote control, etc.) and/or a separate display generation component 308 (e.g., a virtual reality headset or augmented reality glasses that overlay virtual objects on the physical environment).
In some computer systems (e.g., 301-c in fig. 3C), the input device 307, the display generation component 309, the camera 311, and/or the pose sensor 312 are separate from the computer system and in communication with the computer system. In some embodiments, other combinations of components in computer system 301 and in communication with the computer system are used. For example, in some embodiments, the display generation component 309, the camera 311, and the pose sensor 312 are incorporated in a headset that is integrated with or in communication with the computer system.
In some embodiments, all of the operations described below with reference to fig. 5A-5P, 6A-6N, 7A-7T, and 8A-8F are performed on a single computing device (e.g., computer system 301-a described below with reference to fig. 3B) having virtual/augmented reality logic 303. However, it should be appreciated that a plurality of different computing devices are often linked together to perform the operations described below with reference to fig. 5A-5P, 6A-6N, 7A-7T, and 8A-8F (e.g., a computing device with virtual/augmented reality logic 303 is in communication with a separate computing device with display 450 and/or a separate computing device with touch-sensitive surface 451). In any of these embodiments, the computing devices described below with reference to fig. 5A-5P, 6A-6N, 7A-7T, and 8A-8F are one or more computing devices that include virtual/augmented reality logic component 303. Additionally, it should be appreciated that in various embodiments, virtual/augmented reality logic 303 may be divided among a plurality of different modules or computing devices; for purposes of this description, however, virtual/augmented reality logic component 303 will be referred to primarily as residing in a single computing device to avoid unnecessarily obscuring other aspects of the embodiments.
In some embodiments, virtual/augmented reality logic component 303 includes one or more modules (e.g., one or more event handlers 190, including one or more object updaters 177 and one or more GUI updaters 178, as described in greater detail above with reference to fig. 1B) that receive interpreted inputs and, in response to these interpreted inputs, generate instructions for updating a graphical user interface in accordance with the interpreted inputs, which instructions are then used to update the graphical user interface on a display. In some embodiments, interpreted inputs for inputs that have been detected (e.g., by the contact/motion module 130 in figs. 1A and 3A), recognized (e.g., by the event recognizer 180 in fig. 1B), and/or distributed (e.g., by the event classifier 170 in fig. 1B) are used to update the graphical user interface on a display. In some implementations, the interpreted inputs are generated by modules on the computing device (e.g., the computing device receives raw contact input data so as to recognize gestures from the raw contact input data). In some implementations, some or all of the interpreted inputs are received by the computing device as interpreted inputs (e.g., a computing device that includes touch-sensitive surface 451 processes the raw contact input data so as to recognize gestures from the raw contact input data, and sends information indicative of the gestures to the computing device that includes virtual/augmented reality logic component 303).
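The flow from interpreted inputs to graphical-user-interface update instructions can be sketched as follows; the input and instruction types below are hypothetical stand-ins for whatever representations the virtual/augmented reality logic component actually uses.

```swift
import CoreGraphics

// Hypothetical interpreted inputs (already-recognized gestures) and the
// GUI-update instructions derived from them.
enum InterpretedInput {
    case tap(at: CGPoint)
    case drag(from: CGPoint, to: CGPoint)
}

enum UIUpdateInstruction {
    case selectVirtualObject(at: CGPoint)
    case moveVirtualObject(by: CGVector)
}

/// Maps an interpreted input to the instructions used to update the displayed
/// graphical user interface.
func instructions(for input: InterpretedInput) -> [UIUpdateInstruction] {
    switch input {
    case .tap(let point):
        return [.selectVirtualObject(at: point)]
    case .drag(let start, let end):
        return [.moveVirtualObject(by: CGVector(dx: end.x - start.x, dy: end.y - start.y))]
    }
}
```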
In some implementations, both the display and the touch-sensitive surface are integrated with a computer system (e.g., 301-a in fig. 3B) that includes virtual/augmented reality logic component 303. For example, the computer system may be a desktop or laptop computer with an integrated display (e.g., 340 in FIG. 3) and a touch pad (e.g., 355 in FIG. 3). As another example, the computing device may be a portable multifunction device 100 (e.g., smart phone, PDA, tablet, etc.) having a touch screen (e.g., 112 in fig. 2A).
In some implementations, the touch-sensitive surface is integrated with a computer system, while the display is not integrated with a computer system that includes the virtual/augmented reality logic component 303. For example, the computer system may be a device 300 (e.g., desktop or laptop computer, etc.) having an integrated touch pad (e.g., 355 of fig. 3) that is connected (via a wired or wireless connection) to a separate display (e.g., computer monitor, television, etc.). As another example, the computer system may be a portable multifunction device 100 (e.g., smart phone, PDA, tablet, etc.) having a touch screen (e.g., 112 in fig. 2A) that is connected (via a wired or wireless connection) to a separate display (e.g., computer monitor, television, etc.).
In some implementations, the display is integrated with the computer system, while the touch-sensitive surface is not integrated with the computer system that includes virtual/augmented reality logic component 303. For example, the computer system may be a device 300 (e.g., a desktop computer, a laptop computer, or a television with an integrated set-top box) having an integrated display (e.g., 340 in fig. 3A) that is connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, a portable multifunction device, etc.). As another example, the computer system may be a portable multifunction device 100 (e.g., smart phone, PDA, tablet, etc.) having a touch screen (e.g., 112 in fig. 2A) that is connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, or another portable multifunction device with a touch screen that serves as a remote touchpad, etc.).
In some implementations, neither the display nor the touch-sensitive surface are integrated with a computer system (e.g., 301-C in FIG. 3C) that includes virtual/augmented reality logic component 303. For example, the computer system may be a stand-alone computing device 300 (e.g., a set-top box, game console, etc.) connected (via a wired or wireless connection) to a stand-alone touch-sensitive surface (e.g., a remote touch pad, portable multifunction device, etc.) and a stand-alone display (e.g., a computer monitor, television, etc.).
In some embodiments, the computer system has an integrated audio system (e.g., audio circuit 110 and speaker 111 in portable multifunction device 100). In some implementations, the computing device communicates with an audio system that is independent of the computing device. In some implementations, an audio system (e.g., an audio system integrated in a television unit) is integrated with a separate display. In some embodiments, the audio system (e.g., a stereo system) is a stand-alone system separate from the computer system and the display.
Attention is now directed to embodiments of a user interface ("UI") optionally implemented on the portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
signal strength indicators for wireless communications such as cellular signals and Wi-Fi signals;
time;
bluetooth indicator;
battery status indicator;
tray 408 with common application icons, such as:
Icon 416 of phone module 138 marked "phone", optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of email client module 140 marked "mail" optionally including an indicator 410 of the number of unread emails;
icon 420 of browser module 147 marked "browser"; and
icon 422 labeled "music" for video and music player module 152; and
icons of other applications, such as:
icon 424 marked "message" for IM module 141;
icon 426 of calendar module 148 marked "calendar";
icon 428 marked "photo" of image management module 144;
icon 430 marked "camera" for camera module 143;
icon 432 of online video module 155 marked "online video";
icon 434 labeled "stock market" for stock market desktop applet 149-2;
icon 436 marked "map" of map module 154;
icon 438 marked "weather" for weather desktop applet 149-1;
icon 440 marked "clock" for alarm desktop applet 149-4;
Icon 442 labeled "fitness support" for fitness support module 142;
the icon 444 marked "memo" of the memo module 153;
an icon 446 labeled "set" for a set application or module that provides access to settings for the device 100 and its various applications 136;
the icon 448 marked "gauge" of the measurement desktop applet; and
icon 449 marked "electronic store" of the store module.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 in fig. 3A) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 in fig. 3A) separate from display 450. While many examples will be given later with reference to inputs on touch screen display 112 (where the touch sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch sensitive surface separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to the primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). According to these implementations, the device detects contact with the touch-sensitive surface 451 at locations corresponding to respective locations on the display (e.g., 460 and 462 in fig. 4B) (e.g., 460 corresponds to 468 and 462 corresponds to 470 in fig. 4B). Thus, user inputs (e.g., contacts 460 and 462 and movement thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separate from the display of the multifunction device. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
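A minimal sketch of this correspondence is a linear mapping along each primary axis from touch-surface coordinates to display coordinates; the sizes and the purely linear mapping below are illustrative assumptions, not the device's actual mapping.

```swift
import CoreGraphics

/// Maps a contact location on a separate touch-sensitive surface (e.g., 451)
/// to the corresponding location on the display (e.g., 450) by scaling along
/// each primary axis.
func displayLocation(forTouchAt touchPoint: CGPoint,
                     touchSurfaceSize: CGSize,
                     displaySize: CGSize) -> CGPoint {
    CGPoint(x: touchPoint.x / touchSurfaceSize.width * displaySize.width,
            y: touchPoint.y / touchSurfaceSize.height * displaySize.height)
}

// Example: a contact at (100, 50) on a 400x300 touchpad maps to (480, 180)
// on a 1920x1080 display.
let mapped = displayLocation(forTouchAt: CGPoint(x: 100, y: 50),
                             touchSurfaceSize: CGSize(width: 400, height: 300),
                             displaySize: CGSize(width: 1920, height: 1080))
```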
Additionally, while the following examples are primarily presented with reference to finger inputs (e.g., finger contacts, single-finger flick gestures, finger swipe gestures, etc.), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a flick gesture is optionally replaced by a mouse click (e.g., instead of detection of contact, followed by ceasing to detect contact) when the cursor is over the position of the flick gesture. Similarly, when multiple user inputs are detected simultaneously, it should be appreciated that multiple computer mice are optionally used simultaneously, or that the mice and finger contacts are optionally used simultaneously.
User interface and associated process
Attention is now directed toward embodiments of user interfaces ("UI") and associated processes that may be implemented on a computer system (e.g., portable multifunction device 100, device 300, or computer system 301) that includes (and/or is in communication with) a display generation component (e.g., a display, projector, head-mounted display, heads-up display, etc.), one or more cameras (e.g., video cameras that continuously provide a live preview of at least a portion of the content within the field of view of the cameras and optionally generate video output including one or more streams of image frames capturing content within the field of view of the cameras), and one or more input devices (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch screen display that also serves as the display generation component, a mouse, a joystick, a stylus controller, and/or cameras that track the position of one or more features of the user, such as the user's hands), optionally one or more pose sensors, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, and optionally one or more tactile output generators.
Fig. 5A-5P illustrate exemplary user interfaces for initiating a process for measuring a body part of a user, according to some embodiments. Fig. 6A-6N illustrate exemplary user interfaces for obtaining measurements of a body part of a user, according to some embodiments. Fig. 7A-7T illustrate exemplary user interfaces for obtaining measurements (e.g., by prompting a user to position and move a body part of the user, and scanning the body part of the user), according to some embodiments. Fig. 8A-8F illustrate exemplary user interfaces for storing measurement information for a body part of a user, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in figs. 9A-9C, 10A-10D, 11A-11C, and 12A-12D. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having a touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., the centroid of a respective contact or a point associated with a respective contact), or the centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting contacts on the touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on the display 450 along with a focus selector.
Fig. 5A-5P illustrate an exemplary user interface for measuring a body part of a user, according to some embodiments.
Fig. 5A illustrates a user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the application provided on the user interface in fig. 5A includes an application as described with reference to fig. 4A. The device detects the user input 502 on an application icon 448 labeled "gauge". In some embodiments, a "gauge" application is launched on device 100 in response to user input 502 detected on a "gauge" icon. In some embodiments, a user interface is provided within the launched "gauge" application to initiate measurements of a body part of the user within the "gauge" application. For example, after launching the "gauge" application, the device optionally presents the user with the user interface shown in FIG. 5D.
FIG. 5B illustrates user input 504 detected at a location within the user interface corresponding to the application menu of application icon 449 labeled "electronic store". In some implementations, in response to the device detecting the user input 504, the device launches an application (e.g., "electronic store").
FIG. 5C illustrates an exemplary user interface displayed by the device in an application launched in response to user input 504 (e.g., and/or user input 502). For example, fig. 5C illustrates an exemplary user interface for purchasing accessories (e.g., a watch and/or a wristband). In some embodiments, the user interface for purchasing the accessory includes an option to measure a size of the accessory (e.g., a size of a body part). For example, fig. 5C shows a user interface for purchasing a watch, and a button 506 (e.g., a user selectable affordance or button) is provided to measure a user's wristband (e.g., a watch band) size. In some implementations, the user selects button 506 via user input 508 (e.g., tap input).
Fig. 5D illustrates an exemplary user interface 510 displayed by the device in response to detecting a selection of button 506. It will be appreciated that in some embodiments, fig. 5D is launched from a different application (e.g., and/or after a different user interface is displayed) than the examples shown in fig. 5B-5C. For example, in some embodiments, FIG. 5D is initiated from the "gauge" application selected in FIG. 5A.
Fig. 5D shows a user interface 510 for prompting the user to select a "left" button 514 or a "right" button 516 to indicate which wrist (e.g., left or right) is to be measured. In some embodiments, the user interface includes instructions, such as text instructions 512 ("Select the wrist on which you wear your watch"). In some embodiments, the user interface displays a representation of the body part 518 to be measured. For example, to measure a wristband, the device displays representations (e.g., images) of left and right forearms, wrists, and/or hands. It will be appreciated that the instructions and/or the exemplary displayed representations of the body parts vary depending on the accessory selected (e.g., and the corresponding body part of the user to be measured).
Fig. 5E shows user input 522 selecting the "left" button, thereby indicating that the user is wearing a watch on the user's left wrist (e.g., and/or indicating that the user's left wrist is the body part to be measured).
In some embodiments, the background displayed by user interface 520 is different from the background displayed in user interface 510. For example, as representations of the "left" and "right" buttons and/or body parts continue to be displayed, the color of the background changes (e.g., as shown by the change in the fill pattern shown in fig. 5D-5F) (e.g., the background color changes even when other elements of the user interface remain the same). In some embodiments, the color of the background changes throughout the display of the various user interfaces that provide instructions for initiating measurements of the body part. For example, fig. 5F shows additional (e.g., different) instructions displayed by the device (e.g., "Place device on a flat surface") in addition to the instructions displayed in fig. 5D and 5E (e.g., "Select the wrist on which you wear your watch"); however, each of the user interfaces shown in fig. 5D-5F corresponds to a guiding user interface for initiating a measurement of a body part. In some embodiments, the background color changes according to an amount of time (e.g., the color changes every 1 second, every 2 seconds, every 5 seconds, every 1 minute, every 5 minutes, etc.). In some implementations, a predefined set of colors is repeated (e.g., cycled) such that the same color is redisplayed after the remaining colors have been displayed.
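One non-limiting way to implement the cycling background color described above is to derive the current color from the elapsed display time and a fixed palette. The following Swift sketch is illustrative only; the palette, the 2-second interval, and the type names are assumptions rather than details of the embodiments.

```swift
import Foundation

/// Illustrative background-color cycler: steps through a fixed palette on a
/// fixed interval and wraps around once every color has been shown.
struct BackgroundColorCycler {
    let palette: [String]          // placeholder color identifiers (assumed)
    let interval: TimeInterval     // e.g. change every 2 seconds (assumed)

    /// Returns the palette entry to show after `elapsed` seconds of display time.
    func color(afterElapsed elapsed: TimeInterval) -> String {
        let step = Int(elapsed / interval)
        return palette[step % palette.count]   // repeat (cycle) the palette
    }
}

// Example: with a 2-second interval, the background returns to "teal" at t = 8 s.
let cycler = BackgroundColorCycler(palette: ["teal", "orange", "purple", "blue"], interval: 2)
_ = cycler.color(afterElapsed: 8)   // "teal"
```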
In some embodiments, the device displays the user interface 524 shown in fig. 5F in response to the device detecting user input 522 selecting either the "left" or "right" wrist. Fig. 5F illustrates a user interface that provides instructions to the user on how to measure a body part of the user with the device (e.g., text instructions 526 and/or instructions in the form of animations 528). For example, FIG. 5F includes text instructions 526 instructing the user to "1. Place device on a flat surface" and "2. Place your left hand over device and rotate your hand." Fig. 5F also optionally shows an animation 528 that shows a representation of the forearm/hand placed over the device 100 (and/or an animation of the forearm/hand rotating).
As described with reference to fig. 1A, in some embodiments, the device 100 includes one or more optical sensors 164 (e.g., as part of one or more cameras). In some embodiments, one or more optical sensors 164 are used to detect whether the user has placed the user's hand over the device.
Fig. 5G-5J illustrate a physical environment 531 and an exemplary user interface displayed by the device when the user moves the user's hand 532 to different locations within the physical environment (e.g., relative to the device 100). In some embodiments, the device 100 displays visual cues (e.g., using fades, translucency, colors, etc.) to indicate to the user the proper position of the user's hand relative to the device 100. For example, the proper position of the user's hand is defined by the distance between the device 100 and the user's hand 532 (e.g., or by a predefined distance range) and the orientation of the user's hand relative to the device 100. For example, the orientation is determined by a predefined angle (e.g., or range of angles) formed between the user's hand and the device. In one example, the proper orientation of the user's hand corresponds to the user's palm being substantially parallel to the device. In some embodiments, the proper position of the user's hand is further defined by the user's hand being centered (e.g., laterally) within the field of view of the one or more cameras.
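The "proper position" check described above could, for example, combine a distance range, an orientation (palm-parallel) tolerance, and a lateral-centering tolerance. The Swift sketch below is a hypothetical illustration; the specific thresholds, the use of unit normal vectors, and the type names are assumptions rather than details of the embodiments.

```swift
import Foundation
import simd

/// Illustrative "proper position" test for a detected hand relative to the device.
struct HandPlacementCriteria {
    var distanceRange: ClosedRange<Float> = 0.15...0.35   // meters above the device (assumed)
    var maxTiltRadians: Float = .pi / 12                  // palm within ~15 degrees of parallel (assumed)
    var maxLateralOffset: Float = 0.05                    // meters from the cameras' optical axis (assumed)

    func isInProperPosition(distance: Float,
                            palmNormal: simd_float3,      // unit normal of the palm plane
                            deviceNormal: simd_float3,    // unit normal of the device screen
                            lateralOffset: Float) -> Bool {
        // Angle between the palm plane and the device plane; 0 when the palm is parallel.
        let tilt = acos(abs(simd_dot(palmNormal, deviceNormal)))
        return distanceRange.contains(distance)
            && tilt <= maxTiltRadians
            && abs(lateralOffset) <= maxLateralOffset
    }
}
```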
Fig. 5G shows a physical environment 531 comprising the device 100 lying on a table, a user's hand 532 hovering over the device 100, and a ceiling fan 530. In some embodiments, the user's hand 532 and ceiling fan 530 are in the field of view of one or more cameras of the device 100 (e.g., detected by one or more optical sensors 164). In some implementations, the device 100 displays a representation of the field of view of one or more cameras of the device. For example, in fig. 5G, the user interface displayed on the device 100 is shown on the left side. The user interface includes a representation of a user's hand 536 and a representation of ceiling fan 534 within the field of view of one or more cameras of device 100.
Fig. 5H shows the user moving the user's hand 532 closer to the device 100 than the placement of the hand in fig. 5G. In some implementations, the device updates the representation of the user's hand 536 displayed on the user interface of the device 100 according to the movement of the user's hand 532. For example, as the user moves the hand 532 closer to the device 100, the representation of the user's hand 536 is updated to a larger representation of the user's hand 540 in fig. 5H (e.g., because the camera feed of the user's hand will appear larger when the user's hand is closer to the camera). In some implementations, as the user hand 532 moves relative to the device (e.g., relative to one or more cameras), a representation of the user hand is updated according to the movement of the user hand (e.g., in a physical environment). In some implementations, the device 100 begins to fade out (e.g., mask) the background (e.g., objects in the physical environment that are within the field of view of one or more cameras) according to the user's hand moving closer to the proper position relative to the device 100 (e.g., where the proper position is determined by the distance and/or orientation of the user's hand to be used for measurement). For example, fig. 5H shows the representation of the ceiling fan 538 faded out (e.g., as indicated by the shaded pattern of the representation) as a function of the user's hand 532 moving closer to the proper position (e.g., as the user adjusts the user's hand position). For example, once the user's hand moves into position, the background object is no longer displayed (e.g., fades out completely), as described with reference to fig. 5J.
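The gradual fade-out of background objects could be driven by how far the hand currently is from the proper distance. The following sketch illustrates one such distance-only mapping; the target distance, the fade band, and the linear curve are assumptions for illustration.

```swift
import Foundation

/// Illustrative fade curve: environment (background) content is fully visible while the
/// hand is far from the target distance and fades out completely as the hand reaches it.
func backgroundOpacity(currentDistance: Double,
                       targetDistance: Double = 0.25,   // meters (assumed)
                       fadeBand: Double = 0.15) -> Double {
    let deviation = abs(currentDistance - targetDistance)
    // 0 when the hand is at the proper distance, 1 when it is fadeBand away or more.
    return min(max(deviation / fadeBand, 0), 1)
}

// As the hand moves from 0.40 m to 0.25 m above the device, the background opacity
// falls from 1.0 to 0.0, so objects such as the ceiling fan fade out of the display.
```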
In some embodiments, when the user moves the user's hand to a different position relative to the device 100, the representation of the user's hand 544 is displayed as semi-transparent (e.g., and/or faded), as shown in fig. 5I. In some embodiments, the representation of the user's hand transitions from translucent to non-translucent in accordance with a determination that the user's hand is in place.
Fig. 5J shows an exemplary user interface displayed when the user's hand 532 is in the proper position relative to the device 100. In some embodiments, in accordance with the device determining that the user's hand is in the proper position, the background (e.g., any physical objects included in the physical environment in the field of view of the one or more cameras) is removed and replaced with an artificial or virtual background (sometimes referred to herein as displayed background 546). In some embodiments, the displayed background 546 includes a color (e.g., selected from among the colors displayed on the user interfaces described with reference to fig. 5D-5F). Thus, in accordance with the device determining that the user's hand is in the proper position, the device 100 displays a user interface that includes a representation of the user's hand 548 without representations of other objects (e.g., ceiling fan 530) in the field of view of the one or more cameras.
Fig. 5K-5N illustrate a physical environment 531 and an exemplary user interface for displaying error conditions to a user. As explained above, the body part of the user has to be placed in a proper position with respect to the device before the portion of the user's body part is measured. In some embodiments, the representation of the user's body part (e.g., hand) is displayed by the device 100 when the user's body part is within the field of view of the one or more cameras. In some embodiments, when the body part of the user is within the field of view of the one or more cameras but not in the proper position relative to the device, the representation of the body part of the user is modified by the device so that the user can determine that the body part is not yet in the proper position. For example, in fig. 5K-5N, the representation of the user's body part (e.g., representation 552) is displayed as a faded (e.g., at least partially translucent) representation of the user's body part (e.g., as shown by the shaded pattern in fig. 5K-5N).
Fig. 5K shows the user's hand 532 positioned farther from device 100 than the proper position (e.g., the proper position shown in fig. 5J). In some embodiments, the device 100 displays an indication that the user's hand is not in the proper position. For example, the device 100 displays a text indication 550 ("hands too far") indicating that the user's hand is too far away from the device and needs to be moved closer in order to reach the proper position.
Fig. 5L shows the user's hand 532 positioned closer to the device 100 than the placement of the user's hand 532 in fig. 5K, but, in this example, closer to the device than the (predefined) proper position allows. Because the user's hand 532 is positioned closer to the device than the proper position, the representation 556 of the user's hand is displayed as faded by the device, and the device 100 displays a text indication 554 ("hand too close") to indicate that the user's hand is closer to the device than it should be to be in the proper position. It should also be noted that, because the user's hand 532 has moved closer to the device 100, the representation 556 of the user's hand is displayed by the device as being larger than the representation 552 of the user's hand displayed when the user's hand 532 was farther from the device (e.g., as shown in fig. 5K).
Fig. 5M shows an error condition in which the posture of the device 100 is incorrect. For example, the device 100 is tilted on a table. In some embodiments, the orientation of the device 100 is determined by one or more accelerometers 167, gyroscopes 168, and/or magnetometers 169 (e.g., as part of an Inertial Measurement Unit (IMU) of the device 100) for obtaining information regarding the pose (e.g., position and orientation) of the device. For example, in some embodiments, when the device is "flat," the device is considered to have an appropriate pose. The device determines that it is "flat" based on a determination that its pose is substantially parallel to the ground (e.g., or to another surface, such as a table). In some implementations, a predefined angle (or range of angles) of the device is used to determine whether the device is in an appropriate pose for initiating the measurement.
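A "device is flat" test of the kind described above could compare the gravity vector reported by the IMU with the normal of the device's display. The sketch below is illustrative; the 10-degree tolerance and the coordinate convention are assumptions.

```swift
import Foundation

/// Illustrative "device is flat" test. `gravityX/Y/Z` is the gravity vector expressed in the
/// device's coordinate frame (for example, as estimated from IMU device-motion data), where
/// (0, 0, -1) corresponds to the device lying face-up on a level surface.
func deviceIsFlat(gravityX: Double, gravityY: Double, gravityZ: Double,
                  toleranceDegrees: Double = 10) -> Bool {   // tolerance is an assumed value
    let magnitude = (gravityX * gravityX + gravityY * gravityY + gravityZ * gravityZ).squareRoot()
    guard magnitude > 0 else { return false }
    // Angle between gravity and the screen normal (device z-axis); 0 when perfectly flat.
    let tiltRadians = acos(min(max(abs(gravityZ) / magnitude, -1), 1))
    return tiltRadians <= toleranceDegrees * .pi / 180
}

// A device tilted on a table (as in fig. 5M) would fail this check and trigger the error state.
```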
Fig. 5N illustrates an error condition in which the user's hand 532 also includes an accessory 562 (e.g., a bracelet, watch, etc.) in the physical environment. In some embodiments, in accordance with a determination that the user's hand 532 includes an accessory (e.g., an accessory in the field of view of the one or more cameras; or, in another example, an accessory that is determined by the device to be at a position or location that may interfere with determining an accurate measurement of the body part to be measured), the device displays an error, such as a text indication 564 ("remove jewelry"), instructing the user to remove the accessory before the device will initiate measurement of the user's wrist.
Fig. 5O shows a faded representation of the user's hand 568 without displaying a text indication describing the error. For example, the device 100 displays a faded representation of the user's hand to indicate that the user's hand is not in the proper position for initiating the measurement (e.g., without an additional indication of what the user must do in order to reach the proper position). Thus, the text indications that are displayed in accordance with a determination that the user's hand is not in the proper position, as described with reference to fig. 5K-5N, are optionally not displayed.
Fig. 5P shows an optional indication that the user's hand is in place. For example, the device 100 optionally displays an animated profile 570 in accordance with the device determining that the user's hand is in the proper position to initiate the measurement (e.g., at a predefined distance and/or in a predefined orientation relative to the device 100). In some embodiments, the animated profile 570 includes the same shape as the representation of the user's hand 572 (e.g., the animated profile follows the profile of the representation of the user's hand). Further, in some embodiments, in accordance with a determination that the user's hand is in place, the representation of the user's hand 572 is no longer displayed as semi-transparent (e.g., as indicated by the fill pattern shown in fig. 5O being removed in fig. 5P).
Fig. 6A-6N illustrate an exemplary user interface for measuring a portion of a body part of a user using images of the body part captured by one or more cameras of an electronic device. The one or more captured images optionally include depth information from one or more depth sensors of the electronic device to facilitate determination of measurements made by the device. For example, fig. 6A-6B illustrate a guiding user interface that explains to the user a first set of actions that the user must take in order for the user's wrist to be measured by the device 100 (e.g., aligning the point 606 with the target 608). Fig. 6A shows a user interface including instructions, optionally including text instructions 602 ("move your hand to place a point in a circle"), non-text instructions 604 (e.g., an arrow indicating which direction to move the hand/wrist), and/or animation instructions in which a representation 610 of the hand and wrist is animated to move in the direction of the arrow. In some implementations, the animation includes the display point 606 (e.g., the first visual indicator) moving into alignment with the target 608.
Fig. 6B illustrates a user interface including instructions for a user to perform a second set of actions (e.g., align point 612 with target 608) after the user has completed the first set of actions indicated by the instructions illustrated in fig. 6A. For example, after the user places point 606 into target 608, as indicated in FIG. 6A, the user is then instructed to place point 612 into target 608. In some embodiments, the targets 608 are located at the same location within the user interface shown in fig. 6A and 6B (e.g., the targets 608 are the same targets). Fig. 6B also shows a user interface including a plurality of instructions, optionally including text instructions 616 ("then rotate your hand to place the point in a circle"), non-text instructions 614 (e.g., an arrow indicating which direction to rotate the hand/wrist), and/or animation instructions in which the representation 610 of the hand and wrist is animated to move in the direction of the arrow so that the point 612 (e.g., a third visual indicator) moves into the target 608.
In some embodiments, the instructions of fig. 6A-6B are displayed to the user before measurement of the body part of the user begins. For example, the instructions shown in fig. 6A-6B provide the user with a preview of the steps that the user would need to complete in order to obtain a measurement.
Fig. 6C-6N illustrate exemplary user interfaces displayed when a user performs actions indicated by the instructions presented in fig. 6A-6B. For example, fig. 6C-6N illustrate user interfaces that are updated as a user moves the user's hand and/or wrist to aim points within a target.
Fig. 6C illustrates a user interface displaying a representation 622 of a user's hand and wrist corresponding to the user's hand in the field of view of one or more cameras of the device 100. The user interface includes a point 620 that is fixed to a portion of a representation 622 of the user's hand (e.g., point 620 is fixed to a representation of the user's palm). Thus, as the user's hand moves relative to the device 100 (e.g., relative to the field of view of one or more cameras), the displayed representation 622 is updated accordingly on the display, and the point 620 is displayed as moving with the user's hand. The user interface also includes a target 618 that is displayed at a fixed location within the user interface (e.g., the target 618 does not move when the user's hand moves relative to the device 100).
Fig. 6D shows a user interface displayed in accordance with a user moving a user's hand closer to the device. Thus, representation 624 of the user's hand appears larger than the representation shown in fig. 6C. The size of the dots 620 also increases (e.g., proportionally) with the increase in the size of the representation 624. The target 618 remains unchanged in position and size (e.g., relative to the user interface shown in fig. 6C).
Fig. 6E illustrates a user interface displayed according to a user moving a hand farther from the device (e.g., relative to fig. 6D). Thus, the representation 626 of the user's hand and wrist appears smaller in the user interface, and the size of the dots 620 is updated according to the current display size of the representation 626 of the user's hand. The point 620 also remains in the same relative position with respect to the representation of the user's hand (e.g., the point 620 appears to be attached or fixed to the palm center of the representation of the user's hand). Thus, as the user's hand moves away from the device (e.g., changes the distance between the device and the hand; up or down along an axis perpendicular to the display of the device 100) and/or as the user's hand moves in any direction (e.g., left, right, up or down along an axis parallel to the display of the device 100), the size and position of the representation of the user's hand and wrist are updated on the display, and the size and position of the point 620 is also updated on the display.
Because the device displays the point 620 in a fixed position and size relative to a portion of the representation of the user's hand, the point 620 may indicate to the user how the user needs to move his hand in order to be in place (e.g., such that the point 620 is aligned within the target 618). For example, the size of the dot 620 indicates to the user whether the user's hand is too close or too far (e.g., whether the dot is too large or too small to fit/align within the target), and the position of the dot 620 indicates to the user the direction in which the user needs to move his hand relative to the device.
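Alignment of the point with the fixed target could be evaluated from the point's on-screen position and its on-screen size, which scales with the hand's distance from the cameras. The following Swift sketch is a hypothetical illustration; the reference distance, radii, and tolerances are assumed values.

```swift
import CoreGraphics

/// Illustrative alignment test for the moving point (fixed to the palm) and the fixed target circle.
struct AlignmentTarget {
    let center: CGPoint
    let radius: CGFloat
}

/// The point's on-screen radius is assumed to scale inversely with the hand's distance
/// from the cameras, so the point appears larger as the hand moves closer.
func pointRadius(forHandDistance distance: CGFloat,
                 referenceDistance: CGFloat = 0.25,   // distance at which the point exactly fits (assumed)
                 referenceRadius: CGFloat = 40) -> CGFloat {
    return referenceRadius * referenceDistance / max(distance, 0.01)
}

func pointIsAligned(pointCenter: CGPoint, pointRadius: CGFloat,
                    target: AlignmentTarget, sizeTolerance: CGFloat = 0.1) -> Bool {
    let dx = pointCenter.x - target.center.x
    let dy = pointCenter.y - target.center.y
    let centered = (dx * dx + dy * dy).squareRoot() <= target.radius * 0.25   // roughly centered
    let sized = abs(pointRadius - target.radius) <= target.radius * sizeTolerance
    return centered && sized   // too large/small (hand too close/far) or off-center fails the test
}
```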
Fig. 6F shows a representation 628 of a user's hand in position such that the point 620 is aligned (e.g., centered) within the target 618.
According to some embodiments, as shown in fig. 6G, the device displays a timer 632 in response to the point 620 being properly aligned with the target 618. In some implementations, the display of the point 620 by the device is animated to transition to a timer 632. In some implementations, the target 618 optionally continues to be displayed by the device. In some implementations, in response to the user aligning the point 620 with the target 618, an optional text instruction 630 ("stay stationary 0:07") is displayed to the user. In some implementations, the text instructions 630 include an amount of time (e.g., "0:07") that the user must maintain the position of their hand relative to the device. In some implementations, the amount of time (e.g., per second) is updated to display a countdown to the user. In some embodiments, the device 100 captures one or more images of the user's hand/wrist while the user maintains the position of his hand. In some embodiments, the amount of time displayed is determined by the device from the amount of time required by the device to capture one or more images of the user's hand in place. For example, the device 100 uses the captured one or more images to determine a measurement of the user's wrist. As described elsewhere, the one or more captured images optionally include depth information from one or more depth sensors to facilitate determination of measurements made by the device.
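The "stay stationary" countdown could be modeled as a hold timer that counts down while the point remains aligned with the target and resets if alignment is lost. The Swift sketch below is illustrative; the 7-second duration matches the example text shown in the figure, and the reset-on-misalignment behavior is an assumption.

```swift
import Foundation

/// Illustrative hold-still countdown used while the device captures its images.
struct CaptureCountdown {
    let requiredHold: TimeInterval = 7          // example duration shown in fig. 6G
    private(set) var remaining: TimeInterval = 7

    /// Call once per frame with the time since the last frame and the current alignment state.
    /// Returns true once the hold is complete (i.e., the "success" state can be shown).
    mutating func update(deltaTime: TimeInterval, isAligned: Bool) -> Bool {
        if isAligned {
            remaining = max(remaining - deltaTime, 0)
        } else {
            remaining = requiredHold            // assumed: losing alignment restarts the hold
        }
        return remaining == 0
    }

    /// Label in the style of the instruction text, e.g. "Stay still 0:07".
    var label: String {
        String(format: "Stay still 0:%02d", Int(remaining.rounded(.up)))
    }
}
```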
According to some embodiments, after the amount of time has expired (e.g., the timer becomes "0"), a success message is displayed, as shown in fig. 6H. In some embodiments, the success message includes one or more of the following: text indication 636 ("success"), a non-text indication (e.g., check mark 638), and/or a screen flash (e.g., or other animation) to indicate that the amount of time has expired. In some embodiments, the representation 640 of the user's hand and/or wrist continues to be displayed simultaneously with the success message.
Fig. 6I illustrates an exemplary user interface displayed to instruct a user to position the user's hand into a second proper position (e.g., a position different from the proper position described with reference to fig. 6C-6H). In some embodiments, the representation 646 of the user's hand begins at the location shown in FIG. 6I because the user's hand was already at that location when the first "success" message in FIG. 6H was obtained. Fig. 6I optionally includes text instructions 642 that indicate to the user how to move the user's hand to achieve the second proper position ("rotate your hand to place the point in a circle").
In some embodiments, point 644 appears at a fixed position relative to a portion of the representation of the user's hand that is different from the portion of the representation of the user's hand (e.g., the palm of the representation of the user's hand) to which point 620 is fixed. For example, FIG. 6I shows the point 644 fixed to an area adjacent to the side of the user's hand. For example, the point 644 is fixed to a location that is not directly on (e.g., overlapping) the user's hand; rather, there is space between the fixed location and the user's hand. Further, in this example, the fixed position of the point 644 relative to a portion of the representation of the user's hand is a fixed position in three-dimensional space, and the point 644 has an orientation such that the rear surface of the point 644 faces the outer edge of the user's hand and is perpendicular to the front surface of the user's palm. In some embodiments, a side view of the point 644 is displayed in the user interface shown in fig. 6I (e.g., the point 644 appears elliptical, indicating that this is a side view of a circular point), indicating that the point needs to be moved (e.g., rotated) with the user's hand in order to view the point 644 from a front perspective.
The target 618 is displayed in the user interfaces shown in fig. 6I to 6L. As the user moves the user's hand (e.g., and the representation of the user's hand is updated in the user interface shown in fig. 6I-6L), the target 618 is maintained in the same position in the user interface. In some implementations, the target 618 as shown in fig. 6I-6L is the same target as that shown in fig. 6C-6F (e.g., the target is in the same location to guide the user to both the first and second appropriate locations).
FIG. 6J illustrates an exemplary user interface displayed when a user moves (e.g., rotates) the user's hand relative to the device. For example, the point 644 is maintained by the device at the same relative position with respect to the representation of the user's hand. For example, as the user turns the user's hand, the point 644 appears to float in front of the representation 648 of the user's hand. In some embodiments, the shape of the point 644 is updated by the device to indicate the angle, in three-dimensional space, at which the point is currently viewed as displayed on the user interface of the device. For example, in fig. 6J, the point 644 continues to be displayed at a fixed position relative to the representation 648 of the user's hand (e.g., a fixed position defined by the gap between the point and the representation, at a particular distance below the little finger shown in the representation of the user's hand).
Fig. 6K shows the user interface with the point 644 now appearing as a circle, indicating that the user has rotated the user's hand by the appropriate amount. The points 644 are maintained at the same relative position (e.g., at a particular distance below the little finger) with respect to the representation 650 of the user's hand.
FIG. 6L illustrates a user interface displayed in accordance with the device determining that the user has moved the user's hand into a second appropriate position. For example, representation 652 of the user's hand has been rotated approximately 90 degrees relative to the position of representation 640 of the user's hand shown in FIG. 6H. The point 644 is aligned within the target 618, indicating that a second proper position has been reached.
According to some embodiments, the user interface shown in fig. 6M is displayed in response to the user reaching the second proper position. For example, in some embodiments, a timer 656 is displayed in response to the point 644 being properly aligned with the target 618. In some embodiments, timer 656 is a different timer than timer 632 shown in fig. 6G. In some implementations, the point 644 (e.g., and/or the target 618) is animated to transition to the timer 656. In some embodiments, the target 618 optionally continues to be displayed. In some implementations, in response to the user aligning the point 644 with the target 618, an optional text instruction 654 ("stay stationary 0:09") is displayed to the user. In some implementations, the text instruction 654 includes an amount of time (e.g., "0:09") for which the user must maintain the position of their hand relative to the device. In some implementations, the amount of time is updated (e.g., every second) to display a countdown to the user. In some embodiments, the device 100 captures a second set of one or more images of the user's hand/wrist while the user maintains the position of the hand shown in fig. 6M. For example, the device 100 uses the one or more images captured as described with reference to fig. 6G and the captured second set of one or more images to determine (e.g., calculate) a measurement of the user's wrist.
According to some embodiments, after the amount of time has expired (e.g., the timer becomes "0"), a success message is displayed, as shown in fig. 6N. In some embodiments, the success message includes one or more of the following: text indication 660 ("success"), a non-text indication (e.g., check mark 662), and/or a screen flash (e.g., or other animation) to indicate that the amount of time has expired. In some embodiments, the representation 664 of the user's hand and/or wrist continues to be displayed concurrently with the success message.
Fig. 7A-7E illustrate an exemplary user interface for simultaneously displaying a virtual bracelet 704 and a representation 702 of a user's hand and/or wrist. In some embodiments, virtual bracelet 704 includes one or more indicators that change visual appearance as the device scans the user's hand and/or wrist. In some embodiments, fig. 7A is an instructional user interface that uses an animation, in which the appearance of the indicators is updated as the animated hand rotates, to demonstrate to the user how to use the virtual bracelet as a visual guide for obtaining a scan of the user's hand.
For example, fig. 7A shows a virtual bracelet 704 having multiple indicators (e.g., oval openings) that change color when a representation 702 of a user's hand is rotated. In some embodiments, the change in visual appearance of the indicators includes changing a color of the one or more indicators, changing a brightness of the one or more indicators (e.g., to cause the one or more indicators to illuminate), and/or changing a level of transparency of the one or more indicators. Fig. 7A shows an example of how the indicators are updated, or will be updated, when the user's hand is rotated. For example, in the animated instructional user interface of fig. 7A, as the representation of the user's hand rotates, the virtual bracelet appears to rotate with the representation of the user's hand such that the visual indicators are filled from left to right as the representation of the user's hand rotates. It should be noted that, because fig. 7A is an instructional user interface, the animation can be displayed regardless of whether the user's actual hand is moving or stationary. Additionally, it should be understood that examples of "filling" one or more indicators (e.g., as described below with reference to fig. 7J-7M) may correspond to any method of changing the visual appearance of one or more indicators (e.g., changing the brightness, changing the color, and/or changing the transparency of one or more indicators).
Fig. 7B illustrates a physical environment 531 that includes the device 100 positioned flat on a surface (e.g., a table) and a user's hand 532 positioned above the device 100 in the field of view of one or more cameras of the device 100. The user interface displayed by the device 100 is shown on the left side of fig. 7B. For example, the user interface includes a display of a representation 706 of the user's hand 532 (e.g., a representation as captured by the field of view of one or more cameras of the device 100) and a display of a virtual bracelet 708.
In some embodiments, as shown in fig. 7B, when the user's hand 532 is positioned too far from the device 100, the virtual bracelet 708 does not update any indicators of the virtual bracelet 708, which indicates that the user's hand has not been scanned at that location (e.g., because the user's hand is not in place to scan). In some embodiments, the determination of the distance of the user's hand or other body part from the device 100 by the device is based at least in part on depth information obtained from one or more depth sensors of the device 100 or images captured by one or more cameras of the device or both.
In some embodiments, the user's hand 532 moves to the left (e.g., relative to an axis of one or more cameras that are substantially parallel to the device), and in response to the user's hand moving to the left, the representation 710 of the user's hand shown in fig. 7C is displayed as moving to the left (e.g., relative to the position of the representation shown in fig. 7B). The virtual bracelet 708 appears in the same relative position as compared to the representation 710 of the user's hand, such that when the representation of the user's hand moves, the virtual bracelet moves with the representation (e.g., proportionally).
Fig. 7D illustrates an exemplary user interface displayed when the user hand 532 has moved away from the device 100 (e.g., the distance between the device (e.g., one or more cameras of the device) and the user hand increases). As the user's hand moves farther from the device 100, the representation 714 of the user's hand appears smaller. The displayed virtual bracelet 708 also appears smaller (e.g., relative to the representation and virtual bracelet shown in fig. 7C) according to (e.g., proportional to) the smaller representation 714 of the user's hand.
Fig. 7E illustrates an exemplary user interface displayed when the user hand 532 has moved closer to the device 100 (e.g., the distance between the device (e.g., one or more cameras of the device) and the user hand decreases). As the user's hand moves closer to the device 100, the representation 718 of the user's hand appears larger in the user interface. The displayed virtual bracelet 720 also appears larger (e.g., relative to the representations and virtual bracelets shown in fig. 7C and 7D) according to (e.g., proportional to) the larger representation 718 of the user's hand.
Fig. 7F-7G illustrate an optional user interface in which user input 726 moves the position of virtual bracelet 724 relative to representation 722 of the user's hand. In some embodiments, the virtual bracelet 724 is positioned at a location, relative to the representation of the user's hand, that is to be measured by the device 100. For example, if the virtual bracelet is moved farther away from the user's hand (e.g., up the user's forearm), the device will measure the size of the forearm corresponding to the position of the virtual bracelet. In some embodiments, instructions are provided indicating that the user should move (e.g., using drag input 726) the virtual bracelet to the location at which the user wears (e.g., or plans to wear) the accessory to be measured. For example, as described above, the user's wrist is measured by the device to determine the size of the wristband. Thus, the user is optionally instructed by the device to move the virtual bracelet to the portion (e.g., wrist) of the user's forearm on which the user wears the wristband.
Fig. 7H-7I illustrate optional user interfaces displayed in the event that the user has rotated the hand (e.g., wrist) too quickly. For example, the device determines a rate at which the user can rotate his or her hand in order for the device to obtain accurate measurements (e.g., by scanning the user's wrist and/or hand). If the user rotates the user's hand too quickly (e.g., faster than the rate at which the device determines it can scan the user's hand/wrist), the device forgoes filling additional indicators of the virtual bracelet 730 (e.g., even though the user has moved the user's hand, only the same two indicators of the virtual bracelet 730 appear filled in fig. 7H-7I, as indicated by the change from the position of representation 728 of the user's hand to the position of representation 734 of the user's hand). The device optionally displays an error message, such as a text indication 732 ("you move too fast").
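The "moving too fast" condition could be detected by comparing the wrist's angular rate between frames with the maximum rate at which the scan can keep up. The sketch below is illustrative; the threshold value and the function names are assumptions.

```swift
import Foundation

/// Illustrative rotation-rate gate for the wrist scan.
let maxScanRateRadiansPerSecond = 1.0   // assumed threshold

enum ScanFrameResult { case accepted, tooFast }

func evaluateScanFrame(previousAngle: Double, currentAngle: Double,
                       deltaTime: TimeInterval) -> ScanFrameResult {
    guard deltaTime > 0 else { return .tooFast }
    let rate = abs(currentAngle - previousAngle) / deltaTime
    return rate <= maxScanRateRadiansPerSecond ? .accepted : .tooFast
}

// A `.tooFast` result would leave the bracelet's indicators unchanged and surface
// a message such as "you move too fast" (figs. 7H-7I).
```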
Fig. 7J-7M illustrate optional user interfaces that are displayed when the user rotates the user's hand, and the device 100 updates the indicators of the virtual bracelet based on successful movement (e.g., scanning) of the user's hand. For example, fig. 7J shows that two indicators of virtual bracelet 738 are filled. In some embodiments, the device determines that the user has positioned the user's hand and/or wrist at the appropriate distance from the device 100, and the centermost indicator of the virtual bracelet 738 is displayed by the device as filled. In some embodiments, if the user's hand is not in the proper position (e.g., orientation and/or distance) relative to the device 100, or fails to meet other preconditions for scanning the user's hand, the device 100 provides an indication of an error condition, as described with reference to fig. 5K-5N. In some implementations, the indication of the error condition includes displaying the representation 736 of the user's hand as semi-transparent until the user's hand is in the proper position. In some embodiments, the indication of the error condition includes forgoing filling any indicators of virtual bracelet 738 (e.g., as shown in fig. 7B-7E).
In some implementations, as the user rotates (e.g., turns) the user's hand and/or wrist (e.g., within the physical environment), the representation 742 of the user's hand is updated according to the current position of the user's hand. For example, representation 742 of the user's hand shows that the user has rotated the user's hand as compared to the position of representation 736 of the user's hand shown in FIG. 7J. In some embodiments, the virtual bracelet rotates with (e.g., is fixed to) the representation of the user's hand as the representation of the user's hand rotates. In some embodiments, the filled indicators of the virtual bracelet remain filled as the device continues to scan the user's hand/wrist. For example, as the user's hand rotates, additional indicators appear on the display to indicate to the user that the user must continue to rotate the hand in that direction (e.g., until all of the indicators of the displayed virtual bracelet are filled, as shown in fig. 7M). For example, in fig. 7J, the indicators displayed near the center (e.g., palm) of the representation 736 of the user's hand are filled, and as the user's hand rotates to the position in fig. 7K, the indicators displayed near the center (e.g., palm) of the representation 740 of the user's hand remain filled, additional indicators (e.g., the indicators displayed below the representation of the user's little finger) are filled, while the remaining indicators on the far left side (e.g., the indicators displayed near the "back" of the representation of the user's hand) are not filled.
Fig. 7L illustrates additional filled indicators of the displayed virtual bracelet 746, in addition to one or more of the previously filled indicators shown in fig. 7K. As the user's hand rotates in the field of view of the one or more cameras, the representation 744 of the user's hand is updated. For example, FIG. 7L shows that the user has continued to rotate the user's hand, as indicated by the displayed position of the representation 744 of the user's hand. As the user continues to rotate the user's hand, additional indicators of the virtual bracelet 746 are filled.
Fig. 7M shows that all displayed indicators of the virtual bracelet 750 are filled in, indicating that the scanning of the user's hand is complete (e.g., no additional indicators of the virtual bracelet are to be filled in, so the user does not need to further rotate the user's hand to fill in the displayed indicators). The representation 748 of the user's hand shown in fig. 7M shows that the user's hand has rotated from having the front of the user's hand (e.g., palm) in the field of view of the one or more cameras (as shown in fig. 7J) to the back of the user's hand in the field of view of the one or more cameras.
In some implementations, as the user rotates the user's hand, the indicators of the virtual bracelet are continuously (e.g., and/or gradually) filled such that only indicators adjacent to the already filled indicators may be filled. In some embodiments, one or more indicators of the virtual bracelet may not be filled. In some embodiments, the indicators of the bracelet are populated in a sequence and/or direction to indicate to the user the direction of rotating the user's hand.
For example, as shown in fig. 7J-7M, additional portions of the virtual bracelet are filled according to the rotation of the user's hand. In some embodiments, the indicator of the virtual bracelet fills as the user's hand rotates, in accordance with a determination by the device that the user's hand is in place and/or the user's hand is moving at a rate at which the device can scan the user's hand and/or wrist.
In some implementations, if the user rotates the user's hand in a direction opposite to the predefined direction (e.g., the direction shown in fig. 7J-7M), the filled one or more indicators are updated to remove the fill. For example, after a user rotates his hand clockwise and one or more indicators fill, if the user rotates his hand at least partially counterclockwise, one or more of the filled indicators are unfilled.
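The fill state of the virtual bracelet could be derived from the rotation accumulated in the instructed direction, so that indicators fill contiguously as the hand rotates forward and unfill if the hand rotates back. The Swift sketch below is a hypothetical illustration; the indicator count and the assumption that a half turn completes the scan are not taken from the embodiments.

```swift
import Foundation

/// Illustrative mapping from accumulated wrist rotation to filled bracelet indicators.
struct VirtualBracelet {
    let indicatorCount = 12                // number of visible indicators (assumed)
    let totalRotation = Double.pi          // palm-up to palm-down (assumed)
    private(set) var progress: Double = 0  // rotation accumulated in the instructed direction

    /// Positive delta = rotation in the instructed direction; negative delta (rotating back)
    /// removes previously earned progress, which unfills indicators.
    mutating func addRotation(_ delta: Double) {
        progress = min(max(progress + delta, 0), totalRotation)
    }

    /// Indicators fill contiguously from the starting side as progress increases.
    var filledIndicators: Int {
        Int((progress / totalRotation * Double(indicatorCount)).rounded(.down))
    }

    var scanComplete: Bool { filledIndicators == indicatorCount }   // all indicators filled (fig. 7M)
}
```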
Fig. 7N-7P illustrate exemplary user interfaces for displaying measurements to a user. In some embodiments, the measurements are automatically displayed to the user after the device has successfully scanned a portion of the user's body, e.g., after a success message as shown in fig. 6N and/or after all indicators of the virtual bracelet 750 have been filled (as shown in fig. 7M). Fig. 7N shows one or more size indicators as determined by the device from the measured portion of the user's body. For example, the user interface includes a text indication 752 ("Your current watch band size is: Size 5"). In some embodiments, the sizing user interface includes an image 756 of the user's hand. In some embodiments, the image 756 of the user's hand is generated from images of the user's hand captured during and/or after scanning the user's hand (e.g., when the user's hand is in the position shown in fig. 7M, the device 100 captures an image (or screenshot) of the user's hand). Thus, when the user interface of fig. 7N is displayed, the user does not need to maintain the user's hand within the field of view of the one or more cameras.
In some embodiments, the user interface includes a watch and/or wristband 754 displayed at a location that at least partially overlaps an image 756 of the user's hand. For example, as described with reference to fig. 6C-6N and/or fig. 7J-7M, a wristband 754 is generated (e.g., as a digital image) and displayed at a measured position of the user's wrist. In some implementations, the user interface includes buttons 758 for the user to navigate to the next user interface (e.g., by selecting buttons 758 with user input 760).
In response to selection of button 758, the user interface in FIG. 7O is displayed. In some embodiments, as shown in fig. 7O, a text instruction 762 is provided ("you can drag the watch to a different portion of your wrist") to indicate that the user is enabled to select wristband 754 and move the wristband to a different portion of the image 756 of the user's wrist. For example, a user who wears a wristband further away from the wrist and further up the forearm can move (e.g., using a drag and/or swipe gesture) the wristband 754 to a different location on the image of the user's wrist. In response to user input 764, the device updates the size indicator according to the new portion of the wrist/forearm over which the wristband 754 is displayed, as shown in fig. 7P. For example, FIG. 7P shows wristband 754 having moved down over image 756, and the display of the size indicator has been updated from size 5 to size 6 (e.g., displayed as text indicator 768, "your current wristband size is: size 6"). In some embodiments, the user is enabled to continue to adjust the placement of the wristband 754 relative to the image 756 of the user's hand (e.g., until the user is satisfied with the placement of the wristband). As the user changes the placement of wristband 754, the size indicator is also updated to provide a measurement of the portion of the wrist on which wristband 754 is placed. In some implementations, upon completion of moving wristband 754, the user is enabled to select a "complete" button 766 to display a next user interface.
Fig. 7Q-7S illustrate an exemplary user interface similar to the user interface illustrated in fig. 7N-7P, with the displayed wristband 754 replaced by a displayed tape 770. For example, fig. 7R shows a text indication 772 ("you can drag the tape measure to a different portion of your wrist") and the device receives user input 774 (e.g., drag gesture, swipe gesture) that moves the tape measure 770 in a downward direction (e.g., relative to the orientation of the device as shown in fig. 7R). Fig. 7S shows that the size indicator is updated to show that the size of the user' S wrist corresponding to the updated placement of the tape measure 770 is now "size 6" rather than size 5. In some embodiments, upon completion of moving the tape 770, the user is enabled to select the "complete" button 778 to display a next user interface, such as the user interface shown in FIG. 7T.
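Updating the displayed size as the band or tape measure is dragged could amount to looking up the wrist circumference measured at the chosen location in a size table. The sketch below is illustrative; the breakpoints are placeholder values, not actual watch-band sizing data.

```swift
import Foundation

/// Illustrative conversion from a measured circumference (at wherever the user has
/// dragged the virtual band or tape measure) to a discrete band size.
struct BandSizeBreakpoint {
    let maxCircumferenceMM: Double
    let size: Int
}

let bandSizeBreakpoints = [                          // placeholder breakpoints
    BandSizeBreakpoint(maxCircumferenceMM: 140, size: 3),
    BandSizeBreakpoint(maxCircumferenceMM: 150, size: 4),
    BandSizeBreakpoint(maxCircumferenceMM: 160, size: 5),
    BandSizeBreakpoint(maxCircumferenceMM: 170, size: 6),
    BandSizeBreakpoint(maxCircumferenceMM: 185, size: 7),
]

func bandSize(forCircumferenceMM circumference: Double) -> Int {
    for breakpoint in bandSizeBreakpoints where circumference <= breakpoint.maxCircumferenceMM {
        return breakpoint.size
    }
    return bandSizeBreakpoints.last?.size ?? 0       // clamp to the largest size in the table
}

// Dragging the band toward a thicker section of the forearm (figs. 7O-7P) yields a larger
// circumference at the new location, so the displayed size steps up (e.g., 5 -> 6).
```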
FIG. 7T illustrates an exemplary user interface for a user to save the user's size. For example, fig. 7T includes a text prompt 780 ("Save size as virtual card?"). For example, user input 786 selects button 782 "yes," which causes the device to initiate a process for saving the user's size. Optionally, FIG. 7T includes an image 756 of the user's hand with tape measure 770 or wristband 754 (not shown).
In response to the user selecting "yes" button 782 in fig. 7T, in some embodiments, QR code 804 (e.g., or other computer-readable or machine-readable code) is displayed, as shown in fig. 8A. Optionally, QR code 804 is generated by device 100 in response to user selection of button 782; or alternatively, the QR code 804 is generated or stored prior to the user selecting the button 782 and retrieved for display in response to the user selecting the button 782. In some embodiments, QR code 804 includes sizing information to be saved and/or additional information regarding an accessory (e.g., a watchband) to which the sizing information corresponds.
As shown in fig. 8A, the user interface displayed by device 100 optionally includes text instructions 802 ("scan your code to save size on your device") for loading the sizing information onto another device (e.g., a device different from the device that performed the scan/measurement). For example, in some cases or embodiments, the device 100 is not the user's own device, and the user may scan the QR code 804 using another device (e.g., user device 800, fig. 8B) to transfer (e.g., and save) the user's sizing information from the device that performed the scan to the user's own device. For example, fig. 8B illustrates user device 800 (e.g., associated with the user whose sizing information has been obtained) scanning the code displayed on device 100. In some embodiments, user device 800 is configured to scan a QR code using one or more cameras of device 800. For example, device 800 in fig. 8B illustrates a camera application displaying a field of view of one or more cameras of device 800. The field of view of the one or more cameras includes a view of the device 100 (which is displaying the QR code 804).
In some embodiments, in response to capturing (e.g., scanning) the QR code displayed on device 100, device 800 generates and displays a mini-application 806 (e.g., an application clip), as shown in fig. 8C (e.g., displaying a user interface of mini-application 806). In some implementations, the mini-application 806 is not an application downloaded (e.g., from an application store), but rather a temporarily stored (e.g., cached) application that stores the QR code 804. In some implementations, the mini-application 806 displays a virtual card that includes information about the user's size and/or information about the accessory. For example, the mini-application 806 optionally displays additional information about the user's size and/or information about the accessory (e.g., displayed text such as "watch band style A" and/or "your information: size 6"). In some implementations, the mini-application 806 displays the QR code 804. In some embodiments, QR code 804 includes information regarding the user's size and/or accessories.
In some implementations, the mini-application 806 provides an option for saving information about the user's size and/or information about the accessory by selecting button 808 ("add to wallet"). In some implementations, in response to user selection 810 of button 808, the virtual card (e.g., as displayed in mini-application 806) is stored in a virtual wallet on user device 800 (e.g., as shown in fig. 8F).
In some embodiments, device 100 is a user device (e.g., a scan is performed on the user device and thus the user does not need to scan the QR code from device 100 to preserve the user's sizing information), and in response to the user selecting button 782 "yes" in fig. 7T, device 100 displays the user interface shown in fig. 8D (e.g., rather than prompting the user to scan the QR code onto the user device because the user device is used to perform the scan). For example, the user interface optionally includes a display of a virtual card 1006. The virtual card 1006 optionally includes information about the user's size and/or accessories (e.g., shown as text "watch band style a" and "your information: size 6"). In some implementations, the virtual card 1006 includes a QR code 1004 that also stores information about the user's size and/or accessories. In some embodiments, the virtual card 1006 also includes a user-selectable affordance (e.g., including text such as "add to wallet" button 1008) that provides the user with the option to store the virtual card in the virtual wallet of the device 100. For example, in response to user input 1010, the device stores virtual card 1006 in a virtual wallet stored on the device. In some implementations, instead of displaying the virtual card 1006 shown in fig. 8D, the device 100 displays the QR code 1004 in a mini-application (e.g., an application clip), as discussed above with reference to fig. 8C.
In some embodiments, after sizing information (e.g., including QR code 804 or QR code 1004 as stored in virtual card 1006) is stored to a virtual wallet of a user device (e.g., device 800 as described with reference to fig. 8A-8C, or device 100 as described with reference to fig. 8D), a prompt 812 (see fig. 8E) is provided to the user to allow the user to open (e.g., view) the sizing information of accessories in the virtual wallet on the device. For example, when the user is within a predefined geographic area (e.g., near a store selling accessories), a prompt 812 ("open watch size information in wallet") is generated. In some embodiments, when a prompt is generated, prompt 812 is displayed on a current user interface displayed on the device. For example, in fig. 8E, a prompt 812 is generated and displayed over a lock screen displayed on the user device.
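Surfacing the prompt when the user is near a store could be implemented with circular-region (geofence) monitoring. The sketch below is illustrative; the store coordinate, radius, and class name are placeholders, and authorization handling is omitted.

```swift
import CoreLocation

/// Illustrative region monitoring for the "open watch size information in wallet" prompt.
final class StoreProximityMonitor: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    var onEnterStoreRegion: (() -> Void)?

    func start() {
        locationManager.delegate = self
        // Placeholder store coordinate and radius; authorization requests omitted for brevity.
        let storeCenter = CLLocationCoordinate2D(latitude: 37.33, longitude: -122.01)
        let region = CLCircularRegion(center: storeCenter, radius: 150, identifier: "accessory-store")
        region.notifyOnEntry = true
        locationManager.startMonitoring(for: region)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        onEnterStoreRegion?()   // e.g. post the prompt shown in fig. 8E
    }
}
```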
In some embodiments, sizing information (e.g., stored as a virtual card) is displayed within a virtual wallet of a user on a user device (e.g., device 100 or device 800) in response to the user selecting prompt 812 via user input 814, as shown in fig. 8F. For example, the sizing information includes information about the accessory ("watch strap style a") and/or information about the user's size ("your information: size 6"). In some embodiments, the sizing information is stored as a QR code. In some embodiments, the QR code is displayed in a virtual wallet, as shown in fig. 8F.
Fig. 9A-9C are flowcharts illustrating a method 900 of providing visual feedback to a user to indicate a correct position for measurement, according to some embodiments. The method 900 is performed at a computer system (e.g., the portable multifunction device 100, the device 300, or the device 800) that includes (and/or is in communication with) a display generating component (e.g., a display, optionally a touch-sensitive display, a projector, a heads-up display, etc.), one or more cameras (and optionally one or more depth sensors), one or more input devices, optionally one or more gesture sensors, optionally one or more sensors for detecting intensities of contacts with a touch-sensitive surface, and optionally one or more tactile output generators. All references to images captured by one or more cameras of a computer system should be understood to optionally include depth information from one or more depth sensors (e.g., one or more time-of-flight sensors, structured light sensors (also referred to as structured light scanners), etc.) of the computer system to facilitate measurement of objects in the field of view of the one or more cameras. Some operations in method 900 are optionally combined and/or the order of some operations is optionally changed.
As described below, the method 900 provides an intuitive way of indicating to a user how to position a portion of the user's body for measurement by automatically detecting the current position of the portion of the user's body and showing a guide indicating the correct position. Changing the visual characteristics of the visual cues according to the current position of the user's body provides visual feedback to the user indicating whether the user's body has reached the proper position for measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
The computer system displays (902) a visual cue in a first area of a first user interface to move a body part into view of one or more cameras. For example, as described with reference to fig. 5F, in some embodiments, the computer system displays visual cues such as text instructions 526 and/or representations of body parts (e.g., animation 528). In some embodiments, the visual cues comprise contours of the body part, or a stylized representation (e.g., animation and/or images) of the body part. For example, FIG. 5F shows an animated representation of the hand and wrist.
In displaying a visual cue (904) that moves the body part into a field of view of the one or more cameras, the computer system uses the one or more cameras to detect (906) a portion of the user's body in the field of view of the one or more cameras and corresponding to the body part. For example, fig. 5G-5J illustrate a physical environment 531 in which a user's hand 532 is positioned within the field of view of one or more cameras of the device 100.
In response to detecting the portion of the user's body, the computer system displays (908) a representation of the portion of the user's body. For example, as described with reference to fig. 5G-5J, the user's hand 532 is placed within the field of view of one or more cameras of the device 100, and the device 100 displays a representation of the user's hand (e.g., representation 548, fig. 5J).
In accordance with the computer system determining that the portion of the user's body in the field of view of the one or more cameras meets the first criteria, the computer system displays (910) a representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency via the display device (e.g., displays the representation of the portion of the user's body without any fade and/or translucency). In some embodiments, the first criteria includes a position criteria of a portion of the user's body relative to a pose, orientation, rotational speed, and/or distance of the computer system. For example, the first criterion includes a requirement that a distance between a portion of the user's body and the one or more cameras is above a first threshold distance. In some embodiments, the first criterion includes a requirement that a distance between the portion of the user's body and the one or more cameras is below a second threshold distance. In some implementations, the first criterion includes a requirement that a distance between the portion of the user's body and the one or more cameras be above a first threshold distance and below a second threshold distance (e.g., between the threshold distances). In some implementations, the first criterion includes a requirement that the rotation rate of the portion of the user's body be less than a threshold rotation rate (e.g., the first criterion is not met when the user moves the arm too fast). In some embodiments, the first criteria includes a requirement that a portion of the user's body be aligned with the visual cue (e.g., relative to position, angle, etc.). In some embodiments, a representation of a portion of the user's body is displayed, while other objects within the field of view of one or more cameras are not displayed (e.g., as described with reference to fig. 5J). For example, only a representation of a portion of the user's body is displayed and superimposed over the background of the user interface.
In accordance with the computer system determining that a portion of the user's body in the field of view of the one or more cameras fails to meet the first criteria (e.g., relative to a representation of the first body portion, or relative to a location of the device, a distance from the device, or a speed of movement), the computer system displays (912) the representation of the portion of the user's body as having a second transparency indicating that the first criteria has not been met. In some embodiments, the second transparency is greater than the first transparency. For example, as described with reference to fig. 5K-5M, in accordance with the device determining that the user's hand 532 is not in position relative to the device 100, the device 100 displays the representation of the user's hand as at least partially translucent (e.g., transparent) in fig. 5K-5M (e.g., as indicated by the fill pattern of the representation of the user's hand in fig. 5K-5M). In accordance with a determination that the user's hand 532 is in position relative to the device (e.g., as shown in fig. 5J), the representation 548 of the user's hand is not displayed as semi-transparent.
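To make the structure of the first criteria concrete, the following Swift sketch expresses the requirements described above (distance between two thresholds, rotation rate below a threshold, alignment with the visual cue) as a single predicate. It is an illustrative sketch only; the type names and threshold values are assumptions and are not taken from this description.

```swift
import Foundation

// Illustrative sketch only; the type names and threshold values are assumptions
// and are not taken from this description.
struct BodyPartObservation {
    var distanceToCameras: Double    // meters, e.g., derived from depth data
    var rotationRate: Double         // radians per second
    var alignmentError: Double       // normalized offset from the visual cue
}

struct FirstCriteria {
    var minDistance = 0.15           // first threshold distance (assumed)
    var maxDistance = 0.45           // second threshold distance (assumed)
    var maxRotationRate = 1.0        // threshold rotation rate (assumed)
    var maxAlignmentError = 0.2      // alignment tolerance (assumed)

    // True when the detected portion of the user's body is between the two
    // threshold distances, is not rotating too fast, and is aligned with the cue.
    func isMet(by observation: BodyPartObservation) -> Bool {
        return observation.distanceToCameras > minDistance
            && observation.distanceToCameras < maxDistance
            && observation.rotationRate < maxRotationRate
            && observation.alignmentError < maxAlignmentError
    }
}
```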
In some embodiments, upon displaying a visual cue to move the body part into the field of view of the one or more cameras, the computer system displays (914) an animated transition in which at least a portion of the visual cue is moved to a position proximate to a representation of the portion of the user's body. In some embodiments, the animated transition is initiated in response to detecting a portion of the user's body. For example, as shown in fig. 5P, in some embodiments, the visual cues include contours 570 of portions of the user's body, and the contours 570 are moved to form contours around representations 572 of the user's hands in response to the user's hands being in the field of view of the one or more cameras.
Displaying multiple user interface elements (including the visual cue and the representation of the portion of the user's body) provides an intuitive way for the user to determine that the portion of the user's body is in the correct position to be measured by the device, without requiring additional input from the user to check whether the portion of the user's body is positioned to be measured by the device. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, at least a portion of the visual cue includes (916) a profile of the representation that is aligned to (e.g., surrounds) a portion of the user's body. For example, as shown in FIG. 5P, the outline 570 is aligned to a representation 572 of the user's hand.
Aligning the visual cue to the representation of the portion of the user's body provides visual feedback to the user indicating that the portion of the user's body has been automatically detected within the field of view of the one or more cameras, without requiring the user to provide additional input to begin the measurement process. Providing improved visual feedback to the user when a set of conditions has been met, and reducing the number and/or degree of inputs required to perform an operation by performing the operation (e.g., automatically), enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, at least a portion of the visual cue is displayed (918) in a shape that matches a shape of a representation of a portion of the user's body in a field of view of the one or more cameras. In some embodiments, the visual cues (e.g., contours) outline portions of the user's body (e.g., as shown in fig. 5P).
Matching the shape of the visual cue to the shape of the portion of the user's body makes it easier for the user to see that the portion of the user's body has been detected by one or more cameras and/or that the portion of the user's body is in place to begin the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the visual cue comprises (920) a representation of a hand or another body part to be measured. For example, as shown in fig. 5F (e.g., animation 528 of a hand) and fig. 5P (e.g., outline 570 of a hand), the visual cues include representations of the hand. In some embodiments, the visual cues include contours of the hand, wrist, and/or forearm. In some embodiments, the representation of the body part in the visual cue is a right-hand or left-hand representation, depending on the selection received from the user (e.g., as described with reference to fig. 5E).
Providing visual cues in the shape of the hand indicates to the user that the user's hand is part of the user's body to be measured. Displaying visual cues as hands improves visual feedback to the user by making it easier for the user to determine that the user should place the user's hands into the field of view of one or more cameras. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the portion of the user's body includes (922) a hand. For example, as described with reference to fig. 5G-5N, the user's hand 532 is shown in the physical environment 531 such that the user's hand 532 is within the field of view of one or more cameras of the device 100.
Automatically detecting the user's hand positioned over the device's camera such that the device determines the size of an accessory worn on or near the user's hand would make it easier for the user to obtain sizing information without requiring the user to manually enter measurements of the user's body. Reducing the number and/or extent of inputs required to perform an operation when a set of conditions has been met (by performing the operation (e.g., automatically)) enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with the computer system determining that a portion of the user's body in the field of view of the one or more cameras fails to meet the first criteria, the computer system displays (924) text indicating that the first criteria have not been met (and optionally including an indication of what changes may be made to meet the first criteria). For example, fig. 5K-5N illustrate text prompts indicating that the first criterion has not been met (e.g., where meeting the first criterion indicates that the user's hand is in the proper position). In some embodiments, the first criteria include a criterion that the computer device is stationary. In some embodiments, the first criterion includes the computer device being positioned at a first orientation (e.g., lying flat, or at an angle substantially perpendicular to the ground), and the displayed text (displayed in accordance with a determination that the first criterion has not been met) includes instructions to lay the device flat (e.g., in the first orientation), such as shown in fig. 5M. In some embodiments, the first criterion includes a requirement that a distance between the portion of the user's body and the one or more cameras (as determined by the computer system) be above a first threshold distance (e.g., the hand needs to be at least the first threshold distance away from the one or more cameras), and the displayed text displayed in accordance with the computer system determining that the first criterion has not been met includes an indication that the portion of the user's body is too close, e.g., as shown in fig. 5L. In some embodiments, the first criterion includes a requirement that a distance between the portion of the user's body and the one or more cameras be below a second threshold distance (e.g., the hand needs to be within a second distance of the one or more cameras), and the displayed text displayed in accordance with the computer system determining that the first criterion has not been met includes an indication that the portion of the user's body is too far, e.g., as shown in fig. 5K. In some embodiments, the first criterion includes a requirement that the portion of the user's body not be obstructed by an accessory (e.g., ring, watch, bracelet), and the displayed text displayed in accordance with a determination that the first criterion has not been met includes an indication to remove one or more objects from the portion of the user's body (e.g., remove ring, remove watch, remove bracelet, etc.), for example as shown in fig. 5M. In some embodiments, such as in the examples provided above, the displayed text displayed in accordance with the computer system determining that the first criteria have not been met indicates an error condition relative to the location of the user's hand or other body part, or relative to the computer device.
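A minimal sketch of how such failure-specific text prompts could be selected is shown below. The enum cases and message strings are hypothetical and do not reproduce the actual prompts shown in figs. 5K-5N.

```swift
import Foundation

// Hypothetical sketch; the cases and message strings are assumptions and do not
// reproduce the actual prompts shown in figs. 5K-5N.
enum FirstCriteriaFailure {
    case handTooFar
    case handTooClose
    case deviceNotFlat
    case accessoryOnWrist
}

func prompt(for failure: FirstCriteriaFailure) -> String {
    switch failure {
    case .handTooFar:       return "Move your hand closer to the camera"
    case .handTooClose:     return "Move your hand farther from the camera"
    case .deviceNotFlat:    return "Lay your device flat"
    case .accessoryOnWrist: return "Remove your watch or bracelet before measuring"
    }
}
```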
Providing an error notification indicating that the user has not placed the portion of the user's body in place for measurement by the computer system makes it easier for the user to know how to adjust the portion of the user's body to be in place, without requiring the user to provide user input asking the device whether the measurement has been successful. Providing improved visual feedback to the user when a set of conditions has been met, and reducing the number and/or degree of inputs required to perform an operation by performing the operation (e.g., automatically), enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and improving battery life of the computer system by enabling the user to use the computer system more quickly and efficiently.
In some implementations, displaying the representation of the portion of the user's body with the second transparency indicating that the first criterion has not been met includes (926) visually de-emphasizing (e.g., fading) the representation of the portion of the user's body. For example, as shown in fig. 5L, in accordance with the computer system determining that the distance between the portion of the user's body and the one or more cameras is not above the first threshold distance, the device 100 visually de-emphasizes (e.g., fades) the representation 556 of the portion of the user's body. As shown in fig. 5K, in some embodiments, the device 100 visually de-emphasizes (e.g., fades) the representation 552 of the portion of the user's body in accordance with the computer system determining that the distance between the portion of the user's body and the one or more cameras is not less than the second threshold distance. In some implementations, an increase in the distance of the user's hand from the respective threshold distance causes the computer system to increase the amount of visual de-emphasis (e.g., as the user's hand moves away from the second threshold, the transparency of the representation of the user's hand increases, and as the user moves the user's hand closer to the device, the transparency of the representation of the user's hand gradually decreases (e.g., in proportion to the change in distance of the user's hand)).
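The proportional fading described above can be modeled as an opacity value that decreases as the hand moves farther outside the accepted distance range. The following Swift sketch is illustrative; the accepted range and fade width are assumed values.

```swift
import Foundation

// Illustrative sketch; the accepted distance range and fade width are assumed values.
func representationOpacity(distanceToCameras: Double,
                           acceptedRange: ClosedRange<Double> = 0.15...0.45,
                           fadeWidth: Double = 0.10) -> Double {
    if acceptedRange.contains(distanceToCameras) {
        return 1.0                              // first transparency: fully opaque
    }
    // How far outside the accepted range the hand is, toward "too close" or "too far".
    let overshoot = distanceToCameras < acceptedRange.lowerBound
        ? acceptedRange.lowerBound - distanceToCameras
        : distanceToCameras - acceptedRange.upperBound
    // Transparency increases (opacity decreases) in proportion to the overshoot.
    return max(0.2, 1.0 - overshoot / fadeWidth)
}
```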
Visually de-emphasizing (e.g., fading) the representation of the portion of the user's body provides visual feedback informing the user that the portion of the user's body is not in place within the field of view of the one or more cameras and prompting the user to move the portion of the user's body if the user wishes the computer system to measure the portion of the user's body. Providing improved visual feedback to the user enhances the operability of the computer system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and improving battery life of the device by enabling the user to more quickly and efficiently use the computer system.
In some embodiments, the portion of the user's body is in a physical environment (e.g., physical environment 531 shown in fig. 5G-5N). In some embodiments, the computer system displays (928) a background prior to detecting the portion of the user's body. For example, the context is a computer-generated context (e.g., wallpaper) that does not include a display of physical objects (e.g., lights, ceiling fans, ceilings, etc.) in a physical environment behind the user's hand. In some embodiments, in response to detecting that the portion of the user's body is in the field of view of the one or more cameras, the computer system displays a representation of the portion of the user's body that is in the field of view of the one or more cameras over the background (without displaying the physical environment in the field of view of the one or more cameras). For example, before the user's hand 532 is within the field of view of one or more cameras of the device 100, the ceiling fan 530 is within the field of view of the one or more cameras, and the representation 534 of the ceiling fan is displayed by the device 100 without displaying a representation of the user's hand. In accordance with the user's hand 532 being positioned within the field of view of the one or more cameras, a representation 548 of the user's hand is displayed over the computer-generated background 546, as shown in FIG. 5J.
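One way to display only the hand representation over a computer-generated background, as described above, is to composite the camera feed against the background using a segmentation mask for the hand. The per-pixel sketch below is illustrative; the pixel type and the availability of a hand mask are assumptions, since this description does not specify the compositing mechanism.

```swift
import Foundation

// Illustrative per-pixel compositing sketch; the pixel type and the availability
// of a hand segmentation mask are assumptions.
struct RGBA { var r: Double; var g: Double; var b: Double; var a: Double }

// handMask is 1.0 where the hand was segmented and 0.0 elsewhere, so only the
// hand is kept from the camera feed; everything else (e.g., the ceiling fan)
// shows the computer-generated background instead.
func composite(cameraPixel: RGBA, backgroundPixel: RGBA, handMask: Double) -> RGBA {
    return RGBA(r: cameraPixel.r * handMask + backgroundPixel.r * (1 - handMask),
                g: cameraPixel.g * handMask + backgroundPixel.g * (1 - handMask),
                b: cameraPixel.b * handMask + backgroundPixel.b * (1 - handMask),
                a: 1.0)
}
```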
Removing the view of the physical environment and displaying only a representation of the portion of the user's body provides information about the portion of the user's body without distraction, thereby making it easier for the user to adjust the position of the portion of the user's body. Providing improved visual feedback to the user enhances the operability of the computer system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and improving battery life of the computer system by enabling the user to more quickly and efficiently use the computer system.
In some embodiments, the portion of the user's body is in a physical environment, and the computer system displays (930) a representation of the field of view of the one or more cameras that includes a representation of the physical environment (e.g., a representation of the portion of the physical environment behind the user's hand is displayed). In some embodiments, the computer system visually de-emphasizes (e.g., masks out) the representation of the physical environment in response to the computer system detecting that a portion of the user's body is in the field of view of the one or more cameras. For example, as shown in FIG. 5H, the computer system visually de-emphasizes (e.g., fades) the representation 538 of the ceiling fan as compared to the representation 534 of the ceiling fan shown in FIG. 5G. In some embodiments, replacing the display of the representation of the physical environment includes displaying the background and a representation of the portion of the user's body. For example, as shown in fig. 5I-5J, in some embodiments, the background shown in fig. 5I (e.g., including representation 542 of the ceiling fan) is replaced with background 546, as shown in fig. 5J, while maintaining the display of representation 548 of the user's hand.
Changing the appearance of the user interface to visually de-emphasize portions of the physical environment provides visual feedback to the user, indicating that the device has detected portions of the user's body. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and improving battery life of the computer system by enabling the user to more quickly and efficiently use the computer system.
In some embodiments, in response to detecting the portion of the user's body, in accordance with a determination that the portion of the user's body meets the first criteria, the computer system displays (932) an indicator of a representation of the portion at least partially overlaying the user's body. For example, as shown in fig. 7B, in some embodiments, an indicator (e.g., virtual bracelet 708) is displayed in accordance with a computer system determining that the representation 706 of the user's hand meets a first criterion (e.g., is in place). In some embodiments, as described in more detail with reference to method 1000, the indicator is updated by the computer system to indicate the extent of the scan of the portion of the user's body that has been completed by the computer system (e.g., to determine depth information). In some implementations, the indicator represents a progress of movement of the portion of the user's body (for the one or more cameras to capture multiple views).
Displaying the visual indicator relative to the representation of the portion of the user's body provides visual feedback to the user that identifies the appropriate location of the portion of the user's body and the appropriate movement (e.g., rotational direction) to follow in order to obtain one or more measurements of the portion of the user's body made by the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and improving battery life of the computer system by enabling the user to more quickly and efficiently use the computer system.
In some embodiments, the computer system displays (934) a second user interface. In some embodiments, the second user interface comprises: an option to select (e.g., purchase) a product having a plurality of size options, the size options being selectable based on measurements made by the computer system of a portion of the user's body in the field of view of the one or more cameras, and an affordance that, when selected, initiates display by the computer system of the first user interface. For example, as shown in fig. 5C, the user interface for purchasing an accessory (e.g., a watch) includes a button 506 ("measure your wristband size") that, when selected, initiates display by the computer system of the instructional measurement interface, shown in fig. 5D, for measuring the user's wrist.
Providing options on the user interface for selecting among different accessories that come in different sizes, prior to measuring the corresponding portion of the user's body, avoids cluttering the user interface, because the system automatically identifies which portion of the user's body needs to be measured based on the type of accessory selected. Providing a measurement option for measuring different parts of the user's body based on the selected accessory, without cluttering the user interface with additional displayed measurement options (e.g., for changing the part of the user's body to be measured), enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain the intended results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays (936), in a third user interface (e.g., prior to displaying the first user interface), instructions identifying (e.g., selecting) the first body portion as either a right-side body portion or a left-side body portion. For example, as shown in fig. 5D-5E, in some embodiments, user interface 510 includes selectable buttons 514 and 516 to allow a user to select either the left or right wrist.
Providing an option for selecting which side of the user's body the part to be measured is on avoids cluttering the user interface used for aligning that part of the user's body for the measurement, by displaying only the guidance for the selected side of the user's body. Allowing different sides (e.g., left or right) of the user's body to be measured, without cluttering the user interface with additional guidance related to parts of the user's body that are not being measured, enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain the intended results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in the first user interface, the computer system displays (938) the first color in a second area of the first user interface (e.g., the second area surrounds a representation of the first body portion displayed by the computer system on the first user interface). In some embodiments, the computer system displays the color in the first user interface regardless of whether a portion of the user's body is in the field of view of the one or more cameras. In some embodiments, the computer system replaces the display of the first color by a display of a second color that is different from the first color (e.g., based on the computer system determining that the time criterion has been met). In some embodiments, the computer system periodically changes color (e.g., every 3 seconds, every 5 seconds, etc.). In some embodiments, the color represents one or more colors of a product (e.g., a watchband color) available for selection/purchase by a user. In some embodiments, replacing the display of the first color with the display of the second color includes cross fade (e.g., gradual fade) between the first color and the second color. For example, as explained above with reference to fig. 5D-5E, in some embodiments, the background color changes (e.g., as indicated by a change in the background pattern of user interface 510 to user interface 520).
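A minimal Swift sketch of the periodic background color change described above is shown below; the 3-second interval, the Timer-based approach, and the representation of band colors as named strings are assumptions made for illustration.

```swift
import Foundation

// Illustrative sketch; the cycle interval, the Timer-based approach, and the
// representation of band colors as named strings are assumptions.
final class BackgroundColorCycler {
    private let colors: [String]            // e.g., names of available band colors
    private var index = 0
    private var timer: Timer?
    var onColorChange: ((String) -> Void)?  // caller cross-fades the background to this color

    init(colors: [String], interval: TimeInterval = 3.0) {
        self.colors = colors
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            guard let self = self, !self.colors.isEmpty else { return }
            self.index = (self.index + 1) % self.colors.count
            self.onColorChange?(self.colors[self.index])
        }
    }

    deinit { timer?.invalidate() }
}
```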
Automatically changing the color of the background to display a plurality of colors corresponding to the colors of the selected accessory provides visual feedback showing the user possible colors that may be selected for the accessory while the user is being measured, without requiring additional user input for the user to view the different color options. Providing improved visual feedback to the user when a set of conditions has been met, and reducing the number and/or degree of inputs required to perform an operation by performing the operation (e.g., automatically), enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and improving battery life of the computer system by enabling the user to use the computer system more quickly and efficiently.
It should be understood that the particular order of the operations that have been described in fig. 9A-9C is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., methods 1000, 1100, and 1200) are likewise applicable in a similar manner to method 900 described above with respect to fig. 9A-9C. For example, the user interface object described above with reference to method 900 optionally has one or more of the features of the user interface object described herein with reference to other methods described herein (e.g., methods 1000, 1100, and 1200). For the sake of brevity, these details are not repeated here.
Fig. 10A-10D are flowcharts illustrating a method 1000 of providing a virtual progress indicator for measuring a portion of a user's body, according to some embodiments. The method 1000 is performed at a computer system (e.g., the portable multifunction device 100, the device 300, or the device 800) that includes a display generating component (e.g., a display, optionally a touch-sensitive display, a projector, a head-up display, etc.), one or more cameras (and optionally one or more depth sensors), and one or more input devices, optionally one or more gesture sensors, optionally one or more sensors for detecting intensity of contacts with a touch-sensitive surface, and optionally one or more tactile output generators (and/or is in communication therewith). Some operations in method 1000 are optionally combined and/or the order of some operations is optionally changed.
As described below, the method 1000 provides an intuitive way of providing visual feedback to a user indicating the progress of the device automatically measuring a portion of the user's body as the portion of the user's body moves, without requiring the user to provide input while moving the portion of the user's body to obtain a measurement. Providing improved visual feedback to the user and performing operations (e.g., automatically) without further user input when a set of conditions has been met enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
The computer system displays (1002) in a user interface (for measuring a portion of a user's body) a first representation of the body part in a field of view of one or more cameras. For example, as shown in fig. 7B, the user's hand 532 is in the field of view of one or more cameras of the device 100, and the device 100 displays a representation 706 of the user's hand.
The computer system detects (1004) movement of the body part using one or more cameras, wherein the displayed first representation of the body part is updated according to the movement of the body part. For example, as described with reference to fig. 7B-7E, as the user's hand 532 moves relative to the device 100, the representation 718 of the user's hand is updated according to the movement of the user's hand 532 in the physical environment 531. In some embodiments, the computer system and/or one or more cameras of the computer system are stationary (e.g., the device 100 remains stationary). In some embodiments, the movement includes rotation of the body part. For example, as shown in fig. 7J-7M, as the user's hand rotates, a representation of the user's hand rotates on the display of the device 100.
When displaying the first representation of the body part, the computer system displays (1006) an indicator (e.g., a progress indicator, such as the virtual bracelet described with reference to fig. 7A-7M) at a fixed position relative to the first representation of the body part. For example, as described with reference to fig. 7B-7E and 7J-7M, the virtual bracelet is displayed at a fixed position relative to the representation of the user's hand.
The computer system displays (1008) the indicator at a first location in a user interface overlaying at least a portion of the representation of the body part. In some embodiments, the indicator is at least partially translucent (e.g., as shown in fig. 7E-7L, the representation of the user's hand is visible through the virtual bracelet). The indicators are updated according to movement of the body part (e.g., the virtual bracelet is displayed in the same relative position when the user's hand moves up, down, left, right, etc., as described with reference to fig. 7B-7E). The indicator comprises an indication of a suggested direction of movement of the body part. For example, the indicator is updated to illuminate (e.g., fill) additional portions of the indicator to show the progress of the movement (e.g., the indicator of the virtual bracelet fills as the user rotates the user's hand, as described with reference to fig. 7J-7M).
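The progress behavior described above (the virtual bracelet filling as the wrist rotates) can be modeled as a fraction that accumulates with rotation in the suggested direction. The following sketch is illustrative; the rotation span required to complete the scan is an assumed value.

```swift
import Foundation

// Illustrative sketch; the rotation span required to complete the scan is an
// assumed value.
struct ScanProgress {
    private(set) var fraction: Double = 0   // 0 = indicator empty, 1 = fully filled
    private var lastAngle: Double?
    let requiredRotation: Double = .pi      // assumed: half a turn completes the scan

    // Called as the wrist angle is updated from the camera/depth data.
    mutating func update(wristAngle: Double) {
        defer { lastAngle = wristAngle }
        guard let previous = lastAngle else { return }
        let delta = wristAngle - previous
        // Only rotation in the suggested direction advances (fills) the indicator.
        if delta > 0 {
            fraction = min(1.0, fraction + delta / requiredRotation)
        }
    }
}
```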
In some implementations, the indication of the suggested direction of movement includes (1010) an animation of the indicator that indicates the direction of movement. In some embodiments, the indicator is animated according to movement of the body part. In some implementations, the indicator is animated without detecting movement of the body part (e.g., the indicator is animated to light up or otherwise indicate a suggested direction of movement before movement of the body part is detected). For example, FIG. 7A shows a coaching user interface in which a virtual bracelet indicator is animated to fill as the animated representation 702 of the hand rotates (e.g., while the user's hand does not move; or, in some embodiments, the animated representation 702 is displayed regardless of whether the user's hand is stationary or moving).
Displaying an animation that shows the user the direction in which the body part should be rotated to obtain one or more measurements of the body part provides visual feedback to the user indicating whether the user has rotated the body part in the correct or incorrect direction. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the body part (1012) is a wrist and/or hand. For example, in fig. 5G-5N, the user's hand 532 is shown in the physical environment 531 such that the user's hand 532 is within the field of view of one or more cameras of the device 100.
Automatically detecting a user's hand positioned over a camera of the device and measuring the user's hand to determine the size of an accessory worn on or near the user's hand makes it easier for the user to obtain sizing information without requiring the user to manually enter measurements of the user's body and without requiring the user to manually make measurements of the user's body (e.g., using a physical tape measure). Reducing the number and/or extent of inputs required to perform an operation when a set of conditions has been met (by performing the operation (e.g., automatically)) enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the body part is in a physical environment and the computer system displays (1014) the background in the first user interface. For example, the background is a computer-generated background. In some implementations, the context is an augmented reality environment that displays virtual objects simultaneously with physical objects in the physical environment. In some embodiments, the background is not a representation of the physical environment within the field of view of the one or more cameras. For example, as described with reference to fig. 5J, the device 100 does not display objects (e.g., ceiling fans 530) other than the user's body or body parts that are within the field of view of one or more cameras, but rather displays the background 546 as a colored background or other computer-generated background. In some embodiments, the computer system uses one or more cameras to detect a portion of the physical environment and a body part within the field of view of the one or more cameras. In some embodiments, the computer system displays a representation of the body part over the background, and not a representation of a portion of the physical environment within the field of view of the one or more cameras. For example, fig. 5J shows a representation 548 of a user's hand without displaying a representation of ceiling fan 530 in physical environment 531 within the field of view of one or more cameras of device 100.
Removing the view of the physical environment and displaying only a representation of the portion of the user's body provides information about the portion of the user's body without distraction, thereby making it easier for the user to adjust the position of the portion of the user's body. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays (1016) a background having a first color and replaces the display of the background having the first color with the display of the background having the second color. In some implementations, a background is displayed in the second user interface before the first user interface is displayed (e.g., color cycling before the body part is detected using one or more cameras). In some embodiments, the computer system changes the background color in accordance with a determination that the time criterion has been met. For example, the computer system periodically changes color (e.g., every 3 seconds, every 5 seconds, etc.). For example, as described with reference to fig. 5D-5F, the color of the background of user interface 510 (fig. 5D) is changed by the computer system to another color in user interface 520 (fig. 5E), which includes the same user interface elements as user interface 510, and the background of user interface 520 is updated as it transitions to displaying another user interface 524 (fig. 5F).
Automatically changing the color of the background to display multiple colors provides visual feedback to the user and optionally may show colors that the user may select for the accessory. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first color and the second color correspond (1018) to color options of a physical object to be worn on the body part. In some embodiments, the physical object comprises a wrist band or a wrist strap. In some embodiments, the body part is measured to determine the size of the physical object. In some embodiments, the first color and the second color are colors of a commercially available wristband.
Automatically changing the color of the background to display a plurality of colors corresponding to the colors of the selected accessory provides visual feedback showing the user possible colors that may be selected for the accessory while the user is being measured, without requiring additional user input for the user to view the different color options. Providing improved visual feedback to the user when a set of conditions has been met, and reducing the number and/or degree of inputs required to perform an operation by performing the operation (e.g., automatically), enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, a computer system detects (1020) movement of a body part in a first direction and, in response to detecting movement of the body part in the first direction, the computer system displays a representation of the body part at a second location in a user interface in accordance with the movement of the body part and displays an indicator at a fixed location relative to the first representation of the body part displayed at the second location. In some embodiments, detecting movement of the body part in the first direction includes detecting movement of the body part in a lateral direction (e.g., left, right, up, down) parallel to an axis of a position (e.g., outer surface) of the one or more cameras (e.g., not moving closer to or farther from the cameras; maintaining a distance from the one or more cameras as the body part moves left, right, up, or down in a plane parallel to a field of view of the one or more cameras). In some embodiments, the indicator is continuously displayed (e.g., appears to move) over the body part as the body part moves in the first direction. For example, as described with reference to fig. 7B-7C, as the user hand 532 moves to the left along an axis parallel to the position of one or more cameras, the representation 710 of the user hand displayed by the device 100 is updated to move to the left, and the virtual bracelet 712 moves with the representation 710 of the user hand (e.g., as compared to fig. 7B).
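A simple way to keep the indicator at a fixed position relative to the hand representation during lateral movement, as described above, is to recompute the indicator's screen position each frame from a tracked wrist point plus a fixed offset in the hand's coordinate space. The sketch below is illustrative; the point type and offset value are assumptions.

```swift
import Foundation

// Illustrative sketch; the point type and the offset value are assumptions.
struct ScreenPoint { var x: Double; var y: Double }

// Recomputed each frame: the indicator stays at a fixed offset from the tracked
// wrist point, so it follows the hand as the hand moves left, right, up, or down.
func indicatorPosition(trackedWristPoint: ScreenPoint,
                       fixedOffset: ScreenPoint = ScreenPoint(x: 0, y: 40)) -> ScreenPoint {
    return ScreenPoint(x: trackedWristPoint.x + fixedOffset.x,
                       y: trackedWristPoint.y + fixedOffset.y)
}
```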
Automatically moving the indicator showing the progress of the measurement relative to the representation of the user's body provides continuous visual feedback to the user such that the user is aware of the progress of the measurement even when the user has moved the user's body part relative to the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays (1022) a first representation of the body part and the indicator in a corresponding first size in the user interface. In some embodiments, the computer system detects movement of the body part that changes the distance between the one or more cameras and the body part. In some embodiments, in response to detecting a change in distance between the one or more cameras and the body part, the computer system displays a first representation of the body part at a respective second size (e.g., different from a first size of the representation of the body part) in accordance with the changed distance and displays the indicator at the respective second size (e.g., different from the first size of the indicator) in accordance with the changed distance. In some embodiments, the indicator maintains its position relative to the representation of the body part, wherein the size of the representation of the body part is updated according to the changed distance. For example, the indicators are scaled by the computer system such that as the body part is closer to the one or more cameras (scaling), the size of the representation of the body part increases, and accordingly, the size of the indicators increases proportionally (e.g., at the same size ratio), such that the indicators continue to cover the same portion of the representation of the body part as before the detected movement of the body part. In another example, as the body part moves farther away from the one or more cameras, the indicator is scaled by the computer system to maintain its position and size relative to the body part (e.g., as the body part moves farther away, the indicator becomes smaller), for example, as shown in fig. 7D and 7E, as the user hand 532 moves away from and closer to the device 100, the representation of the user hand and the virtual bracelet are updated on the display of the device 100 such that the virtual bracelet is displayed in a size proportional to the size of the representation of the user hand.
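The proportional resizing described above can be approximated by scaling the indicator with the apparent size of the hand, which varies roughly inversely with distance to the cameras. The sketch below is illustrative; the reference distance is an assumed value.

```swift
import Foundation

// Illustrative sketch: scale the indicator by the same ratio as the apparent
// size of the hand, which varies roughly inversely with distance to the cameras.
// The reference distance is an assumed value.
func indicatorScale(currentDistance: Double, referenceDistance: Double = 0.30) -> Double {
    guard currentDistance > 0 else { return 1.0 }
    return referenceDistance / currentDistance
}
```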
Automatically changing the size of the indicator showing the progress of the measurement relative to the representation of the portion of the user's body provides continuous visual feedback to the user so that the user is aware of the progress of the measurement even when the user has moved the portion of the user's body closer to or farther from the computer system. The size of the indicator also provides visual feedback to the user to indicate whether a portion of the user's body is too close or too far from the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system detecting movement of the body part includes (1024) detecting rotation of the body part. In some embodiments, upon detecting rotation of the body part, the computer system scans one or more images using one or more cameras to determine a measurement of the body part and updates the indicator to indicate progress of scanning the one or more images. For example, fig. 7J-7M illustrate rotation of the representation of the user's hand as the user rotates the user's hand in the physical environment.
Automatically animating the indicator showing the progress of the measurement as the user rotates the body part provides continuous visual feedback to the user such that the user is aware of the progress of the measurement as the user continues to move and rotate the body part of the user relative to the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the indication of the suggested direction of movement of the body part displayed by the computer system includes (1026) an indication of rotating the body part. For example, the indication may indicate rotation about an axis defined by the indication. In some embodiments, the indication may indicate a rotational speed and/or indicate a rotational direction (e.g., clockwise or counterclockwise relative to the axis). In some embodiments, the indication comprises a text prompt to rotate the body part. In some embodiments, the indication includes an animation (e.g., an animation of an arrow) prompting rotation of the body part.
Animating the indicator for showing the correct direction of rotation of the body part to obtain one or more measurements of the body part provides visual feedback to the user indicating whether the user should rotate his body part in a different direction (e.g., clockwise or counter-clockwise) relative to the computer system. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, in displaying the first representation of the body part, the computer system captures (1028) one or more images of the body part using one or more cameras. For example, as the user rotates the user's hand as described in fig. 7J-7M, the device captures (e.g., scans) one or more images of the user's hand (e.g., at different locations as the user's hand rotates). In some embodiments, one or more images are used to determine a measurement of a body part. In some embodiments, the computer system displays a measured dimension corresponding to the body part in the second user interface. In some embodiments, the dimension is a dimension of a body part. In some embodiments, the dimensions are those of an accessory for a body part (e.g., a watchband). For example, as shown in fig. 7N, the device displays the size (e.g., size 5) of the wristband as determined from measurements of the user's wrist.
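Mapping the measured wrist to a displayed band size (e.g., "size 5" in fig. 7N) can be sketched as a lookup of the measured circumference in a sizing chart. The chart below is invented for illustration and does not reflect any actual product sizing.

```swift
import Foundation

// Hypothetical sizing chart: the boundaries below are invented for illustration
// and do not reflect any actual band sizes.
func bandSize(forWristCircumference millimeters: Double) -> Int {
    let sizeUpperBounds: [Double] = [130, 140, 150, 160, 170, 180, 190, 200]
    for (index, upperBound) in sizeUpperBounds.enumerated() where millimeters <= upperBound {
        return index + 1    // e.g., a 148 mm wrist maps to "size 3" in this invented chart
    }
    return sizeUpperBounds.count + 1    // larger wrists map to the largest size
}
```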
Automatically capturing an image of a body part of a user and using the image to determine a measurement and corresponding size of the body part of the user may make it easier for the user to obtain sizing information without the user having to manually take a photograph of the user's body and/or manually enter the measurement into a computer system. Reducing the number and/or extent of inputs required to perform an operation when a set of conditions has been met (by performing the operation (e.g., automatically)) enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the fixed position relative to the first representation of the body part is (1030) a first fixed position relative to the first representation of the body part. In some embodiments, the computer system receives a first user input in a first direction and, in response to the first user input, updates the first fixed position of the indicator (e.g., virtual bracelet 724, fig. 7F) relative to the body part to a second fixed position relative to the body part that is different from the first fixed position. In some implementations, the first user input is a drag user input for moving the indicator to a second fixed location corresponding to the location at which the drag user input ends. For example, fig. 7F-7G illustrate the user changing the position of the indicator (virtual bracelet 724) relative to the representation 722 of the user's hand (e.g., prior to scanning).
By using the position of the indicator to display which part of the user's body part is to be measured, changing the position of the indicator relative to a representation of the user's body part to indicate that a different part of the user's body part is to be measured improves visual feedback to the user. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system replaces (1032) the display of the indicator at the first location in the user interface with a user interface element at the first location in the user interface. In some embodiments, the user interface element is displayed at a fixed location relative to the first representation of the body part, and the user interface element indicates a first size of a portion of the body part corresponding to a portion of the representation of the body part at the fixed location. In some embodiments, the user interface element includes an AR object (which appears to be worn on a representation of the body part). For example, the virtual bracelets shown in fig. 7B-7M are AR objects that appear to be worn on the user's wrist. In some embodiments, the user interface element includes a virtual tape measure (e.g., as shown in fig. 7Q-7S). In some embodiments, the user interface element includes a virtual accessory, such as a virtual watch (e.g., as shown in fig. 7N-7P). In some embodiments, the user interface element includes a representation of a product offered for sale (e.g., the product selected in fig. 5C). In some embodiments, the user interface element includes a representation of the watch with options as configured (selected) by the user (e.g., the user selects a color, shape, style, etc. of the watch). In some embodiments, the user interface element has a different size than the indicator. For example, the virtual bracelet has a different size than the watch 754 in fig. 7P and a different size than the tape measure shown in fig. 7Q. In some embodiments, the indicator is replaced in response to completing the measurement. For example, an animated transition is used to replace the virtual bracelet shown in fig. 7M with the watch 754 in fig. 7N, or to replace the tape measure shown in fig. 7Q with the watch 754 in fig. 7N.
By indicating that the measurement of the part of the user's body has been successful and thus the progress indicator is no longer displayed, automatically transitioning the indicator to a user interface object that is also displayed relative to the representation of the part of the user's body improves the visual feedback to the user. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the fixed location relative to the first representation of the body part is a first fixed location, and the computer system receives (1034) a second user input (e.g., a drag input) for moving the user interface element. In some implementations, in response to receiving the second input, the computer system moves the user interface element, in accordance with the drag input, from a first position in the user interface that overlays at least a portion of the representation of the body part to a third fixed position (e.g., different from the first fixed position) relative to the first representation of the body part. For example, the fixed location is at a predefined location on the user's wrist (e.g., moving from a first distance from the carpal bones to a second distance from the carpal bones). In some embodiments, the user interface element is maintained in its position relative to the user's wrist as the user's wrist moves in the field of view of the one or more cameras. In some embodiments, the user updates the fixed position of the user interface element after completing the scan (e.g., measurement) of the body part. For example, as shown in fig. 7O-7P, the user drags the watch 754 to another location on the representation of the user's wrist.
The ability to change the position of the user interface element relative to the representation of the user's body part provides improved visual feedback to the user by allowing the user to move the user interface element over the representation of the user's body part so that the user can visualize which part of the user's body part is measured for sizing. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that the user interface element is at a third fixed position relative to the first representation of the body part, the computer system updates (1036) the user interface element to indicate a second size of a portion of the body part corresponding to the third fixed position. For example, when the watch 754 is in the first fixed position, the size of the user's wrist in FIG. 7N is "size 5", and after the user interface element is moved to a different position on the representation of the user's hand, the size of the user's wrist corresponds to "size 6", as shown in FIG. 7P.
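The behavior described above, where dragging the element to a different position yields a different size, can be sketched as looking up the circumference measured at the new cross-section of the wrist and mapping it to a size (for example with the hypothetical bandSize(forWristCircumference:) function from the earlier sketch). The data structure below is an assumption made for illustration.

```swift
import Foundation

// Illustrative only: circumference samples along the forearm captured during the
// scan; positions and values are hypothetical.
struct WristScan {
    // Normalized position along the forearm (0 at the carpal bones) paired with
    // the circumference measured at that cross-section, in millimeters.
    var circumferenceByPosition: [(position: Double, millimeters: Double)]

    // Returns the circumference at the sampled cross-section nearest to the
    // position the user dragged the element to.
    func circumference(at position: Double) -> Double {
        let nearest = circumferenceByPosition.min {
            abs($0.position - position) < abs($1.position - position)
        }
        return nearest?.millimeters ?? 0
    }
}

// The returned circumference would then be mapped to a displayed size, e.g. with
// the hypothetical bandSize(forWristCircumference:) sketch shown earlier.
```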
By using the user interface element to display which portion of the user's body part corresponds to the displayed size, changing the position of the user interface element relative to the representation of the user's body part and automatically updating the size associated with the respective position improves visual feedback to the user. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating system/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, a computer system captures (1038) an image (e.g., a screen shot, optionally including depth information from one or more depth sensors) that includes a first representation of a body part and a user interface element at a first fixed position relative to the first representation of the body part. In some embodiments, the user interface element indicates a size of a portion of the body part corresponding to the first fixed location of the user interface element. For example, in some embodiments, the representation of the user's hand shown in fig. 7N is not a representation of the current field of view of the one or more cameras (e.g., the user has moved the user's hand out of view of the one or more cameras), and the representation displayed in fig. 7N is a captured image of the user's hand (e.g., before the user has removed the user's hand from view of the one or more cameras).
Automatically capturing an image of the user's body part (optionally including depth information) and using the image to determine the measurement and corresponding dimensions of the user's body part may make it easier for the user to obtain sizing information without requiring the user to maintain the user's body part in place relative to the computer system, manually take a photograph of the user's body, and/or manually enter the measurement into the computer system. Reducing the number and/or extent of inputs required to perform an operation when a set of conditions has been met (by performing the operation (e.g., automatically)) enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when displaying the image, the computer system receives (1040) a third user input (e.g., on the image displayed on the display device) for moving the user interface element to a different fixed position relative to the first representation of the body part in the image (e.g., as described with reference to fig. 7N-7S).
The ability to change the position of the user interface element relative to the representation of the user's body part provides improved visual feedback to the user by allowing the user to move the user interface element over the representation of the user's body part so that the user can visualize which part of the user's body is being measured for sizing. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the user interface is a first user interface and the computer system displays (1042) a second user interface comprising: an option to select (e.g., purchase) a product having a plurality of size options that are selectable based on measurements of body parts in the field of view of the one or more cameras, and an affordance that, when selected, initiates display of the first user interface. For example, fig. 5C illustrates a user interface for selecting a watch (e.g., and/or a wristband having multiple size options). In some embodiments, to select the size option, the user selects button 506 "measure your watchband size".
Providing, prior to measuring the corresponding portion of the user's body, additional options for selecting different accessories that are available in different sizes avoids cluttering the user interface, because the system automatically identifies which portion of the user's body needs to be measured based on the type of accessory selected. Providing a measurement option for measuring different parts of the user's body based on the selected accessory, without cluttering the user interface with additional displayed measurement options (e.g., for changing the part of the user's body to be measured), enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain the intended results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the user interface is (1044) a user interface within a corresponding application executed by the computer system. In some embodiments, the respective application is a measurement application for measuring one or more objects (such as body parts of a user), as described with reference to fig. 5A. In some implementations, the respective application is an online store (e.g., "electronic store") application, as described with reference to fig. 5B. In some embodiments, the respective application is a watch application configured to communicate (e.g., including instructions for communicating) between the first electronic device and a watch (e.g., or other wearable device).
Providing measurement functionality within an additional application already present on the computer system avoids cluttering the user interface by allowing the measurement feature to be launched from within an existing application. Providing an option for measuring a user's body part using existing applications without cluttering the user interface with additional applications for measuring the user's body enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system gradually changes (1046) the appearance of the indicator as the body part moves to indicate an amount of progress toward completing the scanning or measuring of the body part as the body part rotates, or an amount of progress toward completing the rotation of the body part. For example, as shown in fig. 7J-7M, as the user's hand rotates, the virtual bracelet is updated to change gradually (e.g., fill the indicators of the virtual bracelet). In some embodiments, the indicator includes an opening (e.g., oval) that is gradually filled in accordance with movement of the body part.
Gradually animating the portions of the indicator showing the progress of rotation of the user's body part provides visual feedback to the user to indicate which portions of the user's body part have been successfully scanned and/or measured. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, responsive to the computer system detecting movement of the body part, in accordance with the computer system determining that the body part is moving at a speed below a threshold speed, the computer system gradually changes (1048) the appearance of the indicator as the body part moves to indicate progress of the movement of the body part toward the target pose. In some embodiments, in accordance with the computer system determining that the body part is moving at a speed above the threshold speed, the computer system forgoes at least a portion of the change in appearance of the indicator as the body part moves, to indicate that the body part is moving too fast toward the target pose. For example, as described with reference to fig. 7H-7I, in accordance with the computer system determining that the user is rotating the user's wrist too fast (e.g., above a threshold speed), the indicators of virtual bracelet 730 are not populated by the computer system. In some implementations, the criteria include a requirement that the rotation rate of the portion of the user's body be less than a threshold rotation rate (e.g., the first criteria is not met when the user moves the arm too fast).
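The speed-gated progress fill described above can be sketched as follows. This is a hypothetical illustration: the rotation target, the threshold speed, and the type names are assumptions rather than values from the embodiments.

import Foundation

// Tracks accumulated wrist rotation and only credits rotation performed below a
// threshold angular speed; the credited fraction drives how much of the indicator fills.
struct RotationProgressTracker {
    let targetRotation: Double = .pi   // rotation needed to finish the scan (radians)
    let maxSpeed: Double = 1.5         // radians per second; faster motion is not credited
    private(set) var creditedRotation: Double = 0

    init() {}

    /// Fraction of the indicator that should be filled, 0...1.
    var progress: Double { min(creditedRotation / targetRotation, 1.0) }

    mutating func update(deltaAngle: Double, deltaTime: TimeInterval) {
        guard deltaTime > 0 else { return }
        let speed = abs(deltaAngle) / deltaTime
        if speed <= maxSpeed {
            // Slow enough to scan: gradually fill the indicator.
            creditedRotation += abs(deltaAngle)
        }
        // Otherwise forgo the appearance change: the wrist rotated too fast to credit.
    }
}

var tracker = RotationProgressTracker()
tracker.update(deltaAngle: 0.2, deltaTime: 0.2)   // credited (1.0 rad/s)
tracker.update(deltaAngle: 0.8, deltaTime: 0.2)   // too fast (4.0 rad/s), not credited
print(tracker.progress)                            // ≈ 0.064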
Visually animating the portion of the indicator corresponding to the progress of rotation of the user's body part provides visual feedback to the user by using the non-animated portion of the indicator to represent the portion of the user's body part that has not been scanned and/or measured because the user's body part was rotating too fast. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order in which the operations in fig. 10A-10D are described is merely one example and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., methods 900, 1100, and 1200) are likewise applicable in a similar manner to method 1000 described above with respect to fig. 10A-10D. For example, the user interface object described above with reference to method 1000 optionally has one or more of the features of the user interface object described herein with reference to other methods described herein (e.g., methods 900, 1100, and 1200). For the sake of brevity, these details are not repeated here.
Fig. 11A-11C are flowcharts illustrating a method 1100 of generating machine readable code to store information about measurements, according to some embodiments. The method 1100 is performed at a computer system (e.g., the portable multifunction device 100, the device 300, or the device 800) that includes a display generating component (e.g., a display, optionally a touch-sensitive display, a projector, a head-up display, etc.), one or more cameras (and optionally one or more depth sensors), and one or more input devices, optionally one or more gesture sensors, optionally one or more sensors for detecting contact strength with a touch-sensitive surface, and optionally one or more tactile output generators (and/or communication therewith). Some operations in method 1100 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 1100 provides an intuitive way for automatically detecting a portion of a user's body, determining a measurement of the portion of the user's body, and embedding information about the measurement in machine-readable code. Embedding the information in scannable (e.g., computer readable) code allows the computer system to store and share the sizing and/or attachment information of the user, thereby eliminating the need for the user to remember their own sizing information and/or to ask the user to select a method for sharing the sizing information. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
The computer system uses the one or more cameras to detect (1102) a portion of the user's body in the field of view of the one or more cameras.
The computer system scans (1104) a portion of the user's body in the field of view of the one or more cameras to determine a measurement of the portion of the user's body in the field of view of the one or more cameras. For example, the computer system scans and measures a body part of the user (e.g., the user's wrist) using a method as described with reference to fig. 6C-6N and/or fig. 7J-7M.
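The embodiments do not specify how the measurement is derived from the scan. One plausible approach, shown here only as an assumption, is to treat the scanned wrist cross-section as an ellipse whose semi-axes are recovered from the two scan poses and to estimate its circumference with Ramanujan's approximation (Swift sketch with illustrative values).

import Foundation

// Assumption for illustration only: approximate the wrist cross-section as an ellipse
// with semi-axes a and b (millimeters) and estimate its circumference.
func ellipseCircumference(a: Double, b: Double) -> Double {
    let h = pow(a - b, 2) / pow(a + b, 2)
    return Double.pi * (a + b) * (1 + (3 * h) / (10 + (4 - 3 * h).squareRoot()))
}

// e.g., a wrist measuring 55 mm wide and 40 mm thick
let circumferenceMM = ellipseCircumference(a: 55 / 2, b: 40 / 2)
print(circumferenceMM)   // ≈ 150 mm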
After scanning the portion of the user's body, the computer system generates (1106) machine-readable code comprising information identifying one or more sizing parameters of the wearable object or describing measurements of the portion of the user's body based on the measurements of the portion of the user's body. In some embodiments, the machine-readable code is configured to be scanned for purchasing a wearable object (e.g., an accessory).
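As a hedged illustration of operation 1106 on Apple platforms, the sizing parameters could be serialized (here as JSON, which the embodiments do not mandate) and rendered with Core Image's built-in QR generator; the payload fields and names are assumptions introduced for this sketch.

import Foundation
import CoreImage

// Hypothetical payload carrying the sizing parameters derived from the scan.
struct SizingPayload: Codable {
    let wristCircumferenceMM: Double
    let bandSize: Int
    let bandStyle: String        // e.g., "Style A"
}

func makeQRCode(for payload: SizingPayload) -> CIImage? {
    guard let data = try? JSONEncoder().encode(payload),
          let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    // The built-in generator takes the raw bytes to encode as "inputMessage".
    filter.setValue(data, forKey: "inputMessage")
    return filter.outputImage
}

let qr = makeQRCode(for: SizingPayload(wristCircumferenceMM: 150, bandSize: 6, bandStyle: "Style A"))
print(qr != nil)   // true when the generator is available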
In some embodiments, after scanning a portion of the user's body, the computer system displays (1108) a first user interface comprising user interface objects that when selected generate machine-readable code. In some embodiments, a computer system detects user input selecting a user interface object and generates machine readable code in response to detecting the user input. For example, as shown in FIG. 7T, the computer system receives input 786 selecting the "Yes" button 782 to save the user size as a virtual card. In response to the input, the computer system generates a QR code (e.g., QR code 804, fig. 8A, or QR code 1004, fig. 8D).
Displaying multiple user interface elements, including the option of directly saving sizing information obtained by the device, provides the user with quick and easy access to the available functions of the measurement user interface without requiring the user to navigate through a complex menu hierarchy. Providing additional control options and reducing the number of inputs required to perform the operation enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain the desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the machine-readable code includes (1110) a QR code. For example, QR codes shown in fig. 8A and 8D. In some embodiments, the QR code is configured to be scanned at a store to purchase a wearable object (e.g., and/or an accessory of the wearable object). For example, a user purchases a wearable object having one or more dimensional parameters identified by a machine-readable code.
Embedding the information in a QR code allows the computer system to easily share sizing and/or accessory information stored in the QR code, thereby eliminating the need for a user to manually enter the embedded information to share it with another device. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, a second computer system (e.g., a second electronic device) scans (1112) the machine-readable code and, in response to scanning the machine-readable code, initiates a process for displaying information about the wearable object on the second computer system (e.g., the second electronic device) or a third computer system (e.g., the third electronic device) communicatively coupled to the second computer system. In some embodiments, the second computer system and the third computer system are the same computer system (e.g., an electronic device comprising one or more cameras for scanning machine readable code and a display device for displaying information about the wearable object or accessory). For example, as shown in fig. 8B, in some embodiments, the second electronic device 800 scans the machine readable code displayed on the device 100. In some embodiments, the second electronic device is a scanner and the third electronic device is an electronic device having a display device communicatively coupled to the scanner.
Obtaining information from another electronic device by scanning a machine-readable code displayed on the other electronic device eliminates the need for a user to manually enter information stored in the machine-readable code. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that the machine-readable code has been scanned, the second computer system or the third computer system displays (1114) the first application on a first portion (less than all) of the user interface displayed using the display generating component of the second computer system or the third computer system. In some embodiments, the first application includes information identifying one or more sizing parameters of the wearable object or describing measurements of a portion of the user's body within the first application. For example, as shown in fig. 8C, device 800 displays a machine-readable code (e.g., QR code 804) within first application 806 on a portion of a display.
Providing an application (e.g., a mini-application) that includes information stored in machine-readable code avoids cluttering the user interface by displaying only the stored information without requiring the user to download the application to view the information. Providing information stored in machine-readable code on a portion of a display enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping a user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power use and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to scanning the machine-readable code, the second computer system or the third computer system uses a display generating component of the second computer system or the third computer system to display (1116) a card comprising information about the wearable object that includes information identifying one or more sizing parameters of the wearable object stored in the machine-readable code. For example, in some embodiments, a card (such as virtual card 1006 shown in fig. 8D) is displayed in response to the second electronic device scanning the machine readable code. In some embodiments, the displayed card includes additional information about the wearable object obtained from the machine-readable code, the additional information being information other than the one or more sizing parameters. Such additional information may include, for example, color, style, material, or other characteristics of the wearable object. For example, in FIG. 8D, the displayed card includes additional information (e.g., style information; model or product of identification information) about the wearable object (e.g., "watch band style A").
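A minimal sketch of the scanning side, assuming the same hypothetical JSON payload as above: the decoded fields populate a simple card model such as the card shown in fig. 8D. The field and type names are illustrative, not taken from the embodiments.

import Foundation

// Illustrative model for the displayed card on the device that scanned the code.
struct VirtualCardModel {
    let title: String
    let sizeText: String
}

func card(fromScannedJSON json: Data) -> VirtualCardModel? {
    struct Payload: Codable {
        let wristCircumferenceMM: Double
        let bandSize: Int
        let bandStyle: String
    }
    guard let payload = try? JSONDecoder().decode(Payload.self, from: json) else { return nil }
    return VirtualCardModel(
        title: "Watch band \(payload.bandStyle)",   // e.g., "Watch band Style A"
        sizeText: "Size \(payload.bandSize)"        // e.g., "Size 6"
    )
}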
Providing a displayed card (sometimes referred to herein as a virtual card) that includes information stored in a machine-readable code improves visual feedback to a user by displaying the stored information and/or machine-readable code so that the user can see the information displayed on the user device. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays (1118) an option to add, to the virtual wallet, a virtual card that includes information about the wearable object, and stores the virtual card in the virtual wallet in response to detecting a user input selecting the option. For example, as described with reference to fig. 7T and 8D, in some embodiments (e.g., when a user saves a virtual card to the same device used to scan and measure the user's wrist), device 100 stores the virtual card. In some embodiments, the virtual wallet is stored on or for a user of the (first) computer system. In some embodiments, the user may access the virtual card by opening a virtual wallet. In some embodiments, the display of the virtual card includes machine readable code (e.g., as shown in fig. 8E-8F). In some embodiments, the virtual card displays information stored in the machine-readable code (e.g., in addition to or in lieu of displaying the machine-readable code). For example, the virtual card includes text indicating the size of a body part of the user or sizing parameters of the wearable object or accessory, and/or text indicating information about the wearable object or accessory (e.g., "watchband style A").
Displaying the user interface element for saving the virtual card to the user's virtual wallet and simultaneously displaying the virtual card including sizing information provides the user with quick and easy access to the available functions of the virtual wallet without requiring the user to navigate through a complex menu hierarchy. Providing additional control options and reducing the number of inputs required to perform the operation enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain the desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that a respective computer system, on which the virtual card is stored or at which the virtual card is accessed, is within a predefined proximity to a predefined location, the respective computer system uses a display generation component of the respective computer system to display (1120) a visual cue for displaying the virtual card stored in the virtual wallet. For example, in accordance with a determination that the respective computer system (and/or user of the respective computer system) is within a threshold distance (e.g., 10 feet, 20 feet, etc.) of a predefined location, such as a store (e.g., selling a wearable object and/or an accessory for a wearable object, such as a watchband), a visual cue is automatically (without user input) generated and displayed on the respective computer system. For example, as described with reference to fig. 8E, in some embodiments, a prompt (e.g., notification) 812 is generated and displayed on the device storing the virtual card. In some embodiments, displaying visual cues is performed in conjunction with providing notifications (e.g., sounds, vibrations, or other alerts).
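The proximity check could be implemented, for example, with Core Location's distance computation. The store coordinates and the threshold (roughly the 10 to 20 foot range mentioned above) are illustrative assumptions, not values from the embodiments.

import CoreLocation

// Returns true when the device is close enough to the predefined location (a store here)
// to justify prompting the user to open the stored virtual card.
func shouldPromptForVirtualCard(device: CLLocation, store: CLLocation,
                                thresholdMeters: CLLocationDistance = 6) -> Bool {
    device.distance(from: store) <= thresholdMeters
}

let store = CLLocation(latitude: 37.3349, longitude: -122.0090)
let device = CLLocation(latitude: 37.3350, longitude: -122.0091)
print(shouldPromptForVirtualCard(device: device, store: store))
// false here: the device is roughly 14 m from the store, outside the ~20-foot threshold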
Automatically providing notifications to the user (including providing user interface elements for opening virtual cards stored in the user's virtual wallet) based on the user's location reduces the amount of input required from the user and provides the user with quick and easy access to the virtual wallet's available functions without requiring the user to navigate through a complex menu hierarchy. Providing additional control options and reducing the number of inputs required to perform the operation enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain the desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the respective computer system detects (1122) a user input selecting a visual cue for displaying the virtual card, and in response to detecting the user input selecting the visual cue, displays the virtual card using the display generating component of the respective computer system. For example, in response to a user selecting a visual cue, a virtual card is displayed (e.g., within a user interface of a virtual wallet), as described with reference to fig. 8E-8F.
Automatically providing user interface elements that, when selected by a user, open a virtual card stored in the user's virtual wallet reduces the amount of input required from the user and provides the user with quick and easy access to the available functions of the virtual wallet without requiring the user to navigate through a complex menu hierarchy. Providing additional control options and reducing the number of inputs required to perform the operation enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain the desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the virtual card includes (1124) displaying the machine readable code. For example, the virtual card 1006 in fig. 8D includes the displayed QR code 1004.
Displaying a machine-readable code, such as a QR code, in a virtual card reduces the amount of input required to share information stored in the machine-readable code with another device, such that the other device need only scan the machine-readable code (e.g., rather than requiring alternative methods for user selection for sharing and/or additional user input for the recipient of the shared information). Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the virtual card includes (1126) displaying a description of the wearable object. For example, virtual card 1006 in fig. 8D includes a description of a watch band (e.g., "style A"). In some embodiments, the description of the wearable object includes a description of one or more sizing parameters (e.g., "size 6" is shown in fig. 8D). In some embodiments, the description of the wearable object includes a description of a group or class of wearable objects (e.g., watchbands that fit a 44mm model of watch, or a particular style of watchband having a particular size (e.g., small, medium, large, 1, 2, 3, 4, 5, 6, 7, 8, 9, etc.)). In some embodiments, the description of the wearable object includes a color (e.g., including characteristics of the wearable object that the user selected in an application (e.g., an electronic store or measurement application) prior to measuring the user's wrist (e.g., the wearable object shown in fig. 5C)).
Displaying information about the wearable object within the virtual card improves visual feedback to the user by displaying the stored information so that the user can see the information displayed on the user device. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order in which the operations in fig. 11A-11C are described is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., methods 900, 1000, and 1200) are likewise applicable in a similar manner to method 1100 described above with respect to fig. 11A-11C. For example, the user interface object described above with reference to method 1100 optionally has one or more of the features of the user interface object described herein with reference to other methods described herein (e.g., methods 900, 1000, and 1200). For the sake of brevity, these details are not repeated here.
Fig. 12A-12D are flowcharts illustrating a method 1200 of prompting a user to adjust a position of a body part of the user to a correct position for measurement, according to some embodiments. The method 1200 is performed at a computer system (e.g., the portable multifunction device 100, the device 300, or the device 800) that includes a display generating component (e.g., a display, optionally a touch-sensitive display, a projector, a head-up display, etc.), one or more cameras (and optionally one or more depth sensors), and one or more input devices, optionally one or more gesture sensors, optionally one or more sensors for detecting contact strength with a touch-sensitive surface, and optionally one or more tactile output generators (and/or communication therewith). Some operations in method 1200 are optionally combined and/or the order of some operations is optionally changed.
As described below, the method 1200 provides an intuitive way of providing visual feedback to a user to indicate proper positioning of a portion of the user's body for a device to automatically measure a portion of the user's body without requiring the user to provide input when the user moves a portion of the user's body to obtain a measurement. Providing improved visual feedback to the user and performing operations (e.g., automatically) without further user input when a set of conditions has been met enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
The computer system displays (1202) a first visual cue (e.g., a target) indicating a location for moving the body part into view of the one or more cameras at a first fixed location within the first user interface. In some embodiments, the first visual cue comprises a contour of a circle fixed to a center of the display. For example, as shown in fig. 6C-6M, the target 618 is displayed at the same location within each of the user interfaces shown in fig. 6C-6M.
Upon displaying (1204) a first visual cue indicating a location for moving the body part into the field of view of the one or more cameras, the computer system detects (1206) a portion of the user's body in the field of view of the one or more cameras and corresponding to the body part using the one or more cameras. For example, as described with reference to fig. 5G-5O, within the physical environment 531, the user's hand 532 is moved into view of one or more cameras of the device 100. In some embodiments, detecting includes scanning (e.g., for measuring) a portion of the user's body.
In response to detecting (1208) a portion of the user's body in the field of view of the one or more cameras, the computer system displays (1210) a representation of the portion of the user's body and displays (1212) a second visual cue fixed at a predefined position relative to the representation of the portion of the user's body, wherein a position (e.g., a current position) of the second visual cue relative to a position of the first visual cue is indicative of movement of the body part required to satisfy the body part positioning precondition. In some embodiments, the computer system determining the location of the second visual cue relative to the location of the first visual cue comprises the computer system determining a separation (e.g., a distance and direction) between the first visual cue and the second visual cue. In some embodiments, the second visual cue comprises a circle (e.g., smaller than the first visual cue) fixed to a location on (or beside) a portion of the representation of the user's body. For example, the first visual cue is centered on the display and the second visual cue is fixed to the center of the user's palm/hand. In some embodiments, the second visual cue fills the first visual cue when aligned with the first visual cue. For example, in fig. 6C-6F, the computer system displays a point 620 (e.g., a second visual cue) at a location relative to a representation 622 of the user's hand. When the user moves the user's hand into position, the computer system moves the point 620 with the representation of the user's hand (e.g., proportionally) until the point 620 aligns with the target 618 to be in position (e.g., to satisfy the body part positioning precondition).
In some embodiments, the computer system detects (1214) movement of the body part while the first visual cue and the second visual cue are displayed. In some embodiments, in response to detecting movement of the body part, the computer system moves the second visual cue on the display without moving the first visual cue on the display. For example, as the representation of the user's hand moves on the display 100 in fig. 6C-6F, the point 620 moves with the representation of the user's hand (e.g., to maintain its relative position with respect to the representation of the user's hand), and the target 618 remains in the same fixed position as the representation of the user's hand moves.
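A simplified Swift sketch of the two cues follows: the target is fixed in screen space, the dot is fixed relative to the tracked hand, and alignment is tested by screen-space distance. The coordinates, offsets, and tolerance are assumptions for illustration only.

import Foundation
import CoreGraphics

struct AlignmentCues {
    let targetCenter: CGPoint          // first visual cue: fixed in the user interface
    var dotOffsetInHand: CGPoint       // second visual cue: fixed relative to the hand

    func dotCenter(handOrigin: CGPoint) -> CGPoint {
        CGPoint(x: handOrigin.x + dotOffsetInHand.x,
                y: handOrigin.y + dotOffsetInHand.y)
    }

    func isAligned(handOrigin: CGPoint, tolerance: CGFloat = 12) -> Bool {
        let dot = dotCenter(handOrigin: handOrigin)
        let dx = dot.x - targetCenter.x
        let dy = dot.y - targetCenter.y
        return (dx * dx + dy * dy).squareRoot() <= tolerance
    }
}

let cues = AlignmentCues(targetCenter: CGPoint(x: 195, y: 422),
                         dotOffsetInHand: CGPoint(x: 40, y: 60))
print(cues.isAligned(handOrigin: CGPoint(x: 100, y: 300)))   // false: keep guiding the user
print(cues.isAligned(handOrigin: CGPoint(x: 155, y: 362)))   // true: precondition satisfied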
Automatically moving the representation of the portion of the user's body in the user interface in accordance with how the user is moving the user's body (e.g., as determined by one or more cameras) provides continuous visual feedback to the user so that the user is aware of where and how to move the portion of the user's body so that the representation of the portion of the user's body is in place for scanning. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with the computer system determining that the body part is moved a first amount, the computer system moves (1216) the second visual cue a second amount on the display corresponding to the first amount of movement of the body part. In some embodiments, in accordance with the computer system determining that the body part moves a third amount different from the first amount, the computer system moves the second visual cue by a fourth amount on the display that corresponds to the third amount of movement of the body part and that is different from the second amount of movement of the second visual cue. In some implementations, the first amount is proportional to the second amount (e.g., a predefined scaling factor) and the third amount is proportional to the fourth amount. For example, as shown in fig. 6E-6F, as the user moves the user's hand (e.g., to the right) in the physical environment, the representation of the user's hand displayed on the device 100 is updated according to the current view of the one or more cameras of the device 100 (e.g., to the right), and the computer system moves the point 620 along with the representation of the user's hand (e.g., to maintain its position relative to the representation of the user's hand). For example, as the user moves the user's hand a different amount (e.g., to the right), the computer system moves point 620 a proportional amount (e.g., in the same direction).
As the user moves the body part of the user, automatically moving the second visual cue fixed relative to the representation of the body part of the user provides continuous visual feedback to the user so that the user is aware of how to adjust the body part of the user in order to align the second visual cue with the target. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that the body part is moving in a first direction, the computer system causes the second visual cue to move on the display in a second direction corresponding to the first direction of movement of the body part (1218). In some embodiments, in accordance with a determination that the body part is moving in a third direction different from the first direction, the computer system causes the second visual cue to move on the display in a fourth direction corresponding to the third direction of movement of the body part and different from the second direction. In some embodiments, the first respective direction is the same as the first direction and the second respective direction is the same as the second direction. For example, as shown in fig. 6E-6F, as the user moves the user's hand (e.g., to the right) in the physical environment, the device updates the representation of the user's hand displayed on the device 100 (e.g., to the right) according to the current field of view of the one or more cameras of the device 100, and the point 620 moves with the representation of the user's hand (e.g., to maintain its position relative to the representation of the user's hand).
As the user moves the body part of the user, automatically moving the second visual cue fixed relative to the representation of the body part of the user provides continuous visual feedback to the user so that the user is aware of how to adjust the body part of the user in order to align the second visual cue with the target. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the computer system displays (1220) the first visual cue at a fixed size and at a fixed location in the first user interface as the body part of the user moves. In some embodiments, the first visual cue is fixed in the user interface by the computer system such that the first visual cue does not move as the user's hand moves. In some implementations, the computer system moves the second visual cue as the user's hand moves (e.g., the first visual cue is fixed at a particular location in the user interface, and the second visual cue moves as the user's hand moves). For example, as shown in fig. 6C-6F, the representation of the point 620 is fixed relative to the palm of the user.
Maintaining a first visual cue (e.g., a target) at a fixed location within the user interface provides continuous visual feedback to the user such that the user is aware of where and how to move a portion of the user's body in order to align a second visual cue fixed to a representation of the portion of the user's body with the target fixed within the user interface. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system updates (1222) the size of the displayed second visual cue according to a change in the position of the body part relative to the one or more cameras. For example, in accordance with the computer system determining that the body part moves closer to the one or more cameras (e.g., and the size of the representation of the user's body part increases), the computer system increases the size of the second visual cue by an amount corresponding to (e.g., proportionally to) the change in the size of the body part. For example, as shown in fig. 6C-6E, the size of the representation of the user's hand increases (shown in fig. 6D) and the size of the dot 620 also increases (e.g., proportionally) such that the dot 620 is proportional to the current size of the representation of the user's hand.
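One way to keep the dot proportional to the apparent size of the hand, shown only as an assumption, is to scale its diameter from the hand's bounding box in the camera image; the scale factor and example boxes are illustrative.

import CoreGraphics

// Diameter of the second visual cue, derived from the apparent size of the hand on screen.
func dotDiameter(forHandBoundingBox box: CGRect, scale: CGFloat = 0.15) -> CGFloat {
    // Use the larger side of the hand's bounding box as a proxy for apparent size.
    max(box.width, box.height) * scale
}

let farHand  = CGRect(x: 0, y: 0, width: 120, height: 160)
let nearHand = CGRect(x: 0, y: 0, width: 240, height: 320)
print(dotDiameter(forHandBoundingBox: farHand),
      dotDiameter(forHandBoundingBox: nearHand))   // 24.0 48.0: the dot grows as the hand approaches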
Automatically updating the size of the second visual cue so that it is proportional to the size of the representation of the portion of the user's body provides continuous visual feedback to the user so that the user knows whether the user should move the portion of the user's body closer to or farther from the device in order to fit the second visual cue within the target. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, in accordance with determining that the portion of the user's body changes position relative to the one or more cameras, the computer system updates (1224) the position of the second visual cue within the first user interface while maintaining the position of the second visual cue at a fixed predefined position relative to the representation of the portion of the user's body. For example, as described with reference to fig. 6C-6E, as the user's hand moves in the physical environment, the computer system updates the displayed representation of the user's hand, and the computer system updates the point 620 (e.g., the second visual cue) in position so as to remain in the same fixed position relative to the representation of the user's hand.
Automatically moving the second visual cue fixed relative to the representation of the portion of the user's body without moving the target (e.g., the first visual cue) provides continuous visual feedback to the user so that the user is aware of where and how to move the portion of the user's body in order to align the second visual cue with the target. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays (1226) the second visual cue at a first angle. In some embodiments, in accordance with a determination that the body part changes position relative to the one or more cameras, the computer system updates the second visual cue for display at a second angle. In some embodiments, the second angle is determined based on the changed position of the body part. In some implementations, the second visual cue is a virtual object that rotates relative to the plane of the display such that it starts at a significant angle (e.g., substantially perpendicular) relative to the plane of the display and then rotates such that it is substantially parallel to the display. For example, in some embodiments, the point 644 in fig. 6I-6K is displayed by the computer system at an angle such that the point 644 appears elliptical (e.g., as if viewed from the side), and as the representation of the user's hand rotates, the angle at which the point 644 is displayed is also updated by the computer system according to how much the user's hand has rotated.
Automatically changing the display angle of the second visual cue that is fixed relative to the representation of the portion of the user's body provides continuous visual feedback to the user so that the user is aware of the angle of rotation through which the user must move the user's body in order to align the second visual cue with the target. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays (1228) text including instructions to move the body part (and optionally instructions indicating the manner in which the body part is moved) in order to satisfy the body part positioning precondition. In some embodiments, the second visual cue is displayed simultaneously with text comprising the instructions. In some embodiments, the text is displayed before the second visual cue is displayed. For example, the text includes instructions to move a portion of the user's body to align the second visual cue with the first visual cue in the first user interface. For example, as shown in FIG. 6A, the device provides text instructions 602 and displays exemplary points 606 and targets 608 to instruct the user to place the points 606 into circles.
Displaying text instructions to the user to explain how the user achieves the appropriate position of the user's body for the measurement improves visual feedback to the user so that the user knows how to move the user's body in order to obtain the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to the body part meeting the body part positioning precondition, the computer system displays (1230) a first timer indicating an amount of time the user must maintain the body part in a position that meets the body part positioning precondition. For example, fig. 6G shows a text prompt 630 including time (e.g., countdown time) and a timer 632 (e.g., countdown timer). In some embodiments, the body part positioning precondition includes a user's requirement to position the second visual cue inside (e.g., on top of) the first visual cue (e.g., align the point within a circle), and in response to the second visual cue being aligned with the first visual cue, the computer system displays a time and/or timer indicating that the user must maintain the second visual cue aligned with the first visual cue for a predetermined amount of time (e.g., 1 second, 3 seconds, 5 seconds, 15 seconds, 30 seconds, 1 minute, 3 minutes, 5 minutes). In some embodiments, the second visual cue is transformed into or presented by the computer system to include the first timer (e.g., by adding a mobile user interface element, such as a line or point that moves in a predetermined pattern (such as sweeping around a circle within a predefined amount of time)).
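The hold requirement driving the countdown timer can be sketched as a dwell timer that resets whenever alignment is lost; the three-second duration and the names are illustrative assumptions.

import Foundation

struct DwellTimer {
    let requiredHold: TimeInterval = 3.0
    private(set) var heldFor: TimeInterval = 0

    init() {}

    /// Fraction of the countdown completed, for driving the on-screen timer.
    var progress: Double { min(heldFor / requiredHold, 1.0) }
    var isComplete: Bool { heldFor >= requiredHold }

    mutating func update(aligned: Bool, deltaTime: TimeInterval) {
        // Reset if the user drifts out of alignment before the countdown finishes.
        heldFor = aligned ? heldFor + deltaTime : 0
    }
}

var timer = DwellTimer()
timer.update(aligned: true, deltaTime: 1.0)
timer.update(aligned: false, deltaTime: 0.5)   // drifted away: countdown restarts
timer.update(aligned: true, deltaTime: 3.0)
print(timer.isComplete)                        // true: capture the first scan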
Displaying a timer to the user to indicate the amount of time the user has to maintain the position of the user's body improves visual feedback to the user so that the user knows how long to stay stationary in order to obtain the first measurement scan. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to the body part satisfying the body part positioning precondition, the computer system displays (1232) an indication (e.g., a success message or indication) that the first scan of the body part is complete. In some embodiments, the first scan of the body part includes capturing an image of the body part. For example, the first scan includes a captured image of the palm of the user's hand, and the displayed indication may indicate that the first scan has been completed successfully, as shown in fig. 6H. In some embodiments, the indication is or includes a change in the appearance of the timer. For example, the computer system updates the display of the timer (e.g., timer 632, fig. 6G) to make an animated transition to the visual cue when the precondition is met (e.g., similar to a second visual cue having a change in color or intensity as compared to a previously displayed second visual cue).
Displaying a success message to the user to indicate that the first scan is complete improves visual feedback to the user, so that the user is aware that the first measurement scan was successful and knows, without further inquiry, that the process does not need to be restarted to obtain the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displaying an indication that the first scan of the body part is complete includes (1234) the computer system increasing the brightness of the displayed first user interface from a first level to a second level for a predetermined period of time, and decreasing the brightness of the displayed first user interface (e.g., to the first level or to a level below or near the first level) after increasing the brightness to the second level. For example, prior to displaying the user interface shown in fig. 6H, the device generates a flash (e.g., or other animated transition) on the display. In some implementations, the second level is brighter than the first level (e.g., the display blinks white). In some embodiments, the second level is darker than the first level.
Displaying a flash animation to the user to indicate that the first scan is complete improves visual feedback to the user so that the user knows that the first measurement scan has been obtained and the user does not need to continue to maintain the user's body in the same proper position for the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displaying an indication of completion of the first scan of the body part includes (1236) the computer system displaying a check mark. In some embodiments, a check mark is displayed at the location of the first visual cue. For example, the check mark is displayed by the computer system in the open circle of the first visual cue (e.g., as shown in fig. 6H, the check mark 638 is displayed at a location within the user interface in which the target is displayed).
Displaying a check mark to indicate that the first scan is complete improves visual feedback to the user so that the user knows that the first measurement scan has been obtained and that the user does not need to continue to maintain the user's body in the same proper position for the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with the computer system determining that the body part satisfies the body part positioning precondition, the computer system displays (1238) text comprising instructions to: the body part is moved so as to satisfy the second body part positioning precondition, the display of the first visual cue at a first fixed location within the first user interface is maintained, and the second visual cue is replaced by a third visual cue fixed at a second predefined location (e.g., in simulated three-dimensional space) relative to the representation of the portion of the user's body. In some embodiments, the position of the third visual cue relative to the position of the first visual cue is indicative of movement of the body part required to meet the second body part positioning precondition. For example, after completing the first scan (e.g., capturing the first image) in response to meeting the first body part positioning precondition, the computing device displays instructions for completing the second scan (e.g., capturing the second image) in accordance with determining that the body part meets the second body part positioning precondition (e.g., different from the first body part positioning precondition). For example, the first body part precondition includes a condition for aligning a second visual cue (e.g., fixed to a portion of the palm of the user's hand) with the first visual cue, and the second body part precondition includes a condition for aligning a third visual cue (e.g., fixed to a location near the side of the user's hand) with the first visual cue. For example, fig. 6I shows a text instruction 642 that instructs the user to align a point 644 (e.g., a third visual cue) with the target 618 in order to satisfy the second body part positioning precondition.
Displaying text instructions to the user to explain how the user achieves the proper position of the user's body for the measurement improves visual feedback to the user so that the user knows how to obtain the second measurement scan. Displaying a success message to the user to indicate that the second scan is complete improves visual feedback to the user so that the user is aware that the second measurement scan was successful and the user does not need to restart the process to obtain the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to the body part meeting the second body part positioning precondition, the computer system displays (1240) a second timer indicating an amount of time the user must maintain the body part in a position that meets the second body part positioning precondition. In some implementations, the second timer is different from the first timer (e.g., a different amount of time on the timer is displayed and/or a different timer). In some embodiments, the second timer is displayed by the computer system to indicate that the user must maintain the third visual cue aligned with the first visual cue for a predetermined amount of time (e.g., 1 second, 3 seconds, 5 seconds, 15 seconds, 30 seconds, 1 minute, 3 minutes, 5 minutes). In some embodiments, the third visual cue is transformed into or presented by the computer system to include a second timer, for example, by adding a mobile user interface element, such as a line or point that moves in a predetermined pattern, such as a swipe hand or point that sweeps around a circle until it completes a 360 degree cycle within a predefined amount of time. For example, fig. 6L-6M illustrate that once the point 644 has been aligned with the target 618 (in fig. 6L), a timer 656 (in fig. 6M) is displayed.
Displaying a timer to the user to indicate the amount of time the user has to maintain the position of the user's body improves visual feedback to the user so that the user knows how long to stay stationary in order to obtain the second measurement scan. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with the computer system determining that the body part satisfies the second body part positioning precondition, the computer system displays (1242) an indication that the second scan of the body part is complete. For example, the computer system captures a second image of the body part (e.g., at a different angle and/or of a different portion of the body part). In some implementations, the indication includes a flash of the screen, a check mark, or an instruction to move to the next user interface (e.g., "continue"). For example, fig. 6N shows a text prompt 660 with a success message and a check mark 662.
Displaying a success message to the user to indicate that the second scan has completed improves visual feedback to the user so that the user is aware that the second measurement scan was successful and the user does not need to restart the process to obtain the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system displays (1244) one or more instructions on the display device before scanning the portion of the user's body using the one or more cameras. For example, fig. 6A-6B illustrate a coaching user interface prior to scanning (e.g., detecting) a user's hand/wrist.
Displaying instructions to the user in the form of animations and/or text prompts to explain how the user achieves the proper position of the user's body for the measurement improves visual feedback to the user so that the user knows how to obtain the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the one or more instructions include (1246) instructions for placing the computing device into an appropriate location for scanning. In some embodiments, the instructions include an instruction to lay the device flat on a table. In some implementations, the instructions for placing the computing device are displayed in response to determining that the device is not flat. For example, the device displays an error message indicating that the device is not flat, as shown in fig. 5M.
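As a hedged illustration of one way such a flatness check might be expressed in Swift (the patent does not specify how flatness is determined; the tolerance and message below are assumptions), the device could compare its pitch and roll, as reported by a motion sensor, against a small threshold:

// Hypothetical sketch: the device is treated as "flat" when both pitch and
// roll (in radians) are within a small tolerance of zero.
struct FlatnessCheck {
    var toleranceRadians = 0.1   // roughly 6 degrees; illustrative value only

    func isFlat(pitch: Double, roll: Double) -> Bool {
        abs(pitch) <= toleranceRadians && abs(roll) <= toleranceRadians
    }

    // Returns an error message like the one described above, or nil if flat.
    func errorMessage(pitch: Double, roll: Double) -> String? {
        isFlat(pitch: pitch, roll: roll) ? nil : "Lay the device flat to continue."
    }
}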
Providing instructions indicating that the user has not placed the device in the proper pose for measurement (e.g., flat) makes it easier for the user to know how to adjust the device into the proper position without requiring the user to provide additional user input asking the device whether the measurement has been successful. Providing improved visual feedback to the user when a set of conditions has been met and reducing the amount and/or degree of input required to perform an operation by (e.g., automatically) performing the operation enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the one or more instructions include (1248) instructions for selecting which body part is to be scanned using the one or more cameras. For example, the computing device prompts the user to select whether to scan a left body part (e.g., left wrist/hand) or a right body part (e.g., right wrist/hand), as shown in fig. 5D-5E.
Providing an option for selecting which side of the user's body should be measured avoids cluttering the user interface used for aligning the portion of the user's body for the measurement, because only the guidance for the selected side of the user's body to be measured is displayed. Providing different sides (e.g., left or right) for measuring portions of the user's body without cluttering the user interface with additional guidance related to unmeasured portions of the user's body enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the one or more instructions include instructions for moving the body part such that the body part is in a field of view of the one or more cameras (1250). For example, the instructions include instructions on how to move the hand (e.g., in a particular direction) so that the entire body part is in the field of view of the one or more cameras (e.g., instructions to move closer, farther, roll up sleeves, and/or remove jewelry, etc.). For example, fig. 5F shows an instruction of "place your left hand over the device and rotate your hand". Fig. 5K-5N illustrate error conditions including instructions for a user to move his hand into position within the field of view of one or more cameras.
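One hypothetical way to derive such corrective instructions (offered only as a sketch in Swift; the detection method, margin, and message strings are assumptions, not text from the patent) is to test whether a normalized bounding box for the detected hand lies fully within an inset of the camera frame:

// Hypothetical sketch: the hand's bounding box is normalized to 0...1 in both
// axes; if it spills past a margin, a corrective instruction is suggested.
struct FieldOfViewCheck {
    var margin = 0.05   // illustrative inset from the frame edges

    func instruction(for box: (minX: Double, minY: Double, maxX: Double, maxY: Double)) -> String? {
        if box.maxX - box.minX > 1 - 2 * margin {
            return "Move your hand farther from the camera."
        }
        if box.minX < margin || box.maxX > 1 - margin ||
           box.minY < margin || box.maxY > 1 - margin {
            return "Move your hand toward the center of the frame."
        }
        return nil   // hand is fully in view; no instruction needed
    }
}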
Displaying instructions to the user in the form of animations and/or text prompts to explain how the user achieves the proper position of the user's body for the measurement improves visual feedback to the user so that the user knows how to obtain the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the one or more instructions include (1252) displaying a representation of the body part in a position that satisfies the body part positioning precondition. For example, fig. 6A-6B include animations of the representation 610 of the user's hand rotating to move points (e.g., point 606 and/or point 612) into the target 608. In some embodiments, the one or more instructions further comprise displaying a representation of the body part in a position that satisfies the second body part positioning precondition. For example, the representation 610 of the user's hand in fig. 6A-6B is animated to rotate, as an example of the device successfully scanning the user's palm and the side of the user's hand.
Displaying an animation that demonstrates to the user how to move the representation of the body part into the appropriate position for the measurement improves visual feedback to the user so that the user knows how to obtain the measurement. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the computer system updates (1254) the display of the representation of the portion of the user's body prior to displaying the second visual cue. For example, the representation of the portion of the user's body is updated to be semi-transparent or faded. In some embodiments, the representation of the portion of the user's body is updated as the user's body part moves (e.g., changes position), such that the representation of the portion of the user's body is updated in accordance with the movement of the body part in the field of view of the one or more cameras. For example, as described with reference to fig. 5K-5M, when the user's hand is not in place, the representation of the user's hand is faded (e.g., displayed with some transparency). In some implementations, after the user's hand is in place for scanning, the representation of the user's hand is no longer rendered translucent (or partially transparent), and the second visual cue (e.g., point 620) appears at its predefined fixed position relative to the representation of the user's hand.
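A minimal Swift sketch of the fading behavior described above (the placement states, opacity values, and names are hypothetical, not values taken from the patent) could map a positioning state to an opacity for the displayed representation:

// Hypothetical sketch: the hand representation is fully opaque once the
// positioning criteria are met and faded while they are not.
enum HandPlacement {
    case inPosition
    case outOfPosition
    case notDetected
}

func representationOpacity(for placement: HandPlacement) -> Double {
    switch placement {
    case .inPosition:    return 1.0   // show normally; anchored cues may appear
    case .outOfPosition: return 0.4   // faded, prompting the user to reposition
    case .notDetected:   return 0.0   // nothing to draw yet
    }
}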
Visually de-emphasizing (e.g., fading) the representation of the portion of the user's body provides visual feedback that informs the user that the portion of the user's body is not in place within the field of view and prompts the user to move the portion of the user's body if the user wishes to measure the portion of the user's body. Providing improved visual feedback to the user enhances the operability of the system and makes the user-device interface more efficient (e.g., by helping the user obtain desired results and reducing user errors in operating/interacting with the system), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order in which the operations in fig. 12A-12D are described is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., methods 900, 1000, and 1100) are likewise applicable in a similar manner to method 1200 described above with respect to fig. 12A-12D. For example, the user interface object described above with reference to method 1200 optionally has one or more of the features of the user interface object described herein with reference to other methods described herein (e.g., methods 900, 1000, and 1100). For the sake of brevity, these details are not repeated here.
The operations described above with reference to fig. 9A to 9C, 10A to 10D, 11A to 11C, and 12A to 12D are optionally implemented by the components depicted in fig. 1A to 1B. For example, these operations are optionally implemented by event sorter 170, event recognizer 180, and event handler 190. An event monitor 171 in the event sorter 170 detects a contact on the touch-sensitive display 112 and an event dispatcher module 174 delivers event information to the application 136-1. The respective event recognizer 180 of the application 136-1 compares the event information to the respective event definition 186 and determines whether the first contact at the first location on the touch-sensitive surface (or whether the rotation of the device) corresponds to a predefined event or sub-event, such as a selection of an object on the user interface, or a rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, the event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or invokes data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a corresponding GUI updater 178 to update what is displayed by the application. Similarly, it will be apparent to those skilled in the art how other processes may be implemented based on the components depicted in fig. 1A-1B.
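For illustration only, the dispatch flow described above can be loosely mirrored by the Swift sketch below; the type and protocol names are invented for this example and are not the numbered components of fig. 1A-1B:

// Illustrative sketch: a sorter delivers an event to registered recognizers;
// a recognizer whose definition the event matches invokes its handler, which
// can then update application state and the displayed interface.
struct Event {
    let kind: String
    let location: (x: Double, y: Double)
}

protocol EventRecognizer {
    func matches(_ event: Event) -> Bool
    func handle(_ event: Event)
}

struct TapRecognizer: EventRecognizer {
    let onTap: (Event) -> Void
    func matches(_ event: Event) -> Bool { event.kind == "tap" }
    func handle(_ event: Event) { onTap(event) }
}

struct EventSorter {
    var recognizers: [EventRecognizer]

    func dispatch(_ event: Event) {
        // Deliver to the first recognizer whose definition the event satisfies.
        recognizers.first(where: { $0.matches(event) })?.handle(event)
    }
}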
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention and various described embodiments with various modifications as are suited to the particular use contemplated.

Claims (104)

1. A method, comprising:
at a computer system in communication with a display device and one or more cameras:
displaying a visual cue in a first area of a first user interface to move a body part into a field of view of the one or more cameras;
upon displaying the visual cue to move the body part into the field of view of the one or more cameras:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras and corresponding to the body portion;
In response to detecting the portion of the user's body, displaying a representation of the portion of the user's body, comprising:
in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras meets a first criterion, displaying, via the display device, the representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency; and
in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras fails to meet the first criterion, displaying the representation of the portion of the user's body with a second transparency indicating that the first criterion has not been met.
2. The method of claim 1, further comprising, upon displaying the visual cue to move the body part into the field of view of the one or more cameras, displaying an animated transition, wherein at least a portion of the visual cue is moved to a position proximate the representation of the portion of the user's body.
3. The method of claim 2, wherein the at least a portion of the visual cue comprises a contour aligned to the representation of the portion of the user's body.
4. A method according to any one of claims 2 to 3, wherein the at least a portion of the visual cue is displayed in a shape that matches a shape of the representation of the portion of the user's body in the field of view of the one or more cameras.
5. The method of any of claims 1-4, wherein the visual cue comprises a representation of a hand.
6. The method of any one of claims 1 to 5, wherein the portion of the user's body comprises a hand.
7. The method of any of claims 1-6, further comprising, in accordance with the determination that the portion of the user's body in the field of view of the one or more cameras fails to meet the first criterion, displaying text indicating that the first criterion has not been met.
8. The method of any of claims 1-7, wherein displaying the representation of the portion of the user's body with a second transparency indicating that the first criterion has not been met comprises visually de-emphasizing the representation of the portion of the user's body.
9. The method of any of claims 1 to 8, wherein the portion of the user's body is in a physical environment, and the method comprises:
Displaying a background prior to detecting the portion of the user's body; and
in response to detecting that the portion of the user's body is in the field of view of the one or more cameras, displaying the representation of the portion of the user's body in the field of view of the one or more cameras over the background.
10. The method of any of claims 1 to 8, wherein the portion of the user's body is in a physical environment, and the method comprises:
displaying a representation of the field of view of the one or more cameras comprising a representation of the physical environment; and
in response to detecting that the portion of the user's body is in the field of view of the one or more cameras, visually de-emphasizing the representation of the physical environment.
11. The method of any of claims 1 to 10, further comprising, in response to detecting the portion of the user's body, displaying an indicator at least partially overlaying the representation of the portion of the user's body in accordance with the determining that the portion of the user's body meets the first criterion.
12. The method of any of claims 1 to 11, further comprising displaying a second user interface, the second user interface comprising:
an option for selecting a product having a plurality of size options, the size options being selectable in accordance with measurements of the portion of the user's body in the field of view of the one or more cameras; and
an affordance that, when selected, initiates display of the first user interface.
13. The method of any of claims 1-12, further comprising displaying in a third user interface instructions identifying the first body portion as either a right body portion or a left body portion.
14. The method of any of claims 1-13, further comprising, in the first user interface, displaying a first color in a second area of the first user interface; and
replacing the display of the first color by a display of a second color different from the first color.
15. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for:
Displaying a visual cue in a first area of a first user interface to move a body part into a field of view of the one or more cameras;
upon displaying the visual cue to move the body part into the field of view of the one or more cameras:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras and corresponding to the body portion;
in response to detecting the portion of the user's body, displaying a representation of the portion of the user's body, comprising:
in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras meets a first criterion, displaying, via the display device, the representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency; and
in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras fails to meet the first criterion, displaying the representation of the portion of the user's body with a second transparency indicating that the first criterion has not been met.
16. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system comprising and/or in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to:
displaying a visual cue in a first area of a first user interface to move a body part into a field of view of the one or more cameras;
upon displaying the visual cue to move the body part into the field of view of the one or more cameras:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras and corresponding to the body portion;
in response to detecting the portion of the user's body, displaying a representation of the portion of the user's body, comprising:
in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras meets a first criterion, displaying, via the display device, the representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency; and
in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras fails to meet the first criterion, displaying the representation of the portion of the user's body with a second transparency indicating that the first criterion has not been met.
17. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices; and
means for displaying a visual cue in a first area of a first user interface to move a body part into a field of view of the one or more cameras;
means enabled, when displaying the visual cue to move the body part into the field of view of the one or more cameras, to:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras and corresponding to the body portion;
an apparatus enabled to display a representation of the portion of the user's body in response to detecting the portion of the user's body, comprising:
means enabled to display, via the display device, the representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras meets a first criterion; and
means enabled, in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras fails to meet the first criterion, to display the representation of the portion of the user's body with a second transparency indicating that the first criterion has not been met.
18. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
means for displaying a visual cue in a first area of a first user interface to move a body part into a field of view of the one or more cameras;
means enabled, when displaying the visual cue to move the body part into the field of view of the one or more cameras, to:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras and corresponding to the body portion;
an apparatus enabled to display a representation of the portion of the user's body in response to detecting the portion of the user's body, comprising:
Means enabled to display, via the display device, the representation of the portion of the user's body in the field of view of the one or more cameras with a first transparency in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras meets a first criterion; and
means enabled, in accordance with a determination that the portion of the user's body in the field of view of the one or more cameras fails to meet the first criterion, to display the representation of the portion of the user's body with a second transparency indicating that the first criterion has not been met.
19. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-14.
20. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system comprising and/or in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to perform the method of any of claims 1-14.
21. A graphical user interface on a computer system that comprises and/or is in communication with a display generation component, one or more cameras, one or more input devices, a memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with the method of any of claims 1-14.
22. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices; and
apparatus for performing the method of any one of claims 1 to 14.
23. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
Apparatus for performing the method of any one of claims 1 to 14.
24. A method, comprising:
at a computer system in communication with a display device and one or more cameras:
displaying in a user interface a first representation of a body part in a field of view of the one or more cameras;
detecting movement of the body part using the one or more cameras, wherein the displayed first representation of the body part is updated in accordance with the movement of the body part; and
when displaying the first representation of the body part, displaying an indicator at a fixed position relative to the first representation of the body part, wherein:
displaying the indicator at a first location in the user interface overlaying at least a portion of the representation of the body part,
updating the indicator in accordance with the movement of the body part; and
the indicator comprises an indication of a suggested direction of movement of the body part.
25. The method of claim 24, wherein the indication of the suggested movement direction comprises an animation of the indicator indicating the movement direction.
26. The method of any one of claims 24 to 25, wherein the body part is a wrist and/or a hand.
27. The method of any one of claims 24 to 26, wherein the body part is in a physical environment; and
the method further comprises the steps of:
displaying a background in the first user interface;
detecting, using the one or more cameras, a portion of the physical environment and the body part within the field of view of the one or more cameras; and
the representation of the body part is displayed over the background without displaying a representation of the portion of the physical environment within the field of view of the one or more cameras.
28. The method of any of claims 24 to 27, further comprising:
displaying a background having a first color; and
replacing the display of the background having the first color by the display of the background having the second color.
29. The method of claim 28, wherein the first color and the second color correspond to color options of a physical object to be worn on the body part.
30. The method of any of claims 24 to 29, further comprising:
Detecting movement of the body part in a first direction; and
in response to detecting the movement of the body part in the first direction:
displaying the representation of the body part at a second location in the user interface in accordance with the movement of the body part; and
displaying the indicator at the fixed location relative to the first representation of the body part displayed at the second location.
31. The method of any of claims 24-30, wherein the first representation of the body part and the indicator are displayed in the user interface in respective first sizes; and
the method further comprises the steps of:
detecting movement of the body part that changes a distance between the one or more cameras and the body part; and
in response to detecting a change in the distance between the one or more cameras and the body part:
displaying the first representation of the body part in a second size according to the changed distance; and
displaying the indicator in a corresponding second size according to the changed distance.
32. The method of any one of claims 24 to 31, wherein:
detecting movement of the body part includes detecting rotation of the body part; and
the method further comprises the steps of:
upon detecting rotation of the body part, scanning one or more images using the one or more cameras to determine a measurement of the body part; and
updating the indicator to indicate progress of scanning the one or more images.
33. The method of any of claims 24-32, wherein the indication of the suggested direction of movement of the body part comprises an indication of a rotation of the body part.
34. The method of any of claims 24 to 33, further comprising, while displaying the first representation of the body part, capturing one or more images of the body part using the one or more cameras, wherein the one or more images are used to determine a measurement of the body part; and
displaying the measured dimension corresponding to the body part in a second user interface.
35. The method of any one of claims 24 to 34, wherein:
the fixed position relative to the first representation of the body part is a first fixed position relative to the first representation of the body part, and
The method further comprises the steps of:
receiving a first user input in a first direction; and
in response to the first user input, updating the first fixed position of the indicator relative to the body part to a second fixed position relative to the body part that is different from the first fixed position.
36. The method of any one of claims 24 to 35, further comprising:
replacing a display of the indicator at the first location in the user interface with a user interface element at the first location in the user interface, wherein the user interface element is displayed at the fixed location relative to the first representation of the body part and the user interface element indicates a first size of a portion of the body part corresponding to the portion of the representation of the body part at the fixed location.
37. The method of claim 36, wherein the fixed position relative to the first representation of the body part is a first fixed position; and
the method further comprises the steps of:
receiving a second user input for moving the user interface element; and
in response to receiving the second user input, moving the user interface element from the first position in the user interface overlaying at least a portion of the representation of the body part to a third fixed position relative to the first representation of the body part.
38. The method of claim 37, further comprising, in accordance with a determination that the user interface element is at the third fixed location relative to the first representation of the body part, updating the user interface element to indicate a second size of a portion of the body part corresponding to the third fixed location.
39. The method of any of claims 36-38, further comprising capturing an image comprising the first representation of the body part and the user interface element at the first fixed location relative to the first representation of the body part, wherein the user interface element indicates a size of the portion of the body part corresponding to the first fixed location of the user interface element.
40. The method of claim 39, further comprising, when displaying the image, receiving a third user input for moving the user interface element to a different fixed position relative to the first representation of the body part in the image.
41. The method of any one of claims 24 to 40, wherein the user interface is a first user interface; and
the method further comprises the steps of:
displaying a second user interface, the second user interface comprising:
an option for selecting a product having a plurality of size options, the size options being selectable in accordance with measurements of the body part in the field of view of the one or more cameras; and
an affordance that, when selected, initiates display of the first user interface.
42. The method of any one of claims 24 to 41, wherein the user interface is a user interface within a respective application executed by the computer system.
43. The method of any one of claims 24 to 42, wherein the appearance of the indicator gradually changes as the body part moves to indicate the progress of rotation of the body part.
44. The method of any one of claims 24 to 42, comprising, in response to detecting movement of the body part:
in accordance with a determination that the body part moves at a speed below a threshold speed, gradually changing the appearance of the indicator as the body part moves to indicate the progress of movement of the body part toward a target pose; and
in accordance with a determination that the body part moves at a speed above the threshold speed, forgoing at least a portion of the change in appearance of the indicator as the body part moves, to indicate that the body part is moving too fast toward the target pose.
45. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying in a user interface a first representation of a body part in a field of view of the one or more cameras;
detecting movement of the body part using the one or more cameras, wherein the displayed first representation of the body part is updated in accordance with the movement of the body part; and
when displaying the first representation of the body part, displaying an indicator at a fixed position relative to the first representation of the body part, wherein:
Displaying the indicator at a first location in the user interface overlaying at least a portion of the representation of the body part,
updating the indicator in accordance with the movement of the body part; and
the indicator comprises an indication of a suggested direction of movement of the body part.
46. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system comprising and/or in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to:
displaying in a user interface a first representation of a body part in a field of view of the one or more cameras;
detecting movement of the body part using the one or more cameras, wherein the displayed first representation of the body part is updated in accordance with the movement of the body part; and
when displaying the first representation of the body part, displaying an indicator at a fixed position relative to the first representation of the body part, wherein:
Displaying the indicator at a first location in the user interface overlaying at least a portion of the representation of the body part,
updating the indicator in accordance with the movement of the body part; and
the indicator comprises an indication of a suggested direction of movement of the body part.
47. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices; and
means for displaying in a user interface a first representation of a body part in a field of view of the one or more cameras;
means for detecting movement of the body part using the one or more cameras, wherein the displayed first representation of the body part is updated in accordance with the movement of the body part; and
means enabled when displaying the first representation of the body part to display an indicator at a fixed position relative to the first representation of the body part, wherein:
displaying the indicator at a first location in the user interface overlaying at least a portion of the representation of the body part,
updating the indicator in accordance with the movement of the body part; and
the indicator comprises an indication of a suggested direction of movement of the body part.
48. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
means for displaying in a user interface a first representation of a body part in a field of view of the one or more cameras;
means for detecting movement of the body part using the one or more cameras, wherein the displayed first representation of the body part is updated in accordance with the movement of the body part; and
means enabled when displaying the first representation of the body part to display an indicator at a fixed position relative to the first representation of the body part, wherein:
displaying the indicator at a first location in the user interface overlaying at least a portion of the representation of the body part,
updating the indicator in accordance with the movement of the body part; and
the indicator comprises an indication of a suggested direction of movement of the body part.
49. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 24-44.
50. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system comprising and/or in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to perform the method of any of claims 24-44.
51. A graphical user interface on a computer system that comprises and/or is in communication with a display generation component, one or more cameras, one or more input devices, a memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with the method of any of claims 24-44.
52. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices; and
apparatus for performing the method of any one of claims 24 to 44.
53. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
apparatus for performing the method of any one of claims 24 to 44.
54. A method, comprising:
at a computer system in communication with a display device and one or more cameras:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras;
scanning the portion of the user's body in the field of view of the one or more cameras to determine a measurement of the portion of the user's body in the field of view of the one or more cameras; and
after scanning the portion of the user's body, generating machine-readable code comprising information that identifies one or more sizing parameters of a wearable object or describes the measurement of the portion of the user's body based on the measurement of the portion of the user's body.
55. The method of claim 54, comprising:
after scanning the portion of the user's body, displaying a first user interface comprising a user interface object that when selected generates the machine-readable code;
detecting a user input selecting the user interface object; and
the machine-readable code is generated in response to detecting the user input.
56. The method of claim 54, wherein the machine-readable code comprises a QR code.
57. The method of any one of claims 54 to 56, further comprising:
scanning, at a second computer system, the machine-readable code; and
in response to scanning the machine-readable code, a process is initiated for displaying information about the wearable object on the second computer system or a third computer system communicatively coupled to the second computer system.
58. The method of claim 57, further comprising:
at the second computer system or the third computer system, in accordance with a determination that the machine-readable code has been scanned, displaying a first application on a first portion of a user interface displayed using a display generation component of the second computer system or the third computer system, the first portion being less than all of the user interface, wherein the first application includes the one or more sizing parameters identifying the wearable object or the information describing the measurement of the portion of the user's body within the first application.
59. The method of any one of claims 57 to 58, further comprising:
at the second computer system or the third computer system, in response to scanning the machine-readable code, displaying, using a display generating component of the second computer system or the third computer system, a card comprising information about the wearable object including the information identifying one or more sizing parameters of the wearable object stored in the machine-readable code.
60. The method of any one of claims 54 to 59, further comprising:
displaying an option for adding a virtual card comprising the information about the wearable object to a virtual wallet; and
in response to detecting user input selecting the option, the virtual card is stored in the virtual wallet.
61. The method of claim 60, further comprising, at a respective computer system on which the virtual card is stored or accessed, in accordance with a determination that the respective computer system is within a predefined proximity to a predefined location, displaying a visual cue for displaying the virtual card stored in the virtual wallet using a display generating component of the respective computer system.
62. The method of claim 61, further comprising:
at the respective computer system:
detecting a user input selecting the visual cue for displaying the virtual card; and
in response to detecting the user input selecting the visual cue, the virtual card is displayed using the display generating component of the respective computer system.
63. The method of any of claims 61-62, wherein displaying the virtual card comprises displaying the machine-readable code.
64. The method of any of claims 61-63, wherein displaying the virtual card includes displaying a description of the wearable object.
65. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras;
Scanning the portion of the user's body in the field of view of the one or more cameras to determine a measurement of the portion of the user's body in the field of view of the one or more cameras; and
after scanning the portion of the user's body, generating machine-readable code comprising information that identifies one or more sizing parameters of a wearable object or describes the measurement of the portion of the user's body based on the measurement of the portion of the user's body.
66. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system comprising and/or in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras;
scanning the portion of the user's body in the field of view of the one or more cameras to determine a measurement of the portion of the user's body in the field of view of the one or more cameras; and
After scanning the portion of the user's body, generating machine-readable code comprising information that identifies one or more sizing parameters of a wearable object or describes the measurement of the portion of the user's body based on the measurement of the portion of the user's body.
67. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices; and
means for detecting a portion of a user's body in the field of view of the one or more cameras using the one or more cameras;
means for scanning the portion of the user's body in the field of view of the one or more cameras to determine a measurement of the portion of the user's body in the field of view of the one or more cameras; and
means enabled after scanning the portion of the user's body to generate a machine-readable code comprising information that identifies one or more sizing parameters of a wearable object or describes the measurement of the portion of the user's body based on the measurement of the portion of the user's body.
68. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
means for detecting a portion of a user's body in the field of view of the one or more cameras using the one or more cameras;
means for scanning the portion of the user's body in the field of view of the one or more cameras to determine a measurement of the portion of the user's body in the field of view of the one or more cameras; and
means enabled after scanning the portion of the user's body to generate a machine-readable code comprising information that identifies one or more sizing parameters of a wearable object or describes the measurement of the portion of the user's body based on the measurement of the portion of the user's body.
69. A computer system, comprising:
a display generation section;
one or more cameras;
One or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 54-64.
70. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system comprising and/or in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to perform the method of any of claims 54-64.
71. A graphical user interface on a computer system that comprises and/or is in communication with a display generation component, one or more cameras, one or more input devices, a memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with the method of any of claims 54-64.
72. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices; and
apparatus for performing the method of any one of claims 54 to 64.
73. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
apparatus for performing the method of any one of claims 54 to 64.
74. A method, comprising:
at a computer system in communication with a display device and one or more cameras:
displaying a first visual cue at a first fixed location within a first user interface indicating a location for moving a body part into a field of view of the one or more cameras;
upon displaying the first visual cue indicating the location for moving the body part into the field of view of the one or more cameras:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras and corresponding to the body portion;
In response to detecting the portion of the user's body in the field of view of the one or more cameras:
displaying a representation of the portion of the user's body; and
displaying a second visual cue fixed at a predefined position relative to the representation of the portion of the user's body, wherein a position of the second visual cue relative to a position of the first visual cue indicates movement of the body part required to meet a body part positioning precondition.
75. The method of claim 74, further comprising:
detecting movement of the body part while the first visual cue and the second visual cue are displayed; and
in response to detecting the movement of the body part, moving the second visual cue on the display without moving the first visual cue on the display.
76. The method of any one of claims 74-75, wherein:
in accordance with a determination that the body portion is moved a first amount, moving the second visual cue on the display a second amount corresponding to the first amount of movement of the body portion; and
in accordance with a determination that the body portion moves a third amount different from the first amount, moving the second visual cue on the display by a fourth amount corresponding to the third amount of movement of the body portion and different from the second amount.
77. The method of any one of claims 74 to 76, wherein:
in accordance with a determination that the body part is moving in a first direction, moving the second visual cue on the display in a second direction corresponding to the first direction of movement of the body part; and
in accordance with a determination that the body part is moving in a third direction different from the first direction, moving the second visual cue on the display in a fourth direction corresponding to the third direction of movement of the body part and different from the second direction.
78. The method of any of claims 74-77, wherein the first visual cue is displayed at a fixed size and at a fixed location in the first user interface as the body portion of the user moves.
79. The method of any of claims 74-78, wherein a display size of the second visual cue is updated according to a change in a position of the body part relative to the one or more cameras.
80. The method of any one of claims 74-79, further comprising:
in accordance with a determination that the portion of the user's body changes position relative to the one or more cameras,
Updating a position of the second visual cue within the first user interface while maintaining the position of the second visual cue at the fixed predefined position relative to the representation of the portion of the user's body.
81. The method of any one of claims 74-80, wherein:
displaying the second visual cue at a first angle; and is also provided with
The method further includes, in accordance with a determination that the body part changes position relative to the one or more cameras, updating the second visual cue for display at a second angle, wherein the second angle is determined based on the changed position of the body part.
82. The method of any one of claims 74 to 81, further comprising displaying text comprising instructions to move the body part so as to satisfy the body part positioning precondition.
83. The method of any of claims 74-82, further comprising, in response to the body part meeting the body part positioning precondition, displaying a first timer indicating an amount of time the user must maintain the body part in a position that meets the body part positioning precondition.
84. The method of any of claims 74-83, further comprising, in response to the body part meeting the body part positioning precondition, displaying an indication that the first scan of the body part is complete.
85. The method of claim 84, wherein displaying the indication that the first scan of the body part is complete comprises:
increasing the brightness of the displayed first user interface from a first level to a second level for a predetermined period of time; and
after increasing the brightness to the second level, the brightness of the displayed first user interface is reduced.
86. The method of claim 84, wherein displaying the indication that the first scan of the body part is complete comprises displaying a check mark.
87. The method of any one of claims 74-86, further comprising:
in accordance with a determination that the body part satisfies the body part positioning precondition:
displaying text comprising instructions to move the body part so as to meet a second body part positioning precondition;
maintaining a display of the first visual cue at the first fixed location within the first user interface; and
Replacing the second visual cue by a third visual cue fixed at a second predefined position relative to the representation of the portion of the user's body, wherein a position of the third visual cue relative to a position of the first visual cue indicates movement of the body portion required to meet the second body portion positioning precondition.
88. The method of claim 87, further comprising, in response to the body part meeting the second body part positioning precondition, displaying a second timer indicating an amount of time the user must maintain the body part in a position meeting the second body part positioning precondition.
89. The method of claim 87, further comprising, in accordance with a determination that the body part satisfies the second body part positioning precondition, displaying an indication that a second scan of the body part is complete.
90. The method of any one of claims 74-89, further comprising:
one or more instructions are displayed on the display device prior to scanning the portion of the user's body using the one or more cameras.
91. The method of claim 90, wherein the one or more instructions comprise instructions for placing the computing device into an appropriate location for scanning.
92. The method of any one of claims 90-91, wherein the one or more instructions include instructions for selecting which body part is to be scanned using the one or more cameras.
93. The method of any one of claims 90-92, wherein the one or more instructions include instructions for moving the body part such that the body part is in the field of view of the one or more cameras.
94. The method of any of claims 90-93, wherein the one or more instructions include displaying a representation of a body part in a location that satisfies the body part positioning precondition.
95. The method of any of claims 74-94, further comprising, prior to displaying the second visual cue, updating a display of the representation of the portion of the user's body.
96. A computer system, comprising:
a display generation section;
one or more cameras;
one or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for:
Displaying a first visual cue at a first fixed location within a first user interface indicating a location for moving a body part into a field of view of the one or more cameras;
upon displaying the first visual cue indicating the location for moving the body part into the field of view of the one or more cameras:
detecting, using the one or more cameras, a portion of a user's body in the field of view of the one or more cameras and corresponding to the body portion;
in response to detecting the portion of the user's body in the field of view of the one or more cameras:
displaying a representation of the portion of the user's body; and
a second visual cue fixed at a predefined position relative to the representation of the portion of the user's body is displayed, wherein a position of the second visual cue relative to a position of the first visual cue indicates movement of the body part required to meet a body part positioning precondition.
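A compact sketch of the flow recited in claim 96, under assumed types and a normalized coordinate system: a fixed target cue, a detected body-part representation, and a second cue pinned at a predefined offset from that representation, with the positioning precondition modeled as the two cues coming within a tolerance of each other.

```swift
// Assumed types: a camera frame that may contain a detected body-part center,
// and a guidance state describing what the interface shows in response.
struct Frame { var detectedBodyPartCenter: SIMD2<Double>? }   // nil when the body part is not in view

struct GuidanceState {
    var showsBodyRepresentation = false
    var secondCuePosition: SIMD2<Double>?
    var preconditionMet = false
}

func updateGuidance(frame: Frame,
                    fixedCuePosition: SIMD2<Double> = SIMD2(0.5, 0.5),
                    cueOffset: SIMD2<Double> = SIMD2(0.0, -0.1),
                    tolerance: Double = 0.05) -> GuidanceState {
    var state = GuidanceState()
    guard let center = frame.detectedBodyPartCenter else {
        return state                         // body part not detected: keep showing only the fixed cue
    }
    state.showsBodyRepresentation = true     // display the representation of the detected body part
    let secondCue = center + cueOffset       // second cue is fixed relative to the representation
    state.secondCuePosition = secondCue
    // The displacement between the cues tells the user how to move; when small, the precondition is met.
    let delta = fixedCuePosition - secondCue
    state.preconditionMet = (delta.x * delta.x + delta.y * delta.y).squareRoot() <= tolerance
    return state
}
```

Modeling the precondition as a distance threshold between the two cues is one assumption among several possible; the claim itself only requires that the relative positions of the cues indicate the needed movement.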
97. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system that includes and/or is in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to:
display, at a first fixed location within a first user interface, a first visual cue indicating a location for moving a body part into a field of view of the one or more cameras;
while displaying the first visual cue indicating the location for moving the body part into the field of view of the one or more cameras:
detect, using the one or more cameras, a portion of a user's body that is in the field of view of the one or more cameras and corresponds to the body part;
in response to detecting the portion of the user's body in the field of view of the one or more cameras:
display a representation of the portion of the user's body; and
display a second visual cue fixed at a predefined position relative to the representation of the portion of the user's body, wherein a position of the second visual cue relative to a position of the first visual cue indicates movement of the body part required to meet a body part positioning precondition.
98. A computer system, comprising:
a display generation component;
one or more cameras;
one or more input devices; and
means for displaying, at a first fixed location within a first user interface, a first visual cue indicating a location for moving a body part into a field of view of the one or more cameras;
means, enabled while displaying the first visual cue indicating the location for moving the body part into the field of view of the one or more cameras, for:
detecting, using the one or more cameras, a portion of a user's body that is in the field of view of the one or more cameras and corresponds to the body part;
means, enabled in response to detecting the portion of the user's body in the field of view of the one or more cameras, for:
displaying a representation of the portion of the user's body; and
displaying a second visual cue fixed at a predefined position relative to the representation of the portion of the user's body, wherein a position of the second visual cue relative to a position of the first visual cue indicates movement of the body part required to meet a body part positioning precondition.
99. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
means for displaying, at a first fixed location within a first user interface, a first visual cue indicating a location for moving a body part into a field of view of the one or more cameras;
means, enabled while displaying the first visual cue indicating the location for moving the body part into the field of view of the one or more cameras, for:
detecting, using the one or more cameras, a portion of a user's body that is in the field of view of the one or more cameras and corresponds to the body part;
means, enabled in response to detecting the portion of the user's body in the field of view of the one or more cameras, for:
displaying a representation of the portion of the user's body; and
displaying a second visual cue fixed at a predefined position relative to the representation of the portion of the user's body, wherein a position of the second visual cue relative to a position of the first visual cue indicates movement of the body part required to meet a body part positioning precondition.
100. A computer system, comprising:
a display generation component;
one or more cameras;
one or more input devices;
one or more processors; and
a memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 74-95.
101. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system comprising and/or in communication with a display generation component, one or more cameras, and one or more input devices, cause the computer system to perform the method of any of claims 74-95.
102. A graphical user interface on a computer system that includes and/or is in communication with a display generation component, one or more cameras, one or more input devices, a memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with the method of any of claims 74-95.
103. A computer system, comprising:
a display generation component;
one or more cameras;
one or more input devices; and
means for performing the method of any of claims 74-95.
104. An information processing apparatus for use in a computer system that includes and/or communicates with a display generating component, one or more cameras, and one or more input devices, the information processing apparatus comprising:
means for performing the method of any of claims 74-95.
CN202280015105.7A 2021-02-15 2022-01-19 System, method, and graphical user interface for automated measurement in an augmented reality environment Pending CN117120956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311417788.1A CN117472182A (en) 2021-02-15 2022-01-19 System, method, and graphical user interface for automated measurement in an augmented reality environment

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/149,553 2021-02-15
US17/576,735 2022-01-14
US17/576,735 US20220261066A1 (en) 2021-02-15 2022-01-14 Systems, Methods, and Graphical User Interfaces for Automatic Measurement in Augmented Reality Environments
PCT/US2022/012856 WO2022173561A1 (en) 2021-02-15 2022-01-19 Systems, methods, and graphical user interfaces for automatic measurement in augmented reality environments

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311417788.1A Division CN117472182A (en) 2021-02-15 2022-01-19 System, method, and graphical user interface for automated measurement in an augmented reality environment

Publications (1)

Publication Number Publication Date
CN117120956A true CN117120956A (en) 2023-11-24

Family

ID=88806084

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202280015105.7A Pending CN117120956A (en) 2021-02-15 2022-01-19 System, method, and graphical user interface for automated measurement in an augmented reality environment
CN202311417788.1A Pending CN117472182A (en) 2021-02-15 2022-01-19 System, method, and graphical user interface for automated measurement in an augmented reality environment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311417788.1A Pending CN117472182A (en) 2021-02-15 2022-01-19 System, method, and graphical user interface for automated measurement in an augmented reality environment

Country Status (1)

Country Link
CN (2) CN117120956A (en)

Also Published As

Publication number Publication date
CN117472182A (en) 2024-01-30

Similar Documents

Publication Publication Date Title
US20220261066A1 (en) Systems, Methods, and Graphical User Interfaces for Automatic Measurement in Augmented Reality Environments
AU2022200965B2 (en) Avatar creation and editing
US11822778B2 (en) User interfaces related to time
AU2018279037B2 (en) Sharing user-configurable graphical constructs
US11257464B2 (en) User interface for a flashlight mode on an electronic device
US10007418B2 (en) Device, method, and graphical user interface for enabling generation of contact-intensity-dependent interface responses
EP3404526B1 (en) User interface for a flashlight mode on an electronic device
AU2022220279B2 (en) User interfaces related to time
CN109416599B (en) Apparatus and method for processing touch input
CN117120956A (en) System, method, and graphical user interface for automated measurement in an augmented reality environment
WO2022173561A1 (en) Systems, methods, and graphical user interfaces for automatic measurement in augmented reality environments
CN117501316A (en) System, method, and graphical user interface for adding effects in an augmented reality environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination