CN110851053A - Apparatus, method and graphical user interface for system level behavior of 3D models - Google Patents

Apparatus, method and graphical user interface for system level behavior of 3D models

Info

Publication number
CN110851053A
CN110851053A (application CN201911078900.7A)
Authority
CN
China
Prior art keywords
virtual object
cameras
view
representation
field
Prior art date
Legal status
Pending
Application number
CN201911078900.7A
Other languages
Chinese (zh)
Inventor
P·洛科
J·R·达斯科拉
S·O·勒梅
J·M·弗科纳
D·阿迪
D·卢依
G·耶基斯
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from DKPA201870346A (DK201870346A1)
Application filed by Apple Inc
Publication of CN110851053A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an apparatus, method, and graphical user interface for system-level behavior of 3D models. A computer system having a display, a touch-sensitive surface, and one or more cameras displays a representation of a virtual object in a first user interface region on the display. A first input by a contact is detected at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display. In response to detecting the first input by the contact, and in accordance with a determination that the first input by the contact satisfies first criteria: a second user interface region that includes a representation of the field of view of the one or more cameras is displayed on the display, including replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras, and the representation of the virtual object is continuously displayed while switching from displaying the first user interface region to displaying the second user interface region.

Description

Apparatus, method and graphical user interface for system level behavior of 3D models
The present application is a divisional application of the invention patent application with application No. 201811165504.3, filed on September 29, 2018, and entitled "Apparatus, Method and Graphical User Interface for System Level Behavior of 3D Models".
RELATED APPLICATIONS
This application is related to U.S. Provisional Application No. 62/621,529, filed on January 24, 2018, which is incorporated herein by reference in its entirety.
Technical Field
The present invention generally relates to electronic devices that display virtual objects, including, but not limited to, electronic devices that display virtual objects in various scenarios.
Background
In recent years, the development of computer systems for augmented reality has advanced significantly. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as touch-sensitive surfaces, for computer systems and other electronic computing devices are used to interact with virtual/augmented reality environments. Example touch-sensitive surfaces include touch pads, touch-sensitive remote controls, and touch-screen displays. Such surfaces are used to manipulate user interfaces on a display and objects within those user interfaces. Example user interface objects include digital images, videos, text, icons, and control elements (such as buttons) and other graphics.
However, methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are often cumbersome, inefficient, and limited. For example, using a series of inputs to orient and position a virtual object in an augmented reality environment is tedious, creates a significant cognitive burden on the user, and detracts from the experience of the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy; this latter consideration is particularly important in battery-powered devices.
Disclosure of Invention
Accordingly, there is a need for a computer system having an improved method and interface for interacting with virtual objects. Such methods and interfaces optionally complement or replace conventional methods for interacting with virtual objects. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user and result in a more efficient human-machine interface. For battery-driven devices, such methods and interfaces may conserve power and increase the time between battery charges.
The computer system of the present disclosure reduces or eliminates the above deficiencies and other problems associated with interfaces for interacting with virtual objects (e.g., user interfaces for Augmented Reality (AR) and related non-AR interfaces). In some embodiments, the computer system comprises a desktop computer. In some embodiments, the computer system is portable (e.g., a laptop, tablet, or handheld device). In some embodiments, the computer system comprises a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the computer system has (and/or is in communication with) a touch panel. In some embodiments, the computer system has a touch-sensitive display (also referred to as a "touchscreen" or "touchscreen display") (and/or is in communication with a touch-sensitive display). In some embodiments, the computer system has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs or sets of instructions stored in the memory for performing various functions. In some embodiments, the user interacts with the GUI in part by stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions optionally include game play, image editing, drawing, presentation, word processing, spreadsheet creation, answering a call, video conferencing, emailing, instant messaging, fitness support, digital photography, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
According to some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a representation of a virtual object in a first user interface region on the display. The method also includes, while displaying the representation of the virtual object in the first user interface region on the display, detecting a first input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display. The method also includes, in response to detecting the first input by the contact, and in accordance with a determination that the first input by the contact satisfies first criteria: displaying a second user interface region on the display that includes a representation of the field of view of the one or more cameras, including replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras, and continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
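As an illustration of the flow just described, the following is a minimal Swift sketch; the type names, criteria (modeled here as a press duration or contact intensity threshold), and threshold values are assumptions for illustration, not the actual implementation:

import Foundation

struct ContactInput {
    let duration: TimeInterval      // how long the contact has been held
    let intensity: Double           // normalized pressure, 0.0 ... 1.0
    let isOnObjectRepresentation: Bool
}

enum InterfaceRegion {
    case applicationUI              // first user interface region
    case cameraPassthroughAR        // second region with the camera field of view
}

struct ObjectTransitionController {
    // First criteria, assumed here to be a long press or a deep press.
    private let minimumDuration: TimeInterval = 0.5
    private let minimumIntensity: Double = 0.75

    private(set) var region: InterfaceRegion = .applicationUI
    private(set) var objectRemainsVisible = true   // continuity requirement

    mutating func handle(_ input: ContactInput) {
        guard input.isOnObjectRepresentation else { return }
        let meetsFirstCriteria =
            input.duration >= minimumDuration || input.intensity >= minimumIntensity
        if meetsFirstCriteria {
            // Replace at least part of the first region with the camera view,
            // while the virtual object's representation stays displayed.
            region = .cameraPassthroughAR
            objectRemainsVisible = true
        }
    }
}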
According to some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a first representation of a virtual object in a first user interface region on the display. The method also includes, while displaying the first representation of the virtual object in the first user interface region on the display, detecting a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display. The method also includes, in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact satisfies first criteria, displaying a second representation of the virtual object in a second user interface region that is different from the first user interface region. The method further includes, while displaying the second representation of the virtual object in the second user interface region, detecting a second input and, in response to detecting the second input: in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, changing a display property of the second representation of the virtual object within the second user interface region based on the second input; and, in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, displaying a third representation of the virtual object with a representation of the field of view of the one or more cameras.
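A minimal sketch, under assumed names and gesture mappings, of the branch just described for the staging (second) user interface region: a second input either changes display properties of the staged representation or requests display in the augmented reality view:

enum SecondInput {
    case drag(deltaX: Double, deltaY: Double)   // manipulate: rotate
    case pinch(scale: Double)                   // manipulate: resize
    case tapARButton                            // request AR display
}

struct StagedObject {
    var yawDegrees: Double = 0
    var pitchDegrees: Double = 0
    var scale: Double = 1.0
    var isShownInAR = false

    mutating func apply(_ input: SecondInput) {
        switch input {
        case let .drag(dx, dy):
            // Change display properties of the second representation.
            yawDegrees += dx * 0.5
            pitchDegrees += dy * 0.5
        case let .pinch(factor):
            scale *= factor
        case .tapARButton:
            // Third representation: shown with the cameras' field of view.
            isShownInAR = true
        }
    }
}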
According to some embodiments, a method is performed at a computer system having a display and a touch-sensitive surface. The method includes, in response to a request to display a first user interface, displaying the first user interface with a representation of a first item. The method also includes, in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, displaying the representation of the first item with a visual indication that the first item corresponds to a first respective virtual three-dimensional object. The method also includes, in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, displaying the representation of the first item without the visual indication. The method also includes, after displaying the representation of the first item, receiving a request to display a second user interface that includes a second item. The method also includes, in response to the request to display the second user interface, displaying the second user interface with a representation of the second item. The method also includes, in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, displaying the representation of the second item with a visual indication that the second item corresponds to a second respective virtual three-dimensional object. The method also includes, in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, displaying the representation of the second item without the visual indication.
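The per-item check just described can be pictured with the following hedged Swift sketch; the modelFileName property and the bracketed "[3D]" marker are illustrative assumptions standing in for whatever visual indication the system actually uses:

struct ListItem {
    let title: String
    let modelFileName: String?   // e.g. a 3D asset bundled with the item, if any
}

func displayTitle(for item: ListItem) -> String {
    // The same rule is applied to the first item, the second item, and so on.
    let hasVirtual3DObject = item.modelFileName != nil
    return hasVirtual3DObject ? "[3D] \(item.title)" : item.title
}

// Example: only the chair row carries the indication.
let rows = [
    ListItem(title: "Chair", modelFileName: "chair.usdz"),
    ListItem(title: "Store hours", modelFileName: nil),
]
let titles = rows.map { displayTitle(for: $0) }   // ["[3D] Chair", "Store hours"]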
According to some embodiments, the method is performed at a computer system having a display generation component, one or more input devices, and one or more cameras. The method includes receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of one or more cameras. The method also includes, in response to a request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of a field of view of one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located. Displaying a representation of a virtual object includes: in accordance with a determination that the object placement criteria are not satisfied, displaying a representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identifiable in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of the portion of the physical environment displayed in the field of view of the one or more cameras; and in accordance with a determination that the object placement criteria are satisfied, displaying a representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras.
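A hedged sketch of the placement logic just described: until a plane (a placement location) has been identified in the camera feed, the object is drawn with a provisional appearance and a device-relative orientation; once a plane is found it is drawn anchored to that plane. The names and the specific opacity values are illustrative assumptions:

struct DetectedPlane {
    let normal: (x: Double, y: Double, z: Double)
}

enum ObjectAppearance {
    // First set of visual properties, orientation independent of the scene
    // (e.g. semi-transparent, fixed relative to the display).
    case pending(opacity: Double)
    // Second set of visual properties, orientation tied to the detected plane.
    case placed(opacity: Double, planeNormal: (x: Double, y: Double, z: Double))
}

func appearance(for plane: DetectedPlane?) -> ObjectAppearance {
    if let plane = plane {
        return .placed(opacity: 1.0, planeNormal: plane.normal)   // criteria satisfied
    } else {
        return .pending(opacity: 0.5)                             // criteria not satisfied
    }
}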
According to some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, one or more cameras, and one or more pose sensors for detecting changes in pose of a device that includes the one or more cameras. The method includes receiving a request to display an augmented reality view of a physical environment in a first user interface region that includes a representation of the field of view of the one or more cameras. The method further includes, in response to receiving the request to display the augmented reality view of the physical environment, displaying the representation of the field of view of the one or more cameras and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not satisfied, displaying a calibration user interface object that is dynamically animated according to movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more pose sensors, a change in pose of the one or more cameras in the physical environment; and, in response to detecting the change in pose of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment. The method also includes, while displaying the calibration user interface object that moves on the display according to the detected changes in pose of the one or more cameras in the physical environment, detecting that the calibration criteria are satisfied. The method also includes, in response to detecting that the calibration criteria are satisfied, ceasing to display the calibration user interface object.
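A minimal sketch, under assumed names, of the calibration flow just described: while the calibration criteria are unmet, a calibration object is shown and one of its display parameters (here, a rotation angle) is driven by the detected change in camera pose; once enough pose variation has accumulated, the object is dismissed. The required-motion criterion is an assumption:

struct PoseChange {
    let deltaYawDegrees: Double   // reported by the device's pose sensors
}

struct CalibrationUI {
    private(set) var isShowingCalibrationObject = true
    private(set) var objectRotationDegrees: Double = 0
    private var accumulatedMotion: Double = 0
    private let requiredMotion: Double = 30   // assumed calibration criterion

    mutating func handle(_ change: PoseChange) {
        guard isShowingCalibrationObject else { return }
        // Adjust a display parameter according to the detected pose change.
        objectRotationDegrees += change.deltaYawDegrees
        accumulatedMotion += abs(change.deltaYawDegrees)
        if accumulatedMotion >= requiredMotion {
            // Calibration criteria satisfied: stop displaying the object.
            isShowingCalibrationObject = false
        }
    }
}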
According to some embodiments, a method is performed at a computer system having a display generation component and one or more input devices including a touch-sensitive surface. The method includes displaying, via the display generation component, a representation of a virtual three-dimensional object from a first perspective in a first user interface region. The method also includes, while displaying the representation of the virtual three-dimensional object from the first perspective in the first user interface region on the display, detecting a first input that corresponds to a request to rotate the virtual three-dimensional object relative to the display so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective. The method also includes, in response to detecting the first input: in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a limit that restricts rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation; and, in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount determined based on the magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation.
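A sketch of the axis-dependent constraint just described: rotation about the first axis (modeled here as pitch) is clamped to a threshold, while rotation about the second axis (yaw) is unconstrained for sufficiently large inputs. The specific limit value is an assumption:

struct RotatableObject {
    var pitchDegrees: Double = 0   // rotation relative to the first axis
    var yawDegrees: Double = 0     // rotation relative to the second axis
    let pitchLimitDegrees: Double = 30   // assumed threshold amount of rotation

    mutating func rotate(aboutFirstAxisBy amount: Double) {
        // The amount is derived from the input magnitude, then constrained.
        pitchDegrees = max(-pitchLimitDegrees,
                           min(pitchLimitDegrees, pitchDegrees + amount))
    }

    mutating func rotate(aboutSecondAxisBy amount: Double) {
        // No clamp: a large enough input rotates past the threshold amount.
        yawDegrees += amount
    }
}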
According to some embodiments, a method is performed at a computer system having a display generation component and a touch-sensitive surface. The method includes displaying, via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors, including a first object manipulation behavior that is performed in response to inputs satisfying first gesture recognition criteria and a second object manipulation behavior that is performed in response to inputs satisfying second gesture recognition criteria. The method also includes, while displaying the first user interface region, detecting a first portion of an input directed to the user interface object, including detecting movement of one or more contacts on the touch-sensitive surface, and, while the one or more contacts are detected on the touch-sensitive surface, evaluating the movement of the one or more contacts against both the first gesture recognition criteria and the second gesture recognition criteria. The method also includes, in response to detecting the first portion of the input, updating an appearance of the user interface object based on the first portion of the input, including: in accordance with a determination that the first portion of the input satisfies the first gesture recognition criteria before satisfying the second gesture recognition criteria, changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input, and updating the second gesture recognition criteria by increasing a threshold of the second gesture recognition criteria; and, in accordance with a determination that the first portion of the input satisfies the second gesture recognition criteria before satisfying the first gesture recognition criteria, changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input, and updating the first gesture recognition criteria by increasing a threshold of the first gesture recognition criteria.
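A hypothetical sketch of the mutual-threshold scheme just described: whichever gesture (here, "rotate" versus "scale") first crosses its movement threshold is applied, and the competing gesture's threshold is raised so that it becomes harder to trigger accidentally during the same input. The threshold values and the factor by which they are raised are assumptions:

struct ManipulationRecognizer {
    var rotationThreshold: Double = 10     // points of horizontal movement
    var scaleThreshold: Double = 0.1       // fractional pinch change
    private(set) var activeBehavior: String?

    mutating func evaluate(horizontalMovement: Double, pinchChange: Double) {
        guard activeBehavior == nil else { return }
        if abs(horizontalMovement) >= rotationThreshold {
            activeBehavior = "rotate"
            scaleThreshold *= 3            // raise the competing threshold
        } else if abs(pinchChange) >= scaleThreshold {
            activeBehavior = "scale"
            rotationThreshold *= 3         // raise the competing threshold
        }
    }
}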
According to some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, one or more audio output generators, and one or more cameras. The method includes displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of the field of view of the one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within the physical environment captured in the field of view of the one or more cameras. The method also includes detecting movement of the device that adjusts the field of view of the one or more cameras. The method also includes, in response to detecting the movement of the device that adjusts the field of view of the one or more cameras: while the field of view of the one or more cameras is being adjusted, adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras; and, in accordance with a determination that the movement of the device causes the virtual object to move outside of the displayed portion of the field of view of the one or more cameras by more than a threshold amount, generating a first audio alert via the one or more audio output generators.
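A sketch, with assumed names and an assumed threshold, of the audio-alert rule just described: after a device movement updates the camera view, an alert is produced only if the object has left the displayed portion of the field of view by more than a threshold amount:

struct Viewport {
    let width: Double
    let height: Double
    func contains(_ p: (x: Double, y: Double)) -> Bool {
        (0...width).contains(p.x) && (0...height).contains(p.y)
    }
}

func audioAlertNeeded(objectScreenPosition p: (x: Double, y: Double),
                      viewport: Viewport,
                      thresholdDistance: Double = 40) -> Bool {
    // No alert while the object is still inside the displayed field of view.
    guard !viewport.contains(p) else { return false }
    // Distance by which the object lies outside the displayed field of view.
    let dx = max(0, max(-p.x, p.x - viewport.width))
    let dy = max(0, max(-p.y, p.y - viewport.height))
    return max(dx, dy) > thresholdDistance
}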
According to some embodiments, an electronic device comprises a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensity of contacts with a touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, optionally one or more gesture sensors for detecting changes in gesture, one or more processors, and memory storing one or more programs; the one or more programs are configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing the performance of the operations of any of the methods described herein. According to some embodiments, a computer-readable storage medium has instructions stored therein that, when executed by an electronic device having display generation means, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensity of contacts with a touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more gesture sensors, cause the device to perform or cause to be performed the operations of any method described herein. According to some embodiments, a graphical user interface on an electronic device with display generation components, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors to detect intensity of contacts with a touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more gesture sensors, a memory, and one or more processors to execute one or more programs stored in the memory includes one or more elements displayed in any of the methods described herein that are updated in response to input, as described in any of the methods described herein. According to some embodiments, an electronic device comprises: a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more gesture sensors for detecting gesture changes; and means for performing or causing performance of operations of any of the methods described herein. 
According to some embodiments, an information processing apparatus for use in an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensity of contacts with a touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more gesture sensors for detecting gesture changes, comprises means for performing, or causing to be performed, operations of any method described herein.
Accordingly, improved methods and interfaces for displaying virtual objects in various scenarios are provided for electronic devices having display generation components, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensity of contacts with touch-sensitive surfaces, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more gesture sensors, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace conventional methods for displaying virtual objects in various scenarios.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, wherein like reference numerals designate corresponding parts throughout the figures.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating example components for event processing, according to some embodiments.
Fig. 1C is a block diagram illustrating a haptic output module according to some embodiments.
FIG. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
FIG. 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments.
FIG. 4A illustrates an example user interface of an application menu on a portable multifunction device according to some embodiments.
FIG. 4B illustrates an example user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with some embodiments.
Fig. 4C-4E illustrate examples of dynamic intensity thresholds according to some embodiments.
Fig. 4F-4K illustrate a set of sample haptic output patterns, according to some embodiments.
Figures 5A-5AT illustrate example user interfaces for displaying representations of virtual objects when switching from displaying a first user interface region to displaying a second user interface region, according to some embodiments.
Figures 6A-6AJ illustrate example user interfaces for displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in a second user interface region, and a third representation of the virtual object with a representation of the field of view of one or more cameras, according to some embodiments.
Figures 7A-7E, 7F1-7F2, 7G1-7G2, and 7H-7P illustrate example user interfaces for displaying items with visual indications that the items correspond to virtual three-dimensional objects, according to some embodiments.
Figures 8A-8E are flow diagrams of a process for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region, according to some embodiments.
Figures 9A-9D are flow diagrams of a process for displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in a second user interface region, and a third representation of the virtual object with a representation of the field of view of one or more cameras, according to some embodiments.
Figures 10A-10D are flow diagrams of a process for displaying an item with a visual indication that the item corresponds to a virtual three-dimensional object, according to some embodiments.
Figures 11A-11V illustrate example user interfaces for displaying virtual objects with different visual properties depending on whether object placement criteria are satisfied, according to some embodiments.
Figures 12A-12D, 12E-1, 12E-2, 12F-1, 12F-2, 12G-1, 12G-2, 12H-1, 12H-2, 12I-1, 12I-2, 12J, 12K-1, 12K-2, 12L-1, and 12L-2 illustrate example user interfaces for displaying a calibration user interface object that is dynamically animated according to movement of one or more cameras of a device, according to some embodiments.
Figures 13A-13M illustrate example user interfaces for constraining rotation of a virtual object about an axis, according to some embodiments.
Figures 14A-14Z illustrate example user interfaces for increasing a second threshold amount of movement required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has satisfied a first threshold amount of movement, according to some embodiments.
Figures 14AA-14AD are flow diagrams illustrating operations for increasing a second threshold movement value required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has satisfied a first threshold movement value, according to some embodiments.
Figures 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of a device has moved a virtual object out of the displayed field of view of one or more device cameras, according to some embodiments.
Figures 16A-16G are flow diagrams of a process for displaying virtual objects with different visual properties depending on whether object placement criteria are satisfied, according to some embodiments.
Figures 17A-17D are flow diagrams of a process for displaying a calibration user interface object that is dynamically animated according to movement of one or more cameras of a device, according to some embodiments.
Figures 18A-18I are flow diagrams of a process for constraining rotation of a virtual object about an axis, according to some embodiments.
Figures 19A-19H are flow diagrams of a process for increasing a second threshold movement magnitude required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has satisfied a first threshold movement magnitude, according to some embodiments.
Figures 20A-20F are flow diagrams of a process for generating an audio alert in accordance with a determination that movement of a device has moved a virtual object out of the displayed field of view of one or more device cameras, according to some embodiments.
Detailed Description
A virtual object is a graphical representation of a three-dimensional object in a virtual environment. Conventional approaches to interacting with virtual objects in order to transition them from being displayed in the context of an application user interface (e.g., a two-dimensional application user interface that does not display an augmented reality environment) to being displayed in the context of an augmented reality environment (e.g., an environment in which a view of the physical world is augmented with supplemental information that is not available in the physical world itself) often require multiple separate inputs (e.g., a series of gestures and button presses) to achieve the intended result (e.g., adjusting the size, position, and/or orientation of the virtual object to achieve a realistic or desired appearance in the augmented reality environment). In addition, conventional input methods often involve a delay between receiving a request to display the augmented reality environment and actually displaying it, caused by the time required to activate one or more device cameras to capture a view of the physical world and/or the time required to analyze and characterize that view in relation to a virtual object that may be placed in the augmented reality environment (e.g., detecting planes and/or surfaces in the captured view of the physical world). The embodiments herein provide intuitive ways for a user to display and/or interact with virtual objects in various scenarios: by allowing the user to provide input that switches from displaying a virtual object in the context of an application user interface to displaying the virtual object in an augmented reality environment; by allowing the user to change display properties of the virtual object in a three-dimensional staging environment before the virtual object is displayed in the augmented reality environment; by providing an indication that allows the user to easily identify, across multiple applications, items that are available as system-level virtual objects; by changing visual properties of an object once placement information for the object has been determined; by displaying a calibration user interface object that provides an animation indicating the device movement needed for calibration; by constraining rotation of a displayed virtual object about an axis; by increasing the threshold movement magnitude for a second object manipulation behavior once the threshold movement magnitude for a first object manipulation behavior has been met; and by providing an audio alert indicating that a virtual object has moved out of the displayed field of view.
The systems, methods, and GUIs described herein improve user interface interaction with a virtual/augmented reality environment in a number of ways. For example, they make it easier to display a virtual object in an augmented reality environment and to adjust the appearance of the virtual object displayed in the augmented reality environment in response to different inputs.
Figures 1A-1C, 2, and 3 provide a description of example devices. Figures 4A-4B, 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI illustrate example user interfaces for displaying virtual objects in various scenarios. Figures 8A-8E illustrate a process for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region. Figures 9A-9D illustrate a process for displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in a second user interface region, and a third representation of the virtual object with a representation of the field of view of one or more cameras. Figures 10A-10D illustrate a process for displaying an item with a visual indication that the item corresponds to a virtual three-dimensional object. Figures 16A-16G illustrate a process for displaying virtual objects with different visual properties depending on whether object placement criteria are satisfied. Figures 17A-17D illustrate a process for displaying a calibration user interface object that is dynamically animated according to movement of one or more cameras of a device. Figures 18A-18I illustrate a process for constraining rotation of a virtual object about an axis. Figures 14AA-14AD and 19A-19H illustrate a process for increasing a second threshold movement value required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has satisfied a first threshold movement value. Figures 20A-20F illustrate a process for generating an audio alert in accordance with a determination that movement of the device has moved a virtual object out of the displayed field of view of one or more device cameras. The user interfaces in Figures 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI are used to illustrate the processes in Figures 8A-8E, 9A-9D, 10A-10D, 14AA-14AD, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F.
Exemplary device
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of various described embodiments. It will be apparent, however, to one skilled in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements in some cases, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact can be termed a second contact, and, similarly, a second contact can be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact unless the context clearly indicates otherwise.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is optionally to be interpreted to mean "when.. or" ("where" or "upon") or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if it is determined.. -." or "if [ the stated condition or event ] is detected" is optionally to be construed to mean "upon determining. -. or" in response to determining. -. or "upon detecting [ the stated condition or event ]" or "in response to detecting [ the stated condition or event ]", depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and related processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone, iPod Touch, and iPad devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablets with touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads), are optionally used. It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports a variety of applications, such as one or more of the following: a note-taking application, a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or varied for different applications and/or within respective applications. In this way, a common physical architecture of the device (such as a touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and clear to the user.
Attention is now directed to embodiments of portable devices having touch sensitive displays. FIG. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes referred to as a "touch screen" for convenience and is sometimes referred to simply as a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripheral interface 118, RF circuitry 108, audio circuitry 110, a speaker 111, a microphone 113, an input/output (I/O) subsystem 106, other input or control devices 116, and an external port 124. The device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touch panel 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
It should be understood that device 100 is merely one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 1A are implemented in hardware, software, firmware, or any combination thereof, including one or more signal processing circuits and/or application specific integrated circuits.
The memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as one or more CPUs 120 and peripheral interface 118, is optionally controlled by a memory controller 122.
Peripheral interface 118 may be used to couple the input and output peripherals of the device to memory 102 and one or more CPUs 120. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and to process data.
In some embodiments, peripherals interface 118, one or more CPUs 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates via wireless communication with networks, such as the Internet, also known as the World Wide Web (WWW), intranets, and/or wireless networks, such as cellular telephone networks, wireless local area networks (LANs), and/or metropolitan area networks (MANs), as well as with other devices. The wireless communication optionally uses any of a number of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), evolution, data-only (EV-DO), HSPA+, dual-cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, wireless fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, e-mail protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), session initiation protocol with extensions for instant messaging and presence (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to the speaker 111. The speaker 111 converts the electric signal into a sound wave audible to a human. The audio circuit 110 also receives electrical signals converted by the microphone 113 from sound waves. The audio circuit 110 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as a touch-sensitive display system 112 and other input or control devices 116, to a peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/transmit electrical signals from/to other input or control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, one or more input controllers 160 are optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, a stylus, and/or a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of the speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
Touch-sensitive display system 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch-sensitive display system 112 and/or transmits electrical signals to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object. As used herein, the term "affordance" refers to a user-interactive graphical user interface object (e.g., a graphical user interface object configured to respond to input directed to the graphical user interface object). Examples of user interactive graphical user interface objects include, but are not limited to, buttons, sliders, icons, selectable menu items, switches, hyperlinks, or other user interface controls.
Touch-sensitive display system 112 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on haptic and/or tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch-sensitive display system 112. In some embodiments, the point of contact between touch-sensitive display system 112 and the user corresponds to a user's finger or a stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a variety of touch-sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone, iPod Touch, and iPad from Apple Inc. of Cupertino, California.
Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch-screen video resolution exceeds 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the larger contact area of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, in addition to a touch screen, device 100 optionally includes a touch pad (not shown) for activating or deactivating particular functions. In some embodiments, the trackpad is a touch-sensitive area of the device that, unlike a touchscreen, does not display visual output. The touchpad is optionally a touch-sensitive surface separate from touch-sensitive display system 112, or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in a portable device.
The device 100 optionally further includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The one or more optical sensors 164 optionally include Charge Coupled Devices (CCDs) or Complementary Metal Oxide Semiconductor (CMOS) phototransistors. The one or more optical sensors 164 receive light projected through the one or more lenses from the environment and convert the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), one or more optical sensors 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100 opposite touch-sensitive display system 112 on the front of the device, enabling the touch screen to be used as a viewfinder for still and/or video image capture. In some embodiments, another optical sensor is located on the front of the device to capture images of the user (e.g., for self-timer shooting, for video conferencing while the user is viewing other video conference participants on a touch screen, etc.).
Device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The one or more contact intensity sensors 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). One or more contact intensity sensors 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some implementations, at least one contact intensity sensor is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100 opposite touch-sensitive display system 112, which is located on the front of device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled with the input controller 160 in the I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a haptic output generator coupled to a haptic feedback controller 161 in I/O subsystem 106. In some embodiments, one or more tactile output generators 167 include one or more electro-acoustic devices such as speakers or other audio components; and/or an electromechanical device for converting energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component for converting an electrical signal into a tactile output on the device). Tactile output generator 167 receives tactile feedback generation instructions from tactile feedback module 133 and generates tactile outputs on device 100 that can be felt by a user of device 100. In some embodiments, at least one tactile output generator is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-sensitive display system 112 located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled with peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled with input controller 160 in I/O subsystem 106. In some embodiments, information is displayed in a portrait view or a landscape view on the touch screen display based on analysis of data received from the one or more accelerometers. Device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to accelerometer 168 for obtaining information about the position and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a haptic feedback module (or set of instructions) 133, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, and an application program (or set of instructions) 136. Further, in some embodiments, memory 102 stores device/global internal state 157, as shown in fig. 1A and 3. Device/global internal state 157 includes one or more of: an active application state indicating which applications (if any) are currently active; display state, which indicates what applications, views, or other information occupy various areas of touch-sensitive display system 112; sensor status, including information obtained from various sensors of the device and other input or control devices 116; and position and/or orientation information regarding the position and/or attitude of the device.
The operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OSX, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124 and also includes various software components for handling data received by the RF circuitry 108 and/or the external port 124. The external port 124 (e.g., Universal Serial Bus (USB), firewire, etc.) is adapted to couple directly to other devices or indirectly via a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPod devices from Apple Inc. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone and iPod devices from Apple Inc.
Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or a physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to contact detection (e.g., by a finger or stylus), such as determining whether contact has occurred (e.g., detecting a finger-down event), determining the intensity of the contact (e.g., the force or pressure of the contact, or a surrogate for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-lift-off event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., single-finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multi-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at an icon location). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event. Similarly, taps, swipes, drags, and other gestures of the stylus are optionally detected by detecting a particular contact pattern of the stylus.
In some embodiments, detecting a finger tap gesture depends on detecting the length of time between a finger-down event and a finger-up event, but is independent of the intensity of the finger contact between the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected based on a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), regardless of whether the intensity of the finger contact during the tap reaches a given intensity threshold (greater than a nominal contact detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require that the characteristic intensity of the contact satisfy a given intensity threshold in order for the particular input criteria to be met. To be clear, the finger contact in a tap gesture typically needs to satisfy a nominal contact detection intensity threshold, below which no contact is detected, in order for a finger-down event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. Where the device is capable of detecting a finger or stylus contact hovering over the touch-sensitive surface, the nominal contact detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
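As a minimal illustrative sketch of the duration-only tap criterion described above (not the claimed implementation), the following Swift snippet treats a contact as a tap based solely on the finger-down to finger-up interval; the ContactEvent type, the threshold values, and the function name are hypothetical.

```swift
import Foundation

// Hypothetical record of a single-finger contact, for illustration only.
struct ContactEvent {
    let downTime: TimeInterval   // time of the finger-down event
    let upTime: TimeInterval     // time of the finger-up (lift-off) event
    let maxIntensity: Double     // peak contact intensity during the touch
}

// Nominal detection threshold below which no contact is registered at all.
let nominalContactDetectionIntensity = 0.05
// Maximum finger-down to finger-up duration for a tap (e.g., 0.3 seconds).
let tapDurationLimit: TimeInterval = 0.3

// A tap is recognized from the time between finger-down and finger-up alone;
// the contact only needs to exceed the nominal contact detection threshold,
// not a light-press or deep-press intensity threshold.
func isTap(_ contact: ContactEvent) -> Bool {
    let wasDetected = contact.maxIntensity >= nominalContactDetectionIntensity
    let duration = contact.upTime - contact.downTime
    return wasDetected && duration < tapDurationLimit
}
```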
The same concept applies in a similar manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a spread gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are independent of the intensities of the contacts included in the gesture and that do not require the contacts performing the gesture to reach an intensity threshold in order to be recognized. For example, a swipe gesture is detected based on the amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts toward each other; a spread gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of the contact on the touch-sensitive surface with less than a threshold amount of movement. Thus, the statement that particular gesture recognition criteria do not require that the contact intensity satisfy a respective intensity threshold in order for the particular gesture recognition criteria to be satisfied means that the particular gesture recognition criteria can be satisfied when a contact in the gesture does not reach the respective intensity threshold, and can also be satisfied if one or more contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on determining that the finger-down and finger-up events are detected within a predefined time period, regardless of whether the contact is above or below the respective intensity threshold during that time period, and a swipe gesture is detected based on determining that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the movement. Even in implementations where the detection of a gesture is affected by the intensity of the contact performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or delays detection of a tap input when the intensity of the contact is higher), the detection of those gestures does not require the contact to reach a particular intensity threshold so long as the criteria for recognizing the gesture can be met when the contact does not reach that particular intensity threshold (e.g., even if the amount of time it takes to recognize the gesture changes).
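A hedged sketch, not the document's implementation: the intensity-independent movement and duration criteria just listed could be expressed roughly as below in Swift. The GestureSample structure and all threshold values are assumptions chosen only for illustration.

```swift
import Foundation

// Hypothetical summary of a completed gesture, for illustration only.
struct GestureSample {
    let contactTranslations: [Double]       // net movement of each contact, in points
    let interContactDistanceChange: Double  // positive when the contacts move apart
    let duration: TimeInterval
}

let swipeMovementThreshold = 10.0            // minimum movement for a swipe
let pinchDistanceThreshold = 10.0            // minimum distance change for pinch/spread
let longPressDuration: TimeInterval = 0.5    // minimum duration for a long press
let longPressMovementLimit = 10.0            // movement must stay below this

// None of these criteria consult contact intensity.
func isSwipe(_ g: GestureSample) -> Bool {
    g.contactTranslations.count == 1 && g.contactTranslations[0] > swipeMovementThreshold
}

func isPinch(_ g: GestureSample) -> Bool {
    g.contactTranslations.count >= 2 && g.interContactDistanceChange < -pinchDistanceThreshold
}

func isSpread(_ g: GestureSample) -> Bool {
    g.contactTranslations.count >= 2 && g.interContactDistanceChange > pinchDistanceThreshold
}

func isLongPress(_ g: GestureSample) -> Bool {
    g.duration >= longPressDuration && (g.contactTranslations.max() ?? 0) < longPressMovementLimit
}
```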
In some cases, the contact intensity threshold, the duration threshold, and the movement threshold are combined in various different combinations in order to create a heuristic algorithm to distinguish two or more different gestures for the same input element or region, such that multiple different interactions with the same input element can provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria does not require that the intensity of the contact satisfy a respective intensity threshold to satisfy the particular gesture recognition criteria does not preclude simultaneous evaluation of other intensity-related gesture recognition criteria to identify other gestures having criteria that are satisfied when the gesture includes a contact having an intensity above the respective intensity threshold. For example, in some cases, a first gesture recognition criterion of a first gesture (which does not require that the intensity of a contact satisfy a respective intensity threshold to satisfy the first gesture recognition criterion) competes with a second gesture recognition criterion of a second gesture (which depends on a contact reaching the respective intensity threshold). In such a competition, if the second gesture recognition criteria of the second gesture are first satisfied, the gesture is optionally not recognized as satisfying the first gesture recognition criteria of the first gesture. For example, if the contact reaches a respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected instead of a swipe gesture. Conversely, if the contact moves by a predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected instead of a deep press gesture. Even in this case, the first gesture recognition criteria of the first gesture do not require that the intensity of the contact satisfy the respective intensity threshold to satisfy the first gesture recognition criteria, because if the contact remains below the respective intensity threshold until the end of the gesture (e.g., a swipe gesture with the contact not increasing to an intensity above the respective intensity threshold), the gesture will be recognized by the first gesture recognition criteria as a swipe gesture. Thus, a particular gesture recognition criterion that does not require the intensity of the contact to satisfy a respective intensity threshold to satisfy the particular gesture recognition criterion will (a) in some cases ignore the intensity of the contact relative to the intensity threshold (e.g., for a flick gesture) and/or (B) in some cases fail to satisfy the particular gesture recognition criterion (e.g., for a long press gesture) if a competing set of intensity-related gesture recognition criteria (e.g., for a deep press gesture) recognizes the input as corresponding to the intensity-related gesture before the particular gesture recognition criteria recognizes the gesture corresponding to the input, in the sense that the intensity of the contact relative to the intensity threshold is still relied upon (e.g., for a long press gesture that competes for recognition of a deep press gesture).
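To illustrate the competition between an intensity-dependent recognizer and an intensity-independent one described above, here is a minimal Swift sketch in which whichever criterion is met first wins; the frame structure and the threshold values are hypothetical.

```swift
// Hypothetical per-sample state of a single contact, for illustration only.
struct ContactFrame {
    let intensity: Double   // current contact intensity
    let movement: Double    // total movement so far, in points
}

let deepPressIntensityThreshold = 0.8
let swipeMovementLimit = 10.0

enum RecognizedGesture { case deepPress, swipe, undecided }

// Frames are examined in order; whichever criterion is satisfied first wins, so a
// contact that reaches the deep-press intensity before it moves far enough becomes
// a deep press, and vice versa. A contact that never reaches the intensity threshold
// can still end up as a swipe, which is why the swipe criterion is intensity-independent.
func resolveCompetition(frames: [ContactFrame]) -> RecognizedGesture {
    for frame in frames {
        if frame.intensity >= deepPressIntensityThreshold { return .deepPress }
        if frame.movement >= swipeMovementLimit { return .swipe }
    }
    return .undecided
}
```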
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual properties) of displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for specifying a graphic to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) to generate haptic output at one or more locations on device 100 using haptic output generator 167 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications such as contacts 137, email 140, IM 141, browser 147, and any other application that requires text input.
The GPS module 135 determines the location of the device and provides such information for use in various applications (e.g., to the phone 138 for location-based dialing; to the camera 143 as picture/video metadata; and to applications that provide location-based services such as weather desktop widgets, local yellow pages desktop widgets, and map/navigation desktop widgets).
Application 136 optionally includes the following modules (or sets of instructions), or a subset or superset thereof:
● a contacts module 137 (sometimes referred to as an address book or contact list);
● a telephone module 138;
● a video conferencing module 139;
● an e-mail client module 140;
● an Instant Messaging (IM) module 141;
● a fitness support module 142;
● a camera module 143 for still and/or video images;
● an image management module 144;
● a browser module 147;
● a calendar module 148;
● desktop applet modules 149, optionally including one or more of: a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm desktop applet 149-4, a dictionary desktop applet 149-5, other desktop applets obtained by the user, and a user-created desktop applet 149-6;
● a desktop applet creator module 150 for forming the user-created desktop applet 149-6;
● a search module 151;
● a video and music player module 152, optionally made up of a video player module and a music player module;
● a notepad module 153;
● a map module 154; and/or
● an online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, rendering applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions for managing an address book or contact list (e.g., stored in memory 102 or in an application internal state 192 of contacts module 137 in memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating one or more telephone numbers, email addresses, physical addresses, or other information with a name; associating an image with a name; sorting and classifying names; providing telephone numbers and/or email addresses to initiate and/or facilitate communication via telephone 138, video conference 139, email 140, or instant message 141; and so on.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, phone module 138 includes executable instructions for: entering a sequence of characters corresponding to a telephone number, accessing one or more telephone numbers in the address book 137, modifying the entered telephone number, dialing a corresponding telephone number, conducting a conversation, and disconnecting or hanging up when the conversation is completed. As noted above, the wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephony module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate video conferences between the user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send an email with a still image or a video image captured by the camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for: entering a sequence of characters corresponding to an instant message, modifying previously entered characters, sending a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Messaging Service (MMS) protocol for telephone-based instant messages or using XMPP, SIMPLE, Apple Push Notification Service (APNs) or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or MMS and/or other attachments supported in an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and video and music player module 152, fitness support module 142 includes executable instructions for creating a workout (e.g., having time, distance, and/or calorie burn goals); communicating with fitness sensors (in sports equipment and smart watches); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for fitness; and displaying, storing and transmitting fitness data.
In conjunction with touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to: capturing still images or video (including video streams) and storing them in the memory 102, modifying features of the still images or video, and/or deleting the still images or video from the memory 102.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the internet (including searching, linking to, receiving, and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet module 149 is a mini-application (e.g., weather desktop applet 149-1, stock market desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) that is optionally downloaded and used by the user, or a mini-application created by the user (e.g., user-created desktop applet 149-6). In some embodiments, the desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, the desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applet).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, desktop applet creator module 150 includes executable instructions for creating a desktop applet (e.g., turning a user-specified portion of a web page into a desktop applet).
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, videos, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speakers 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch-sensitive display system 112 or on an external display wirelessly connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple inc.).
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions for creating and managing notepads, backlogs, and the like according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions for receiving, displaying, modifying, and storing maps and data associated with maps (e.g., driving directions; data for stores and other points of interest at or near a particular location; and other location-based data) according to user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuit 110, speaker 111, RF circuit 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes executable instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on touch screen 112 or on an external display that is wirelessly connected or connected via external port 124), send emails with links to particular online videos, and otherwise manage online videos in one or more file formats, such as h.264. In some embodiments, the link to the particular online video is sent using instant messaging module 141 instead of email client module 140.
Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device on which the operation of a predefined set of functions is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or trackpad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
The predefined set of functions performed exclusively through the touchscreen and/or trackpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by a user, navigates device 100 from any user interface displayed on device 100 to a main, home, or root menu. In such embodiments, a touchpad is used to implement a "menu button". In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touchpad.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments. In some embodiments, memory 102 (in FIG. 1A) or memory 370 (FIG. 3) includes event classifier 170 (e.g., in operating system 126) and corresponding application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
Event sorter 170 receives the event information and determines application 136-1 and application view 191 of application 136-1 to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, application 136-1 includes an application internal state 192 that indicates one or more current application views that are displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event classifier 170 to determine which application(s) are currently active, and application internal state 192 is used by event classifier 170 to determine the application view 191 to which to deliver event information.
In some embodiments, the application internal state 192 includes additional information, such as one or more of: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information being displayed by the application 136-1 or information that is ready for display by the application, a state queue for enabling a user to return to a previous state or view of the application 136-1, and a repeat/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system 112 as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or sensors such as proximity sensor 166, accelerometer 168, and/or microphone 113 (through audio circuitry 110). Information received by peripheral interface 118 from I/O subsystem 106 includes information from touch-sensitive display system 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, peripheral interface 118 transmits the event information. In other embodiments, peripheral interface 118 transmits event information only when there is a significant event (e.g., receiving input above a predetermined noise threshold and/or receiving input for more than a predetermined duration).
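A brief, assumption-laden Swift sketch of the two delivery strategies just described (periodic polling versus forwarding only significant events); the InputReport type, the thresholds, and the helper names are hypothetical, not part of the disclosure.

```swift
import Foundation

// Hypothetical raw input report produced by the peripherals layer.
struct InputReport {
    let magnitude: Double        // e.g., contact intensity or signal level
    let duration: TimeInterval   // how long the input has persisted
}

let noiseThreshold = 0.1
let minimumSignificantDuration: TimeInterval = 0.05

// Second strategy from the text: forward an event only when it is "significant",
// i.e., above a noise threshold and/or longer than a minimum duration.
func shouldForward(_ report: InputReport) -> Bool {
    report.magnitude > noiseThreshold || report.duration > minimumSignificantDuration
}

// First strategy from the text: the event monitor polls at a predetermined interval.
func startPolling(every interval: TimeInterval,
                  fetch: @escaping () -> InputReport?,
                  deliver: @escaping (InputReport) -> Void) -> Timer {
    Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { _ in
        if let report = fetch() { deliver(report) }
    }
}
```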
In some embodiments, event classifier 170 further includes hit view determination module 172 and/or active event recognizer determination module 173.
When touch-sensitive display system 112 displays more than one view, hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a programmatic level within a programmatic or view hierarchy of applications. For example, the lowest level view in which a touch is detected is optionally referred to as a hit view, and the set of events identified as correct inputs is optionally determined based at least in part on the hit view of the initial touch that initiated the touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When the application has multiple views organized in a hierarchy, the hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once a hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
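The hit-view search described above can be sketched as a depth-first walk that returns the lowest view containing the location of the initiating sub-event. This is only an illustrative approximation with hypothetical ViewNode, Rect, and Point types; it ignores coordinate conversion and visibility checks.

```swift
// Hypothetical view node with a frame and subviews, for illustration only.
struct Point { let x: Double; let y: Double }

struct Rect {
    let x, y, width, height: Double
    func contains(_ p: Point) -> Bool {
        p.x >= x && p.x < x + width && p.y >= y && p.y < y + height
    }
}

final class ViewNode {
    let frame: Rect            // assumed to share the touch's coordinate space
    let subviews: [ViewNode]
    init(frame: Rect, subviews: [ViewNode] = []) {
        self.frame = frame
        self.subviews = subviews
    }
}

// Walks the hierarchy depth-first and returns the deepest (lowest-level) view
// whose frame contains the location of the initiating sub-event; that view is
// treated as the hit view for the rest of the gesture.
func hitView(for location: Point, in root: ViewNode) -> ViewNode? {
    guard root.frame.contains(location) else { return nil }
    for subview in root.subviews {
        if let deeper = hitView(for: location, in: subview) {
            return deeper
        }
    }
    return root
}
```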
The active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of the sub-event are actively participating views, and thus determines that all actively participating views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely confined to the area associated with a particular view, the higher views in the hierarchy will remain actively participating views.
The event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments that include active event recognizer determination module 173, event dispatcher module 174 delivers event information to event recognizers determined by active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue, which is retrieved by the respective event receiver module 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, application 136-1 includes event classifier 170. In another embodiment, the event classifier 170 is a stand-alone module or is part of another module stored in the memory 102 (such as the contact/motion module 130).
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for processing touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit (not shown) or a higher level object from which the application 136-1 inherits methods and other properties. In some embodiments, the respective event handlers 190 comprise one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177 or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Additionally, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
The corresponding event recognizer 180 receives event information (e.g., event data 179) from the event classifier 170 and recognizes events from the event information. The event recognizer 180 includes an event receiver 182 and an event comparator 184. In some embodiments, event recognizer 180 also includes metadata 183 and at least a subset of event delivery instructions 188 (which optionally include sub-event delivery instructions).
The event receiver 182 receives event information from the event sorter 170. The event information includes information about a sub-event such as a touch or touch movement. According to the sub-event, the event information further includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information optionally also includes the velocity and direction of the sub-event. In some embodiments, the event comprises rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 184 compares the event information to predefined event or sub-event definitions and determines an event or sub-event, or determines or updates the state of an event or sub-event, based on the comparison. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sub-event sequences), such as event 1 (187-1), event 2 (187-2), and other events. In some embodiments, sub-events in event 187 include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double tap on the displayed object. For example, the double tap includes a first touch (touch start) on the displayed object for a predetermined length of time, a first lift-off (touch end) for a predetermined length of time, a second touch (touch start) on the displayed object for a predetermined length of time, and a second lift-off (touch end) for a predetermined length of time. In another example, the definition of event 2 (187-2) is a drag on the displayed object. For example, the drag includes a touch (or contact) on the displayed object for a predetermined length of time, movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
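A hypothetical Swift sketch of treating an event definition as a predefined sub-event sequence, with a comparator that distinguishes a complete match from a still-possible prefix; the enum cases, the names, and the omission of timing checks are all simplifications, not the disclosed implementation.

```swift
// Hypothetical sub-event vocabulary, for illustration only.
enum SubEvent: Equatable { case touchStart, touchEnd, touchMove, touchCancel }

// An "event definition" here is simply the expected sub-event sequence.
struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

// Double tap: touch start/end twice on the same object (timing checks omitted).
let doubleTap = EventDefinition(name: "double tap",
                                sequence: [.touchStart, .touchEnd, .touchStart, .touchEnd])

// Drag: touch start, a move, then touch end (a real comparator would allow
// any number of move sub-events; one is shown for simplicity).
let drag = EventDefinition(name: "drag",
                           sequence: [.touchStart, .touchMove, .touchEnd])

// Complete match against a definition.
func matches(_ observed: [SubEvent], _ definition: EventDefinition) -> Bool {
    observed == definition.sequence
}

// Still a possible match: the observed sub-events form a prefix of the definition.
func couldStillMatch(_ observed: [SubEvent], _ definition: EventDefinition) -> Bool {
    observed.count <= definition.sequence.count &&
        Array(definition.sequence.prefix(observed.count)) == observed
}
```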
In some embodiments, event definition 187 includes definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event 187 further includes a delay action that delays the delivery of the event information until it has been determined whether the sequence of sub-events does or does not correspond to the event type of the event identifier.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any event in the event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it ignores subsequent sub-events of the touch-based gesture. In this case, other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 having configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively participating event recognizers. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact with one another, or are enabled to interact with one another. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are identified, the respective event identifier 180 activates an event handler 190 associated with the event. In some embodiments, the respective event identifier 180 delivers event information associated with the event to the event handler 190. Activating the event handler 190 is different from sending (and deferring) sub-events to the corresponding hit view. In some embodiments, the event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the sequence of sub-events or to actively participating views. Event handlers associated with the sequence of sub-events or with actively participating views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, the data updater 176 updates a phone number used in the contacts module 137 or stores a video file used in the video or music player module 152. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user interface object or updates the location of a user interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends the display information to graphics module 132 for display on the touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the foregoing discussion of event handling of user touches on a touch-sensitive display also applies to other forms of user input used to operate multifunction device 100 with an input device, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements on a touchpad, such as taps, drags, scrolls, and the like; stylus inputs; movements of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof, are optionally used as inputs corresponding to sub-events that define an event to be recognized.
Fig. 1C is a block diagram illustrating a haptic output module according to some embodiments. In some embodiments, the I/O subsystem 106, such as the tactile feedback controller 161 (fig. 1A) and/or other input controller 160 (fig. 1A), includes at least some of the exemplary components shown in fig. 1C. In some embodiments, peripheral interface 118 includes at least some of the exemplary components shown in fig. 1C.
In some embodiments, the haptic output module includes a haptic feedback module 133. In some embodiments, the tactile feedback module 133 aggregates and combines tactile output from user interface feedback from software applications on the electronic device (e.g., feedback responsive to user input corresponding to a displayed user interface and prompts and other notifications indicating performance of operations or occurrence of events in the user interface of the electronic device). The haptic feedback module 133 includes one or more of a waveform module 123 (for providing a waveform for generating a haptic output), a mixer 125 (for mixing waveforms, such as waveforms in different channels), a compressor 127 (for reducing or compressing the dynamic range of the waveform), a low pass filter 129 (for filtering out high frequency signal components in the waveform), and a thermal controller 131 (for adjusting the waveform according to thermal conditions). In some embodiments, the tactile feedback module 133 is included in the tactile feedback controller 161 (fig. 1A). In some embodiments, a separate unit of the haptic feedback module 133 (or a separate implementation of the haptic feedback module 133) is also included in the audio controller (e.g., the audio circuit 110, fig. 1A) and used to generate the audio signal. In some embodiments, a single haptic feedback module 133 is used to generate the audio signal and to generate the waveform of the haptic output.
In some embodiments, the haptic feedback module 133 also includes a trigger module 121 (e.g., a software application, operating system, or other software module that determines that a haptic output is to be generated and initiates a process for generating a corresponding haptic output). In some embodiments, the trigger module 121 generates a trigger signal for causing (e.g., by the waveform module 123) the generation of the waveform. For example, the trigger module 121 generates a trigger signal based on a preset timing criterion. In some embodiments, the trigger module 121 receives a trigger signal from outside the tactile feedback module 133 (e.g., in some embodiments, the tactile feedback module 133 receives a trigger signal from a hardware input processing module 146 located outside the tactile feedback module 133) and relays the trigger signal to other components within the tactile feedback module 133 (e.g., the waveform module 123) or to a software application that triggers an operation based on activation of a user interface element (e.g., an application icon or an affordance within an application) or a hardware input device (e.g., a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen) (with the trigger module 121). In some embodiments, the trigger module 121 also receives haptic feedback generation instructions (e.g., from the haptic feedback module 133, fig. 1A and 3). In some embodiments, the trigger module 121 generates the trigger signal in response to the haptic feedback module 133 (or the trigger module 121 in the haptic feedback module 133) receiving a haptic feedback instruction (e.g., from the haptic feedback module 133, fig. 1A and 3).
Waveform module 123 receives as input a trigger signal (e.g., from trigger module 121) and provides waveforms for generating one or more tactile outputs (e.g., waveforms selected from a predefined set of waveforms assigned for use by waveform module 123, such as the waveforms described in more detail below with reference to fig. 4F-4G) in response to receiving the trigger signal.
The mixer 125 receives waveforms as input (e.g., from the waveform module 123) and mixes the waveforms together. For example, when the mixer 125 receives two or more waveforms (e.g., a first waveform in a first channel and a second waveform in a second channel that at least partially overlaps the first waveform), the mixer 125 outputs a combined waveform corresponding to the sum of the two or more waveforms. In some embodiments, the mixer 125 also modifies one or more of the two or more waveforms to emphasize a particular waveform relative to the rest of the two or more waveforms (e.g., by increasing the scale of the particular waveform and/or decreasing the scale of the other of the waveforms). In some cases, mixer 125 selects one or more waveforms to remove from the combined waveform (e.g., when waveforms from more than three sources have been requested to be output by tactile output generator 167 simultaneously, the waveform from the oldest source is discarded).
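A rough Swift sketch of the mixing behavior described above: overlapping waveforms are summed sample by sample, one source may be emphasized by scaling, and the waveform from the oldest source is dropped once more than three sources are active. The Waveform type and the gain values are illustrative assumptions.

```swift
import Foundation

// Hypothetical waveform representation: equally spaced samples plus a source id
// and a request time, for illustration only.
struct Waveform {
    let sourceID: Int
    let requestedAt: TimeInterval
    var samples: [Double]
}

// Mixes waveforms sample by sample. If more than `maxSources` waveforms are
// requested at once, the waveform from the oldest source is discarded; one
// waveform can be emphasized by scaling it up while scaling the others down.
func mix(_ inputs: [Waveform], emphasize sourceID: Int? = nil,
         maxSources: Int = 3) -> [Double] {
    var active = inputs.sorted { $0.requestedAt < $1.requestedAt }
    while active.count > maxSources {
        active.removeFirst()                       // discard the oldest source
    }
    let length = active.map { $0.samples.count }.max() ?? 0
    var combined = [Double](repeating: 0, count: length)
    for waveform in active {
        let gain = (waveform.sourceID == sourceID) ? 1.5 : (sourceID == nil ? 1.0 : 0.5)
        for (i, sample) in waveform.samples.enumerated() {
            combined[i] += gain * sample
        }
    }
    return combined
}
```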
The compressor 127 receives as input waveforms, such as a combined waveform from the mixer 125, and modifies those waveforms. In some embodiments, compressor 127 reduces the waveforms (e.g., according to the physical specifications of tactile output generator 167 (fig. 1A) or 357 (fig. 3)) such that the tactile outputs corresponding to the waveforms are reduced. In some embodiments, the compressor 127 limits the waveform, such as by imposing a predefined maximum amplitude for the waveform. For example, the compressor 127 reduces the amplitude of the waveform portions that exceed a predefined amplitude threshold, while maintaining the amplitude of the waveform portions that do not exceed the predefined amplitude threshold. In some implementations, the compressor 127 reduces the dynamic range of the waveform. In some embodiments, compressor 127 dynamically reduces the dynamic range of the waveform such that the combined waveform remains within the performance specifications (e.g., force and/or movable mass displacement limits) of tactile output generator 167.
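As a minimal sketch of the limiting behavior just described, assuming a simple hard clamp at a predefined maximum amplitude (a real compressor would more likely apply a smooth gain curve):

```swift
// Limits a waveform so the tactile output stays within the generator's
// specifications: samples whose magnitude exceeds the predefined amplitude
// threshold are reduced (clamped here); samples below it pass through unchanged.
func compress(_ samples: [Double], maxAmplitude: Double = 1.0) -> [Double] {
    samples.map { (sample: Double) -> Double in
        guard abs(sample) > maxAmplitude else { return sample }
        return sample > 0 ? maxAmplitude : -maxAmplitude
    }
}
```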
Low pass filter 129 receives as input a waveform (e.g., a compressed waveform from compressor 127) and filters (e.g., smoothes) the waveform (e.g., removes or reduces high frequency signal components in the waveform). For example, in some cases, compressor 127 includes extraneous signals (e.g., high frequency signal components) in the compressed waveform that prevent haptic output generation and/or exceed the performance specifications of haptic output generator 167 in generating haptic output from the compressed waveform. Low pass filter 129 reduces or removes such extraneous signals in the waveform.
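A single-pole exponential smoother is one simple way to realize the low-pass filtering described above; this Swift sketch and its alpha parameter are illustrative assumptions, not the disclosed filter design.

```swift
// A simple single-pole low-pass filter: each output sample is a weighted blend
// of the new input sample and the previous output, which attenuates the
// high-frequency components that can interfere with generating the tactile
// output. alpha lies in (0, 1]; smaller alpha filters more aggressively.
func lowPass(_ samples: [Double], alpha: Double = 0.2) -> [Double] {
    var output: [Double] = []
    output.reserveCapacity(samples.count)
    var previous = 0.0
    for sample in samples {
        previous = alpha * sample + (1 - alpha) * previous
        output.append(previous)
    }
    return output
}
```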
Thermal controller 131 receives as input a waveform (e.g., a filtered waveform from low pass filter 129) and adjusts the waveform according to a thermal condition of apparatus 100 (e.g., based on an internal temperature detected within apparatus 100, such as a temperature of tactile feedback controller 161, and/or an external temperature detected by apparatus 100). For example, in some cases, the output of the tactile feedback controller 161 varies as a function of temperature (e.g., in response to receiving the same waveform, the tactile feedback controller 161 generates a first tactile output when the tactile feedback controller 161 is at a first temperature and a second tactile output when the tactile feedback controller 161 is at a second temperature different from the first temperature). For example, the magnitude of the haptic output may vary as a function of temperature. To reduce the effects of temperature variations, the waveform is modified (e.g., the amplitude of the waveform is increased or decreased based on temperature).
In some embodiments, the haptic feedback module 133 (e.g., trigger module 121) is coupled to the hardware input processing module 146. In some embodiments, the other input controller 160 in fig. 1A includes a hardware input processing module 146. In some embodiments, the hardware input processing module 146 receives input from a hardware input device 145 (e.g., the other input or control device 116 in fig. 1A, such as a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen). In some embodiments, the hardware input device 145 is any input device described herein, such as the touch-sensitive display system 112 (fig. 1A), the keyboard/mouse 350 (fig. 3), the touchpad 355 (fig. 3), one of the other input or control devices 116 (fig. 1A), or an intensity-sensitive home button. In some embodiments, hardware input device 145 is comprised of an intensity-sensitive home button, rather than touch-sensitive display system 112 (FIG. 1A), keyboard/mouse 350 (FIG. 3), or touchpad 355 (FIG. 3). In some embodiments, in response to input from the hardware input device 145 (e.g., an intensity-sensitive home button or touch screen), the hardware input processing module 146 provides one or more trigger signals to the tactile feedback module 133 to indicate that a user input has been detected that meets predefined input criteria, such as an input corresponding to a primary button "click" (e.g., "press click" or "release click"). In some embodiments, the tactile feedback module 133 provides a waveform corresponding to the primary button "click" in response to an input corresponding to the primary button "click" to simulate tactile feedback of pressing a physical primary button.
In some embodiments, the haptic output module includes a haptic feedback controller 161 (e.g., haptic feedback controller 161 in fig. 1A) that controls the generation of haptic outputs. In some embodiments, the tactile feedback controller 161 is coupled to a plurality of tactile output generators and selects one or more of the plurality of tactile output generators and sends a waveform to the selected one or more tactile output generators for generating a tactile output. In some embodiments, the haptic feedback controller 161 coordinates haptic output requests corresponding to activating the hardware input device 145 and haptic output requests corresponding to software events (e.g., haptic output requests from the haptic feedback module 133) and modifies one or more of the two or more waveforms to emphasize a particular waveform relative to the rest of the two or more waveforms (e.g., by increasing the scale of the particular waveform and/or decreasing the scale of the rest of the waveforms to preferentially process haptic output corresponding to activating the hardware input device 145 over haptic output corresponding to software events).
In some embodiments, as shown in fig. 1C, the output of the tactile feedback controller 161 is coupled to an audio circuit (e.g., audio circuit 110, fig. 1A) of the device 100 and provides an audio signal to the audio circuit of the device 100. In some embodiments, the tactile feedback controller 161 provides both a waveform for generating a tactile output and an audio signal for providing an audio output in conjunction with generating the tactile output. In some embodiments, the tactile feedback controller 161 modifies the audio signal and/or the waveform (used to generate the tactile output) such that the audio output and the tactile output are synchronized (e.g., by delaying the audio signal and/or the waveform). In some embodiments, the tactile feedback controller 161 includes a digital-to-analog converter for converting a digital waveform into an analog signal, which is received by the amplifier 163 and/or the tactile output generator 167.
In some embodiments, the haptic output module includes an amplifier 163. In some embodiments, amplifier 163 receives a waveform (e.g., from tactile feedback controller 161) and amplifies the waveform and then sends the amplified waveform to tactile output generator 167 (e.g., either tactile output generator 167 (fig. 1A) or 357 (fig. 3)). For example, amplifier 163 amplifies the received waveform to a signal level that meets the physical specifications of tactile output generator 167 (e.g., to a voltage and/or current required by tactile output generator 167 to generate a tactile output such that the signal sent to tactile output generator 167 generates a tactile output corresponding to the waveform received from tactile feedback controller 161) and sends the amplified waveform to tactile output generator 167. In response, tactile output generator 167 generates a tactile output (e.g., by shifting the movable mass back and forth in one or more dimensions relative to a neutral position of the movable mass).
In some embodiments, the haptic output module includes a sensor 169 coupled to a haptic output generator 167. Sensor 169 detects a state or change in state (e.g., mechanical position, physical displacement, and/or movement) of tactile output generator 167 or one or more components of tactile output generator 167 (e.g., one or more moving components, such as a membrane, used to generate tactile output). In some embodiments, the sensor 169 is a magnetic field sensor (e.g., a hall effect sensor) or other displacement and/or motion sensor. In some embodiments, sensor 169 provides information (e.g., position, displacement, and/or movement of one or more components in tactile output generator 167) to tactile feedback controller 161, and tactile feedback controller 161 adjusts the waveform output from tactile feedback controller 161 (e.g., the waveform optionally sent to tactile output generator 167 via amplifier 163) based on the information provided by sensor 169 regarding the state of tactile output generator 167.
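As a rough sketch of the closed-loop adjustment described above, the following Swift snippet nudges the gain applied to the outgoing waveform based on the peak displacement a sensor reports for the movable mass; the proportional correction and the names used are illustrative assumptions only.

```swift
import Foundation

// Illustrative sketch: adjust the gain applied to the outgoing waveform based on
// the peak displacement reported by a sensor coupled to the tactile output
// generator, so the achieved amplitude tracks the requested amplitude.
struct GeneratorFeedback {
    var measuredPeakDisplacement: Double   // e.g., derived from a Hall effect sensor
}

func adjustedGain(currentGain: Double,
                  requestedPeakDisplacement: Double,
                  feedback: GeneratorFeedback,
                  correctionRate: Double = 0.2) -> Double {
    guard feedback.measuredPeakDisplacement > 0 else { return currentGain }
    // Gain that would have produced the requested displacement on the last output.
    let targetGain = currentGain * requestedPeakDisplacement / feedback.measuredPeakDisplacement
    // Move only part of the way toward the target to keep the adjustment stable.
    return currentGain + correctionRate * (targetGain - currentGain)
}
```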
FIG. 2 illustrates a portable multifunction device 100 with a touch screen (e.g., touch-sensitive display system 112 of FIG. 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within the User Interface (UI) 200. In these embodiments, as well as others described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 100. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that swipes over the application icon optionally does not select the corresponding application.
Device 100 optionally also includes one or more physical buttons, such as a "home" button, or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch screen display.
In some embodiments, device 100 includes a touch screen display, a menu button 204 (sometimes referred to as a home button 204), a push button 206 for powering the device on/off and for locking the device, a volume adjustment button 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. Pressing the button 206 optionally serves to: turn the device on/off by pressing the button and holding the button in a pressed state for a predefined time interval; lock the device by pressing the button and releasing the button before the predefined time interval has elapsed; and/or unlock the device or initiate an unlocking process. In some embodiments, device 100 also accepts voice input through microphone 113 for activating or deactivating certain functions. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on touch-sensitive display system 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop, desktop, tablet, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 optionally includes circuitry (sometimes referred to as a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output (I/O) interface 330 with a display 340, typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 (e.g., similar to one or more tactile output generators 167 described above with reference to fig. 1A) for generating tactile outputs on device 300, a sensor 359 (e.g., an optical sensor, an acceleration sensor, a proximity sensor, a touch-sensitive sensor, and/or a contact intensity sensor similar to one or more contact intensity sensors 165 described above with reference to fig. 1A). Memory 370 includes high speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 370 optionally includes one or more storage devices located remotely from one or more CPUs 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to, or a subset of, the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (FIG. 1A). Further, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above identified elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the identified modules corresponds to a set of instructions for performing the functions described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of a user interface ("UI") optionally implemented on portable multifunction device 100.
Fig. 4A illustrates an example user interface 400 for an application menu on portable multifunction device 100 according to some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
● one or more signal strength indicators for one or more wireless communications (such as cellular signals and Wi-Fi signals);
● time;
● a Bluetooth indicator;
● a battery status indicator;
● a tray 408 with icons for frequently used applications, such as:
○ icon 416 of phone module 138, labeled "phone", optionally including an indicator 414 of the number of missed calls or voice messages;
○ icon 418 of email client module 140, labeled "mail", optionally including an indicator 410 of the number of unread emails;
○ icon 420 of browser module 147, labeled "browser"; and
○ icon 422 of video and music player module 152, labeled "music"; and
● icons for other applications, such as:
○ icon 424 of IM module 141, labeled "message";
○ icon 426 of calendar module 148, labeled "calendar";
○ icon 428 of image management module 144, labeled "photo";
○ icon 430 of camera module 143, labeled "camera";
○ icon 432 of online video module 155, labeled "online video";
○ icon 434 of stock market desktop applet 149-2, labeled "stock market";
○ icon 436 of map module 154, labeled "map";
○ icon 438 of weather desktop applet 149-1, labeled "weather";
○ icon 440 of alarm clock desktop applet 149-4, labeled "clock";
○ icon 442 of fitness support module 142, labeled "fitness support";
○ icon 444 of notepad module 153, labeled "notepad"; and
○ icon 446 for a settings application or module, which provides access to the settings of the device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, other labels are optionally used for various application icons. In some embodiments, the label of the respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 in fig. 3) having a touch-sensitive surface 451 (e.g., tablet or trackpad 355 in fig. 3) separate from the display 450. Although many of the examples that follow will be given with reference to input on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in fig. 4B) has a major axis (e.g., 452 in fig. 4B) that corresponds to a major axis (e.g., 453 in fig. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations that correspond to corresponding locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). Thus, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in FIG. 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) of the multifunction device while the touch-sensitive surface is separate from the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contact, single-finger tap gesture, finger swipe gesture, etc.), it should be understood that in some embodiments, one or more of these finger inputs are replaced by inputs from another input device (e.g., mouse-based inputs or stylus inputs). For example, the swipe gesture is optionally replaced by a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a flick gesture is optionally replaced by a mouse click (e.g., instead of detecting a contact, followed by ceasing to detect a contact) while the cursor is over the location of the flick gesture. Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or mouse and finger contacts are optionally used simultaneously.
As used herein, the term "focus selector" refers to an input element that is used to indicate the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or the touch screen in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a contact detected on the touch screen serves as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by moving the focus from one button to another using tab or arrow keys); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by a user to communicate a user-desired interaction with the user interface (e.g., by indicating to the device an element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (as opposed to other user interface elements shown on the device display).
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of the contact on the touch-sensitive surface (e.g., finger contact or stylus contact), or to a substitute (surrogate) for the force or pressure of the contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or sum) to determine an estimated contact force. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereof, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). In some implementations, the substitute measurement of contact force or pressure is converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as an attribute of the user input, allowing the user to access additional device functionality that the user would otherwise not have readily accessible on a smaller sized device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls such as knobs or buttons).
In some embodiments, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 100). For example, the mouse "click" threshold of the trackpad or touch screen display may be set to any one of a wide range of predefined thresholds without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
As used in the specification and in the claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting contact, before detecting contact liftoff, before or after detecting contact start movement, before or after detecting contact end, before or after detecting an increase in intensity of contact, and/or before or after detecting a decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the contact strength, a mean value of the contact strength, a value at the first 10% of the contact strength, a half maximum value of the contact strength, a 90% maximum value of the contact strength, a value generated by low-pass filtering the contact strength over a predefined period of time or from a predefined time, etc. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact whose characteristic intensity does not exceed the first intensity threshold results in a first operation, a contact whose characteristic intensity exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact whose characteristic intensity exceeds the second intensity threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether to perform one or more operations (e.g., whether to perform a respective option or forgo performing a respective operation) rather than to determine whether to perform a first operation or a second operation.
In some implementations, a portion of the gesture is recognized for determining the feature intensity. For example, the touch-sensitive surface may receive a continuous swipe contact that transitions from a starting location and reaches an ending location (e.g., a drag gesture) where the intensity of the contact increases. In this embodiment, the characteristic strength of the contact at the end position may be based on only a portion of the continuous swipe contact, rather than the entire swipe contact (e.g., only a portion of the swipe contact at the end position). In some implementations, a smoothing algorithm may be applied to the intensity of the swipe gesture before determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: a non-weighted moving average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some cases, these smoothing algorithms eliminate narrow spikes or dips in the intensity of the swipe contact for the purpose of determining the feature intensity.
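For illustration, the Swift sketch below implements two of the smoothing options named above (an unweighted moving average and exponential smoothing) as they might be applied to the swipe contact's intensity samples before the characteristic intensity is determined; the window size and smoothing factor are illustrative parameters.

```swift
import Foundation

// Illustrative sketch of two of the smoothing options mentioned above, applied
// to a series of swipe-contact intensity samples before the characteristic
// intensity is determined.
func unweightedMovingAverage(_ samples: [Double], window: Int) -> [Double] {
    guard window > 1, samples.count >= window else { return samples }
    // Each output value averages `window` consecutive samples, removing narrow spikes or dips.
    return (0...(samples.count - window)).map { start in
        samples[start..<(start + window)].reduce(0, +) / Double(window)
    }
}

func exponentialSmoothing(_ samples: [Double], alpha: Double) -> [Double] {
    guard let first = samples.first else { return [] }
    var smoothed = [first]
    for sample in samples.dropFirst() {
        // Blend each new sample with the previous smoothed value.
        smoothed.append(alpha * sample + (1 - alpha) * smoothed.last!)
    }
    return smoothed
}
```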
The user interface diagrams described herein optionally include various intensity diagrams that illustrate the current intensity of a contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITH that is lower than ITL)). This intensity diagram is typically not part of the displayed user interface, but is provided to assist in interpreting the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform the operations typically associated with clicking a button of a physical mouse or touchpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations different from those typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold IT0, below which the contact is no longer detected), the device will move the focus selector in accordance with movement of the contact across the touch-sensitive surface without performing the operations associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface drawings.
In some embodiments, the response of the device to an input detected by the device depends on criteria based on the intensity of the contact during the input. For example, for some "tap" inputs, the intensity of the contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to an input detected by the device depends on criteria including both the intensity of contact during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of a contact that exceeds a second intensity threshold, greater than the first intensity threshold of a light press, triggers a second response during the input as long as a delay time elapses between the first intensity threshold being met and the second intensity threshold being met. The delay time is typically less than 200ms (milliseconds) in duration (e.g., 40ms, 100ms, or 120ms, depending on the magnitude of the second intensity threshold, wherein the delay time increases as the second intensity threshold increases). This delay time helps avoid accidentally recognizing a deep press input. As another example, for some "deep press" inputs, a period of reduced sensitivity will occur after the first intensity threshold is reached. During this period of reduced sensitivity, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detecting a deep press input does not depend on time-based criteria.
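The Swift sketch below captures the delay-time criterion described above in a simplified form: the "deep press" response fires only if the second, higher threshold is met at least a delay time after the first threshold was met. The data types and names are assumptions for this example.

```swift
import Foundation

// Illustrative sketch: a "deep press" response fires only if the second, higher
// intensity threshold is met at least `delay` seconds after the first threshold
// was met.
struct TimedIntensitySample {
    var time: TimeInterval
    var intensity: Double
}

func deepPressTriggerTime(samples: [TimedIntensitySample],
                          firstThreshold: Double,
                          secondThreshold: Double,
                          delay: TimeInterval) -> TimeInterval? {
    guard let firstMet = samples.first(where: { $0.intensity > firstThreshold })?.time else {
        return nil
    }
    return samples.first(where: {
        $0.intensity > secondThreshold && $0.time - firstMet >= delay
    })?.time
}
```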
In some embodiments, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application execution, rate at which intensity is applied, number of simultaneous inputs, user history, environmental factors (e.g., environmental noise), focus selector position, and the like. Exemplary factors are described in U.S. patent application Ser. Nos. 14/399,606 and 14/624,296, which are incorporated herein by reference in their entirety.
For example, FIG. 4C illustrates a dynamic intensity threshold 480 that varies over time based in part on the intensity of the touch input 476 over time. The dynamic intensity threshold 480 is the sum of two components: a first component 474 that decays over time after a predefined delay time p1 from when the touch input 476 is initially detected, and a second component 478 that tracks the intensity of the touch input 476 over time. The initial high intensity threshold of the first component 474 reduces accidental triggering of a "deep press" response while still allowing an immediate "deep press" response if the touch input 476 provides sufficient intensity. The second component 478 reduces unintentional triggering of a "deep press" response caused by gradual intensity fluctuations of a touch input. In some embodiments, a "deep press" response is triggered when the touch input 476 meets the dynamic intensity threshold 480 (e.g., at point 481 in fig. 4C).
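The following Swift sketch models a dynamic threshold of the kind shown in fig. 4C as the sum of a component that decays after the delay time p1 and a component that trails the recent touch intensity; the exponential decay shape and the trailing proportion are illustrative assumptions, not values from the figures.

```swift
import Foundation

// Illustrative sketch: a dynamic threshold that is the sum of a component that
// decays after a delay p1 and a component that trails the recent touch
// intensity. The exponential decay and trailing factor are assumptions.
func dynamicIntensityThreshold(at time: TimeInterval,
                               touchStartTime: TimeInterval,
                               recentIntensity: Double,
                               initialThreshold: Double,
                               p1: TimeInterval,
                               decayTimeConstant: Double = 0.5,
                               trailingFactor: Double = 0.8) -> Double {
    let elapsedSinceDecayStart = max(0, time - touchStartTime - p1)
    let firstComponent = initialThreshold * exp(-elapsedSinceDecayStart / decayTimeConstant)
    let secondComponent = trailingFactor * recentIntensity
    return firstComponent + secondComponent
}

// A "deep press" is recognized when the touch intensity meets this threshold,
// e.g. intensity(t) >= dynamicIntensityThreshold(at: t, ...).
```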
FIG. 4D shows another dynamic intensity threshold 486 (e.g., intensity threshold ID). Fig. 4D also shows two other intensity thresholds: a first intensity threshold IH and a second intensity threshold IL. In FIG. 4D, although the touch input 484 satisfies the first intensity threshold IH and the second intensity threshold IL before time p2, no response is provided until a delay time p2 has elapsed at time 482. Also in FIG. 4D, the dynamic intensity threshold 486 decays over time, with the decay starting at time 488, after a predefined delay time p1 has elapsed from time 482 (the time at which the response associated with the second intensity threshold IL was triggered). This type of dynamic intensity threshold reduces accidental triggering of the response associated with the dynamic intensity threshold ID immediately after, or concurrently with, triggering of a response associated with a lower intensity threshold (such as the first intensity threshold IH or the second intensity threshold IL).
FIG. 4E shows yet another dynamic intensity threshold 492 (e.g., intensity threshold ID). In FIG. 4E, the response associated with the intensity threshold IL is triggered after a delay time p2 has elapsed from the time that the touch input 490 is initially detected. Concurrently, the dynamic intensity threshold 492 decays after a predefined delay time p1 has elapsed from the time that the touch input 490 is initially detected. Thus, decreasing the intensity of the touch input 490 after triggering the response associated with the intensity threshold IL, and then increasing the intensity of the touch input 490 without releasing the touch input 490, can trigger the response associated with the intensity threshold ID (e.g., at time 494), even when the intensity of the touch input 490 is below another intensity threshold (e.g., the intensity threshold IL).
An increase in the characteristic intensity of a contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase in the characteristic intensity of a contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase in the characteristic intensity of a contact from an intensity below the contact detection intensity threshold IT0 to an intensity between the contact detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch surface. A decrease in the characteristic intensity of a contact from an intensity above the contact detection intensity threshold IT0 to an intensity below the contact detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, a shaded circle or ellipse is used to represent the intensity of a contact on the touch-sensitive surface. In some illustrations, circles or ellipses without shading are used to represent respective contacts on the touch-sensitive surface without specifying the intensity of the respective contacts.
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting that the intensity of the respective contact increases above the press input intensity threshold (e.g., performing the respective operation on a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press input threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in contact intensity from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in contact intensity to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., depending on the circumstances, the increase in contact intensity or the decrease in contact intensity).
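A minimal Swift sketch of the hysteresis scheme described above, assuming the hysteresis threshold is a fixed proportion of the press input intensity threshold: a "down stroke" is reported when the intensity rises above the press threshold, and the matching "up stroke" only when it later falls below the hysteresis threshold.

```swift
import Foundation

// Illustrative sketch: press detection with intensity hysteresis. A "down
// stroke" is reported when the intensity rises above the press threshold; the
// matching "up stroke" only when it later falls below the hysteresis threshold.
struct HysteresisPressDetector {
    var pressThreshold: Double
    var hysteresisRatio: Double = 0.75   // hysteresis threshold as a fraction of the press threshold
    var isPressed = false

    mutating func update(intensity: Double) -> String? {
        let hysteresisThreshold = pressThreshold * hysteresisRatio
        if !isPressed && intensity > pressThreshold {
            isPressed = true
            return "down stroke"
        }
        if isPressed && intensity < hysteresisThreshold {
            isPressed = false
            return "up stroke"
        }
        return nil   // no change of state
    }
}

// Example: var detector = HysteresisPressDetector(pressThreshold: 0.6)
//          detector.update(intensity: 0.7)   // "down stroke"
//          detector.update(intensity: 0.5)   // nil (still above the 0.45 hysteresis threshold)
//          detector.update(intensity: 0.4)   // "up stroke"
```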
For ease of explanation, the description of operations performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is optionally triggered in response to detecting: the intensity of the contact increases above the press input intensity threshold, the intensity of the contact increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the intensity of the contact decreases below the press input intensity threshold, or the intensity of the contact decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples in which operations are described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operations are optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold that corresponds to and is less than the press input intensity threshold. As described above, in some embodiments, the triggering of these operations also depends on the time-based criteria being met (e.g., a delay time has elapsed between the first intensity threshold being met and the second intensity threshold being met).
As used in this specification and claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that is to be detected by a user with the user's sense of touch. For example, where a device or component of a device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other portion of a user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of the touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuation button. In some cases, the user will feel a tactile sensation, such as a "press click" or "release click," even when the physical actuation button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement is not moving. As another example, even when there is no change in the smoothness of the touch sensitive surface, the movement of the touch sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch sensitive surface. While such interpretation of touch by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touch are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click down," "click up," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the sensory perception of a typical (or ordinary) user. Providing haptic feedback to a user using haptic output enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the haptic output pattern specifies a characteristic of the haptic output, such as a magnitude of the haptic output, a shape of a motion waveform of the haptic output, a frequency of the haptic output, and/or a duration of the haptic output.
When the device generates haptic outputs having different haptic output patterns (e.g., via one or more haptic output generators that move the movable mass to generate the haptic outputs), the haptic outputs may produce different haptic sensations in a user holding or touching the device. While the user's senses are based on the user's perception of the haptic output, most users will be able to recognize changes in the waveform, frequency, and amplitude of the haptic output generated by the device. Thus, the waveform, frequency, and amplitude may be adjusted to indicate to the user that a different operation has been performed. As such, haptic outputs with haptic output patterns that are designed, selected, and/or arranged to simulate the characteristics (e.g., size, material, weight, stiffness, smoothness, etc.), behaviors (e.g., oscillation, displacement, acceleration, rotation, stretching, etc.), and/or interactions (e.g., collision, adhesion, repulsion, attraction, friction, etc.) of objects in a given environment (e.g., a user interface including graphical features and objects, a simulated physical environment having virtual boundaries and virtual objects, a real physical environment having physical boundaries and physical objects, and/or a combination of any of the above) will in some cases provide helpful feedback to the user that reduces input errors and improves the efficiency of the user's operation of the device. Additionally, haptic output is optionally generated to correspond to feedback that is independent of a simulated physical characteristic (such as an input threshold or object selection). Such tactile output will in some cases provide helpful feedback to the user, which reduces input errors and improves the efficiency of the user's operation of the device.
In some embodiments, the haptic output with the appropriate haptic output mode serves as a cue for an event of interest to occur in the user interface or behind the screen in the device. Examples of events of interest include activation of an affordance (e.g., a real or virtual button, or a toggle switch) provided on the device or in the user interface, success or failure of a requested operation, reaching or crossing a boundary in the user interface, entering a new state, switching input focus between objects, activating a new mode, reaching or crossing an input threshold, detecting or recognizing a type of input or gesture, and so forth. In some embodiments, a tactile output is provided to serve as a warning or cue as to an impending event or result that may occur unless a change of direction or an interrupt input is detected in time. Haptic output is also used in other contexts to enrich the user experience, improve accessibility to the device by users having visual or motor difficulties or other accessibility needs, and/or improve the efficiency and functionality of the user interface and/or device. Optionally comparing the tactile output with the audio input and/or visual user interface changes further enhances the user's experience when interacting with the user interface and/or device and facilitates better transfer of information about the state of the user interface and/or device, and this reduces input errors and improves the efficiency of the user's operation of the device.
Fig. 4F-4H provide a set of sample haptic output patterns that can be used, individually or in combination, as is or through one or more transformations (e.g., modulation, amplification, truncation, etc.), to generate suitable haptic feedback in various scenarios and for various purposes, such as those described above and those described for the user interfaces and methods discussed herein. This example palette of haptic outputs shows how a set of three waveforms and eight frequencies can be used to generate an array of haptic output patterns. In addition to the haptic output patterns shown in these figures, each of these haptic output patterns is optionally adjusted in amplitude by changing a gain value of the haptic output pattern, as shown, for example, for FullTap 80Hz, FullTap 200Hz, MiniTap 80Hz, MiniTap 200Hz, MicroTap 80Hz, and MicroTap 200Hz in fig. 4I through 4K, which are each shown as variants with gains of 1.0, 0.75, 0.5, and 0.25. As shown in fig. 4I-4K, changing the gain of a haptic output pattern changes the amplitude of the pattern without changing the frequency of the pattern or the shape of the waveform. In some embodiments, changing the frequency of a haptic output pattern also results in a lower amplitude, because some tactile output generators are limited in how much force can be applied to the movable mass, so higher-frequency movement of the mass is constrained to a lower amplitude to ensure that the acceleration required to generate the waveform does not require forces outside the operating force range of the tactile output generator (e.g., the peak amplitudes of FullTaps at 230Hz, 270Hz, and 300Hz are lower than the amplitudes of FullTaps at 80Hz, 100Hz, 125Hz, and 200Hz).
Fig. 4F to 4K show haptic output patterns having specific waveforms. The waveform of the haptic output pattern represents a pattern of physical displacement versus time relative to a neutral position (e.g., xzero) through which the movable mass passes to generate the haptic output having the haptic output pattern. For example, the first set of haptic output patterns shown in fig. 4F (e.g., the haptic output pattern of "FullTap") each have a waveform that includes an oscillation having two full cycles (e.g., an oscillation that begins and ends at a neutral position and passes through the neutral position three times). The second set of haptic output patterns shown in fig. 4G (e.g., the haptic output pattern of "MiniTap") each have a waveform that includes an oscillation having one full cycle (e.g., an oscillation that begins and ends at a neutral position and crosses the neutral position once). The third set of haptic output patterns shown in fig. 4H (e.g., the haptic output patterns of "MicroTap") each have a waveform that includes an oscillation having half a full cycle (e.g., an oscillation that begins and ends at a neutral position and does not pass through the neutral position). The waveform of the haptic output pattern also includes a start buffer and an end buffer representing the gradual acceleration and deceleration of the movable mass at the beginning and end of the haptic output. The example waveforms shown in fig. 4F-4K include xmin and xmax values representing the maximum and minimum degrees of movement of the movable mass. For larger electronic devices where the movable mass is large, the minimum and maximum degrees of movement of the mass may be greater or less. The examples shown in fig. 4F to 4K describe the movement of a mass in 1 dimension, however, similar principles can be applied to the movement of a movable mass in two or three dimensions.
As shown in fig. 4F-4K, each haptic output pattern also has a corresponding characteristic frequency that affects the "pitch" of the tactile sensation felt by the user from the haptic output having that characteristic frequency. For continuous haptic output, the characteristic frequency represents the number of cycles (e.g., cycles per second) that a movable mass of the haptic output generator completes within a given time period. For discrete haptic output, a discrete output signal (e.g., having 0.5, 1, or 2 cycles) is generated, and the characteristic frequency value specifies how fast the movable mass needs to move to generate the haptic output having the characteristic frequency. As shown in fig. 4F-4H, for each type of haptic output (e.g., defined by a respective waveform, such as FullTap, MiniTap, or MicroTap), a higher frequency value corresponds to faster movement of the movable mass, and thus, in general, to a shorter haptic output completion time (e.g., a time including the number of cycles required to complete the discrete haptic output plus start and end buffer times). For example, a FullTap with a characteristic frequency of 80Hz takes longer to complete than a FullTap with a characteristic frequency of 100Hz (e.g., 35.4 ms vs. 28.3 ms in FIG. 4F). Further, for a given frequency, a haptic output having more cycles in its waveform at the corresponding frequency takes longer to complete than a haptic output having fewer cycles in its waveform at the same corresponding frequency. For example, a 150Hz FullTap takes longer to complete than a 150Hz MiniTap (e.g., 19.4 ms vs. 12.8 ms), and a 150Hz MiniTap takes longer to complete than a 150Hz MicroTap (e.g., 12.8 ms vs. 9.4 ms). However, for haptic output patterns having different frequencies, this rule may not apply (e.g., a haptic output with more cycles but a higher frequency may take a shorter amount of time to complete than a haptic output with fewer cycles but a lower frequency, or vice versa). For example, at 300Hz, a FullTap takes as long as a MiniTap (e.g., 9.9 ms).
As shown in fig. 4F to 4K, the haptic output pattern also has a characteristic magnitude that affects the amount of energy contained in the haptic signal, or the "intensity" of the tactile sensation that the user can feel through the haptic output having the characteristic magnitude. In some embodiments, the characteristic magnitude of the tactile output pattern refers to an absolute or normalized value representing the maximum displacement of the movable mass relative to a neutral position when generating the tactile output. In some embodiments, the characteristic magnitude of the haptic output pattern may be adjusted according to various conditions (e.g., customized based on user interface context and behavior) and/or preconfigured metrics (e.g., input-based metrics, and/or user interface-based metrics), such as by a fixed or dynamically determined gain factor (e.g., a value between 0 and 1). In some embodiments, a characteristic of an input (e.g., a rate of change of a characteristic intensity of a contact in a press input or a rate of movement of a contact on a touch-sensitive surface) during the input that triggers generation of a tactile output is measured based on a metric of the input (e.g., an intensity change metric or an input speed metric). In some embodiments, a characteristic of a user interface element (e.g., the speed of movement of the element across a hidden or visible boundary in the user interface) during a user interface change that triggers generation of a tactile output is measured based on a metric of the user interface (e.g., a cross-boundary speed metric). In some embodiments, the characteristic amplitude of the tactile output pattern may be modulated by an "envelope", such that the peaks of adjacent cycles have different amplitudes, with one of the waveforms shown above being further modified by multiplication with an envelope parameter that varies over time (e.g., from 0 to 1) to gradually adjust the amplitude of portions of the tactile output over time as the tactile output is generated.
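To make the waveform parameters concrete, the Swift sketch below synthesizes a FullTap-, MiniTap-, or MicroTap-style displacement waveform with 2, 1, or 0.5 cycles at a characteristic frequency, scales it by a gain between 0 and 1, and applies a simple time-varying envelope; the sine carrier and half-sine envelope are illustrative stand-ins for the device's actual waveform tables.

```swift
import Foundation

// Illustrative sketch: synthesize a FullTap-, MiniTap-, or MicroTap-style
// displacement waveform with 2, 1, or 0.5 cycles at a characteristic frequency,
// scale it by a gain between 0 and 1, and apply a time-varying envelope.
enum TapWaveform: Double {
    case fullTap = 2.0, miniTap = 1.0, microTap = 0.5   // cycles in the waveform
}

func tapDisplacementSamples(type: TapWaveform,
                            frequency: Double,       // Hz, e.g., 80...300
                            gain: Double,            // 0.0...1.0
                            sampleRate: Double = 8_000) -> [Double] {
    let duration = type.rawValue / frequency
    let sampleCount = Int(duration * sampleRate)
    return (0..<sampleCount).map { index in
        let t = Double(index) / sampleRate
        // The sine carrier produces the requested number of neutral-position crossings;
        // the half-sine envelope stands in for the start and end buffers.
        let carrier = sin(2 * Double.pi * frequency * t)
        let envelope = sin(Double.pi * t / duration)
        return gain * carrier * envelope
    }
}
```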
Although in fig. 4F-4K, for illustrative purposes only specific frequencies, amplitudes, and waveforms are shown in the sample haptic output pattern, haptic output patterns having other frequencies, amplitudes, and waveforms may be used for similar purposes. For example, a waveform having between 0.5 and 4 cycles may be used. Other frequencies in the range of 60Hz-400Hz may also be used.
User interface and associated process
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented on an electronic device, such as portable multifunction device 100 or device 300, having a display, a touch-sensitive surface, one or more tactile output generators (optionally) for generating tactile outputs, and one or more sensors (optionally) for detecting intensities of contacts with the touch-sensitive surface.
Fig. 5A-5AT illustrate example user interfaces for displaying representations of virtual objects when switching from displaying a first user interface region to displaying a second user interface region, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of or a point associated with the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting a contact on the touch-sensitive surface 451 while the user interface shown in the figures is displayed on the display 450 along with a focus selector.
Fig. 5A illustrates a real-world scenario in which the user interfaces described with reference to fig. 5B-5AT are used.
Fig. 5A illustrates a physical space 5002 in which a table 5004 is located. The device 100 is held by a user in the user's hand 5006.
Fig. 5B illustrates an instant message user interface 5008 displayed on display 112. Instant messaging user interface 5008 includes: message bubble 5010, which includes received text message 5012, message bubble 5014, which includes sent text message 5016, and message bubble 5018, which includes a virtual object (e.g., virtual chair 5020) received in the message and a virtual object indicator 5022, which indicates that the virtual chair 5020 is an object that is visible in an augmented reality view (e.g., within a representation of the field of view of one or more cameras of device 100). Instant message user interface 5008 also includes a message input area 5024 configured to display message inputs.
Fig. 5C-5G illustrate inputs that cause a portion of instant message user interface 5008 to be replaced by the field of view of one or more cameras of device 100. In FIG. 5C, contact 5026 with the touch screen 112 of the device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the prompt press intensity threshold ITH, as indicated by intensity level meter 5028. In FIG. 5D, the characteristic intensity of the contact 5026 increases above the prompt press intensity threshold ITH, as indicated by intensity level meter 5028, which causes the area of message bubble 5018 to increase, the size of virtual chair 5020 to increase, and instant message user interface 5008 to begin to blur behind message bubble 5018 (e.g., providing visual feedback to the user of the effect of increasing the characteristic intensity of the contact). In FIG. 5E, the characteristic intensity of the contact 5026 increases above the light press intensity threshold ITL, as indicated by intensity level meter 5028, which causes message bubble 5018 to be replaced by disk surface 5030, the size of virtual chair 5020 to increase further, and instant message user interface 5008 to blur further behind disk surface 5030. In FIG. 5F, the characteristic intensity of the contact 5026 increases above the deep press intensity threshold ITD, as indicated by intensity level meter 5028, which causes tactile output generator 167 of device 100 to output a tactile output (as shown at 5032) indicating that the criteria for replacing a portion of instant messaging user interface 5008 with the field of view of one or more cameras of device 100 have been met.
In some embodiments, before the characteristic intensity of the contact 5026 reaches the deep press intensity threshold ITD (as shown in fig. 5F), the progression shown in fig. 5C-5E is reversible. For example, after the increase shown in FIG. 5D and/or FIG. 5E, decreasing the characteristic intensity of the contact 5026 will cause the interface state corresponding to the decreased intensity level of the contact 5026 to be displayed (e.g., in accordance with a determination that the decreased characteristic intensity of the contact is above the light press intensity threshold ITL, the interface shown in fig. 5E is displayed; in accordance with a determination that the decreased characteristic intensity of the contact is above the prompt press intensity threshold ITH, the interface shown in fig. 5D is displayed; and in accordance with a determination that the decreased characteristic intensity of the contact is below the prompt press intensity threshold ITH, the interface shown in fig. 5C is displayed). In some embodiments, after the increase shown in fig. 5D and/or fig. 5E, decreasing the characteristic intensity of the contact 5026 will cause the interface shown in fig. 5C to be redisplayed.
Fig. 5F-5J illustrate an animated transition during which a portion of the instant message user interface is replaced by the field of view of one or more cameras (hereinafter "cameras") of device 100. From fig. 5F to 5G, the contact 5026 has lifted off the touch screen 112 and the virtual chair 5020 has rotated toward its final position in fig. 5I. In fig. 5G, the field of view 5034 of the camera has begun to fade into view in the disk 5030 (as indicated by the dashed line). In fig. 5H, the field of view 5034 of the camera (e.g., showing a view of the physical space 5002 captured by the camera) has finished fading into view in the disk 5030. From fig. 5H to fig. 5I, the virtual chair 5020 continues to rotate toward its final position in fig. 5I. In fig. 5I, the tactile output generator 167 has output a tactile output (as shown at 5036) indicating that at least one plane (e.g., floor surface 5038) has been detected in the field of view 5034 of the camera. The virtual chair 5020 is placed on the detected plane (e.g., the virtual object is configured to be placed in a vertical orientation on a detected horizontal surface, such as the floor surface 5038, as determined by the device 100). As the portion of the instant message user interface transitions into the representation of the camera's field of view 5034 on the display 112, the virtual chair 5020 is continuously resized on the display 112. For example, the proportion of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 5034 of the camera is determined based on the predefined "real world" size of the virtual chair 5020 and/or the size of objects detected in the field of view 5034 of the camera (such as the table 5004). In fig. 5J, the virtual chair 5020 is displayed in its final position, with a predefined orientation relative to the floor surface detected in the field of view 5034 of the camera. In some embodiments, the initial landing position of the virtual chair 5020 is a predefined position relative to a detected plane in the field of view of the camera, such as the center of an unoccupied area of the detected plane. In some embodiments, the initial landing position of the virtual chair 5020 is determined from the lift-off position of the contact 5026 (e.g., in fig. 5F, the lift-off position of the contact 5026 may be different from the initial touch-down position of the contact 5026, as a result of movement of the contact 5026 on the touchscreen 112 after the criteria for transitioning to the augmented reality environment are met).
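The scaling behavior described above can be approximated with simple pinhole-camera math, as in the Swift sketch below; the focal length, sizes, and distances are hypothetical, and the device's actual scaling relies on its own camera calibration and scene understanding.

```swift
import Foundation

// Illustrative sketch using a pinhole-camera approximation: the on-screen size
// of the virtual object is its predefined real-world size projected through the
// camera, so it stays proportioned against detected physical objects.
func onScreenHeightInPixels(realWorldHeightMeters: Double,
                            distanceFromCameraMeters: Double,
                            focalLengthPixels: Double) -> Double {
    guard distanceFromCameraMeters > 0 else { return 0 }
    return focalLengthPixels * realWorldHeightMeters / distanceFromCameraMeters
}

// Example (hypothetical numbers): a 0.9 m tall chair viewed through a camera
// with a 1500 px focal length renders at 900 px when 1.5 m away and 450 px when
// 3.0 m away, so it shrinks on screen while keeping its real-world scale.
```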
Fig. 5K-5L illustrate movement of the device 100 (e.g., by the user's hand 5006) to adjust the field of view 5034 of the camera. As the device 100 moves relative to the physical space 5002, the displayed field of view 5034 of the camera changes, and the virtual chair 5020 remains at the same position and orientation relative to the floor surface 5038 in the displayed field of view 5034 of the camera.
Fig. 5M-5Q illustrate an input that moves the virtual chair 5020 over the floor surface 5038 in the displayed field of view 5034 of the camera. In fig. 5N, a contact 5040 with the touch screen 112 of the device 100 is detected at a location corresponding to the virtual chair 5020. In fig. 5N-5O, as the contact 5040 moves along the path indicated by arrow 5042, the contact 5040 drags the virtual chair 5020. As the virtual chair 5020 is moved by the contact 5040, the size of the virtual chair 5020 changes to maintain the scale of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 5034 of the camera. For example, in fig. 5N-5P, as the virtual chair 5020 moves from the foreground of the field of view 5034 of the camera to a position that is farther from the device 100 and closer to the table 5004 in the field of view 5034 of the camera, the size of the virtual chair 5020 decreases (e.g., such that the proportion of the chair relative to the table 5004 in the field of view 5034 of the camera is maintained). In addition, as the virtual chair 5020 is moved by the contact 5040, the plane identified in the field of view 5034 of the camera is highlighted. For example, in fig. 5O, the floor plane 5038 is highlighted. In fig. 5O-5P, the contact 5040 continues to drag the virtual chair 5020 as the contact 5040 moves along the path indicated by arrow 5044. In fig. 5Q, the contact 5040 has been lifted off of the touch screen 112. In some embodiments, as shown in fig. 5N-5Q, the path of movement of the virtual chair 5020 is constrained by the floor surface 5038 in the field of view 5034 of the camera, as if the contact 5040 were dragging the virtual chair 5020 across the floor surface 5038. In some embodiments, the contact 5040 as described with reference to fig. 5N-5P is a continuation of the contact 5026 as described with reference to fig. 5C-5F (e.g., the contact 5026 is not lifted off, and the same contact that causes a portion of the instant message user interface 5008 to be replaced by the field of view 5034 of the camera also drags the virtual chair 5020 in the field of view 5034 of the camera).
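Constraining a dragged object to a detected floor plane while keeping its scale consistent with the physical space can be approximated by raycasting the touch location against existing plane geometry. The following Swift sketch assumes an ARSCNView-based setup and hypothetical names (ChairDragHandler, chairNode); it is not the patented implementation.

import ARKit
import SceneKit
import UIKit

// Attach handlePan(_:) to a UIPanGestureRecognizer on the ARSCNView. Raycasting
// against existing plane geometry keeps the dragged chair on the detected floor
// surface, and its apparent size follows the camera's perspective automatically
// because the node moves in world space.
final class ChairDragHandler: NSObject {
    let sceneView: ARSCNView
    let chairNode: SCNNode

    init(sceneView: ARSCNView, chairNode: SCNNode) {
        self.sceneView = sceneView
        self.chairNode = chairNode
        super.init()
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        let screenPoint = gesture.location(in: sceneView)
        guard let query = sceneView.raycastQuery(from: screenPoint,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let result = sceneView.session.raycast(query).first else { return }
        let hit = result.worldTransform.columns.3
        // Move the chair to the hit point on the plane; orientation is unchanged.
        chairNode.simdWorldPosition = SIMD3<Float>(hit.x, hit.y, hit.z)
    }
}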
Fig. 5Q-5U illustrate an input that moves the virtual chair 5020 from the floor surface 5038 to a different plane (e.g., the desktop 5046) detected in the field of view 5034 of the camera. In fig. 5R, a contact 5048 with the touch screen 112 of the device 100 is detected at a location corresponding to the virtual chair 5020. In fig. 5R-5S, as the contact 5048 moves along the path indicated by arrow 5050, the contact 5048 drags the virtual chair 5020. As the virtual chair 5020 is moved by the contact 5048, the size of the virtual chair 5020 changes to maintain the scale of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 5034 of the camera. In addition, while the virtual chair 5020 is moved by the contact 5048, the desktop plane 5046 is highlighted (e.g., as shown in fig. 5S). In fig. 5S-5T, the contact 5048 continues to drag the virtual chair 5020 as the contact 5048 moves along the path indicated by arrow 5052. In fig. 5U, the contact 5048 has been lifted off the touch screen 112, and the virtual chair 5020 is placed on the desktop plane 5046 in a vertical orientation, facing the same direction as before.
Fig. 5U-5AD show an input that drags the virtual chair 5020 to the edge of the touch screen display 112, which causes the field of view 5034 of the camera to cease to be displayed. In fig. 5V, a contact 5054 with the touch screen 112 of the device 100 is detected at a location corresponding to the virtual chair 5020. In fig. 5V-5W, as the contact 5054 moves along the path indicated by arrow 5056, the contact 5054 drags the virtual chair 5020. In fig. 5W-5X, as the contact 5054 moves along the path indicated by arrow 5058, the contact 5054 continues to drag the virtual chair 5020 to the position shown in fig. 5X.
As shown in fig. 5Y to 5AD, the input by the contact 5054 shown in fig. 5U to 5X causes a transition from displaying the field of view 5034 of the camera in the disk 5030 to ceasing to display the field of view 5034 of the camera and returning to fully displaying the instant message user interface 5008. In fig. 5Y, the field of view 5034 of the camera begins to fade out within the disk 5030. In fig. 5Y through 5Z, the disk 5030 transitions back into message bubble 5018. In fig. 5Z, the field of view 5034 of the camera is no longer shown. In fig. 5AA, the instant message user interface 5008 ceases to be blurred and the size of message bubble 5018 returns to its original size (e.g., as shown in fig. 5B).
Fig. 5AA-5AD illustrate an animated transition of the virtual chair 5020 that occurs as the virtual chair 5020 moves from the position corresponding to the contact 5054 in fig. 5AA to the original position of the virtual chair 5020 in the instant messaging user interface 5008 (e.g., as shown in fig. 5B). In fig. 5AB, the contact 5054 has been lifted off of the touch screen 112. In fig. 5AB to 5AC, the size of the virtual chair 5020 gradually increases, and the virtual chair rotates toward its final position in fig. 5AD.
In fig. 5B-5AD, the virtual chair 5020 has substantially the same three-dimensional appearance within the instant messaging user interface 5008 and within the displayed field of view 5034 of the camera, and the virtual chair 5020 maintains the same three-dimensional appearance during the transition from displaying the instant messaging user interface 5008 to displaying the field of view 5034 of the camera and during the reverse transition. In some embodiments, the representation of the virtual chair 5020 has a different appearance in the application user interface (e.g., an instant messaging user interface) than in the augmented reality environment (e.g., in the displayed field of view of the camera). For example, the virtual chair 5020 optionally has a two-dimensional or more stylized appearance in the application user interface and a more realistic, three-dimensional, textured appearance in the augmented reality environment; and the intermediate appearance of the virtual chair 5020 during the transition between displaying the application user interface and displaying the augmented reality environment is a series of appearances interpolated between the two-dimensional appearance and the three-dimensional appearance of the virtual chair 5020.
FIG. 5AE shows an Internet browser user interface 5060. The internet browser user interface 5060 includes a URL/search input area 5062 configured to display URL/search inputs for the web browser, and browser controls 5064 (e.g., navigation controls including a back button and a forward button, a share control for displaying a share interface, a bookmark control for displaying a bookmark interface, and a tab control for displaying a tab interface). The internet browser user interface 5060 also includes network objects 5066, 5068, 5070, 5072, 5074, and 5076. In some embodiments, a respective network object includes a link such that, in response to a tap input on the respective network object, the internet location of the link corresponding to the network object is displayed in the internet browser user interface 5060 (e.g., replacing the display of the respective network object). Network objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Network objects 5070, 5074, and 5076 include two-dimensional images that do not correspond to three-dimensional virtual objects, as indicated by the absence of a virtual object indicator. The virtual object that corresponds to the network object 5068 is a virtual light 5084.
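One plausible way to drive such an indicator is to attach an optional 3D-model reference to each item's data model and show the badge only when that reference exists. The sketch below is a hypothetical illustration; the type names, the use of a collection view cell, and the "cube" placeholder glyph are assumptions and not part of the described interface.

import UIKit

// Hypothetical model: a network object may reference a three-dimensional asset.
struct NetworkObject {
    let image: UIImage
    let modelURL: URL?      // non-nil for objects like 5066, 5068, and 5072
}

// The indicator (compare 5078/5080/5082) is shown only when a 3D model exists.
final class NetworkObjectCell: UICollectionViewCell {
    let imageView = UIImageView()
    let virtualObjectBadge = UIImageView(image: UIImage(systemName: "cube"))  // placeholder glyph

    func configure(with object: NetworkObject) {
        imageView.image = object.image
        virtualObjectBadge.isHidden = (object.modelURL == nil)
    }
}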
Fig. 5AF through 5AH illustrate an input that causes a portion of the internet browser user interface 5060 to be replaced by the field of view 5034 of the camera. In fig. 5AF, a contact 5086 with the touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the prompt press intensity threshold ITH, as indicated by the intensity level meter 5028. In fig. 5AG, the characteristic intensity of the contact 5086 increases above the light press intensity threshold ITL, as shown by the intensity level meter 5028, which causes the field of view 5034 of the camera to be displayed in the region of the network object 5068 (e.g., with the virtual light 5084 overlaid on the field of view). In fig. 5AH, the characteristic intensity of the contact 5086 increases above the deep press intensity threshold ITD, as shown by the intensity level meter 5028, such that the camera's field of view 5034 replaces a larger portion of the internet browser user interface 5060 (e.g., leaving only the URL/search input region 5062 and the browser controls 5064), and the tactile output generator 167 of device 100 outputs a tactile output (as indicated at 5088) indicating that the criteria for replacing a portion of the internet browser user interface 5060 with the camera's field of view 5034 have been met. In some embodiments, the field of view 5034 of the camera completely replaces the internet browser user interface 5060 on the touch screen display 112 in response to the inputs described with reference to fig. 5AF through 5AH.
Fig. 5AI to 5AM show an input that moves the virtual light 5084. In fig. 5AI-5AJ, the contact 5086 drags the virtual light 5084 as the contact 5086 moves along the path indicated by arrow 5090. While the virtual light 5084 is moved by the contact 5086, the size of the virtual light 5084 is unchanged, and the path of the virtual light 5084 is optionally not constrained by the structure of the physical space captured in the field of view of the camera. As the virtual light 5084 is moved by the contact 5086, the plane identified in the field of view 5034 of the camera is highlighted. For example, in fig. 5AJ, the floor plane 5038 is highlighted as the virtual light 5084 is moved over the floor plane 5038. In fig. 5AJ-5AK, the contact 5086 continues to drag the virtual light 5084 as the contact 5086 moves along the path indicated by arrow 5092. In fig. 5AK-5AL, as the contact 5086 moves along the path indicated by arrow 5094, the contact 5086 continues to drag the virtual light 5084; the floor plane 5038 ceases to be highlighted, and the desktop 5046 is highlighted as the virtual light 5084 moves over the table 5004. In fig. 5AM, the contact 5086 has been lifted off of the touch screen 112. When the contact 5086 has lifted off, the virtual light 5084 is resized to have the correct proportions relative to the table 5004 in the camera's field of view 5034, and the virtual light 5084 is placed in a vertical orientation on the desktop 5046 in the camera's field of view 5034.
Fig. 5AM-5AQ illustrate an input that drags the virtual light 5084 to the edge of the touch screen display 112, which causes the field of view 5034 of the camera to cease to be displayed and the internet browser user interface 5060 to be restored. In fig. 5AN, a contact 5096 with the touch screen 112 of the device 100 is detected at a location corresponding to the virtual light 5084. In fig. 5AN-5AO, the contact 5096 drags the virtual light 5084 as the contact 5096 moves along the path indicated by arrow 5098. In fig. 5AO to 5AP, as the contact 5096 moves along the path indicated by arrow 5100, the contact 5096 continues to drag the virtual light 5084 to the position shown in fig. 5AP. In fig. 5AQ, the contact 5096 has been lifted off of the touch screen 112.
As shown in fig. 5AQ through 5AT, the input by the contact 5096 shown in fig. 5AM through 5AP results in a transition from displaying the field of view 5034 of the camera to ceasing to display the field of view 5034 of the camera and returning to fully displaying the internet browser user interface 5060. In fig. 5AR, the field of view 5034 of the camera begins to fade out (as indicated by the dashed line). In fig. 5AR-5AT, the size of the virtual light 5084 increases and the virtual light moves toward its original position in the internet browser user interface 5060. In fig. 5AS, the field of view 5034 of the camera is no longer displayed and the internet browser user interface 5060 begins to fade in (as indicated by the dashed line). In fig. 5AT, the internet browser user interface 5060 is fully displayed and the virtual light 5084 has returned to its original size and position within the internet browser user interface 5060.
Fig. 6A-6AJ illustrate example user interfaces for displaying a first representation of a virtual object in a first user interface area, displaying a second representation of the virtual object in a second user interface area, and displaying a third representation of the virtual object together with a representation of a field of view of one or more cameras, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of, or a point associated with, the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting contacts on the touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on the display 450, along with a focus selector.
Fig. 6A illustrates an instant message user interface 5008, which includes: message bubble 5010, which includes a received text message 5012; message bubble 5014, which includes a transmitted text message 5016; and message bubble 5018, which includes a virtual object (e.g., virtual chair 5020) received in a message and a virtual object indicator 5022 indicating that the virtual chair 5020 is an object that is visible in an augmented reality view (e.g., within a displayed field of view of one or more cameras of device 100). Instant message user interface 5008 is described in further detail with reference to fig. 5B.
Fig. 6B to 6C illustrate an input that rotates the virtual chair 5020. In fig. 6B, a contact 6002 with the touch screen 112 of the device 100 is detected. The contact 6002 moves on the touch screen 112 along the path indicated by arrow 6004. In fig. 6C, in response to the movement of the contact, the instant message user interface 5008 scrolls upward (causing message bubble 5010 to scroll off the display, and causing message bubbles 5014 and 5018 to scroll upward and reveal an additional message bubble 6005) and the virtual chair 5020 rotates (e.g., tilts upward). The magnitude and direction of the rotation of the virtual chair 5020 correspond to the movement of the contact 6002 along the path indicated by arrow 6004. In fig. 6D, the contact 6002 has lifted off the touch screen 112. In some embodiments, this rotational behavior of the virtual chair 5020 within message bubble 5018 serves as an indication that the virtual chair 5020 is a virtual object that is visible in an augmented reality environment that includes the field of view of the camera of device 100.
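A scroll-driven tilt of this kind can be approximated by mapping the scroll view's content offset to a small perspective rotation of the object's two-dimensional representation. The Swift sketch below is illustrative only; the outlet name, the normalization constant, and the maximum tilt angle are assumptions.

import UIKit

// Tilt the chair's representation in proportion to how far the list has scrolled,
// as a visual hint that the item has a corresponding 3D model.
final class MessageListController: UIViewController, UIScrollViewDelegate {
    @IBOutlet weak var chairImageView: UIImageView!   // assumed outlet for the 2D representation

    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        let maxTilt: CGFloat = .pi / 8                                    // assumed maximum tilt
        let progress = max(-1, min(1, scrollView.contentOffset.y / 400))  // assumed normalization
        var transform = CATransform3DIdentity
        transform.m34 = -1 / 500                                          // simple perspective
        chairImageView.layer.transform =
            CATransform3DRotate(transform, maxTilt * progress, 1, 0, 0)   // rotate about the x-axis
    }
}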
Fig. 6E-6L illustrate an input that causes the instant messaging user interface 5008 to be replaced with a staging user interface 6010 and that then changes the orientation of the virtual chair 5020. In fig. 6E, a contact 6006 with the touch screen 112 of the device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the prompt press intensity threshold ITH, as indicated by the intensity level meter 5028. In fig. 6F, the characteristic intensity of the contact 6006 increases above the prompt press intensity threshold ITH, as shown by the intensity level meter 5028, which causes the area of message bubble 5018 to increase, the size of the virtual chair 5020 to increase, and the instant message user interface 5008 to begin to blur behind message bubble 5018 (e.g., providing visual feedback to the user of the effect of increasing the characteristic intensity of the contact). In fig. 6G, the characteristic intensity of the contact 6006 increases above the light press intensity threshold ITL, as shown by the intensity level meter 5028, which causes message bubble 5018 to be replaced by disk 6008, the size of the virtual chair 5020 to increase further, and the instant messaging user interface 5008 to blur further behind disk 6008. In fig. 6H, the characteristic intensity of the contact 6006 increases above the deep press intensity threshold ITD, as shown by the intensity level meter 5028, causing the instant message user interface 5008 to cease to be displayed and initiating a fade-in (indicated by the dashed line) of the staging user interface 6010. In addition, as shown in fig. 6H, the increase of the characteristic intensity of the contact 6006 above the deep press intensity threshold ITD causes the tactile output generator 167 of device 100 to output a tactile output (as indicated at 6012), which indicates that the criteria for replacing the instant message user interface 5008 with the staging user interface 6010 have been met.
In some embodiments, until the characteristic intensity of the contact 6006 reaches the deep press intensity threshold ITD (as shown in fig. 6H), the progression shown in fig. 6E-6G is reversible. For example, after the increase shown in fig. 6F and/or 6G, decreasing the characteristic intensity of the contact 6006 causes an interface state that corresponds to the decreased intensity level of the contact 6006 to be displayed (e.g., in accordance with a determination that the decreased characteristic intensity of the contact is above the light press intensity threshold ITL, the interface shown in fig. 6G is displayed; in accordance with a determination that the decreased characteristic intensity of the contact is above the prompt press intensity threshold ITH, the interface shown in fig. 6F is displayed; and in accordance with a determination that the decreased characteristic intensity of the contact is below the prompt press intensity threshold ITH, the interface shown in fig. 6E is displayed). In some embodiments, after the increase shown in fig. 6F and/or 6G, decreasing the characteristic intensity of the contact 6006 causes the interface shown in fig. 6E to be redisplayed.
In fig. 6I, a staging user interface 6010 is displayed. The staging user interface 6010 includes a gantry 6014 on which the virtual chair 5020 is displayed. From fig. 6H to 6I, the virtual chair 5020 is animated to indicate the transition from the position of the virtual chair 5020 in fig. 6H to the position of the virtual chair 5020 in fig. 6I. For example, the virtual chair 5020 is rotated to a predefined position, in a predefined orientation, and/or at a predefined distance relative to the gantry 6014 (e.g., such that the virtual chair appears to be supported by the gantry 6014). The staging user interface 6010 also includes a back control 6016 that, when activated (e.g., by a tap input at a location corresponding to the back control 6016), causes the previously displayed user interface (e.g., the instant message user interface 5008) to be redisplayed. The staging user interface 6010 also includes a toggle control 6018 that indicates the current display mode (e.g., the current display mode is the staging user interface mode, as indicated by the highlighted "3D" indicator) and that, when activated, causes a transition to the selected display mode. For example, while the staging user interface 6010 is displayed, a tap input by a contact at a location corresponding to the toggle control 6018 (e.g., a location corresponding to the portion of the toggle control 6018 that includes the text "world") causes the staging user interface 6010 to be replaced by the field of view of the camera. The staging user interface 6010 also includes a sharing control 6020 (e.g., a sharing control for displaying a sharing interface).
Fig. 6J to 6L illustrate rotation of the virtual chair 5020 relative to the gantry 6014 caused by movement of the contact 6006. In fig. 6J-6K, the virtual chair 5020 rotates (e.g., about a first axis perpendicular to the movement of the contact 6006) as the contact 6006 moves along the path indicated by arrow 6022. In fig. 6K-6L, the virtual chair 5020 rotates (e.g., about a second axis perpendicular to the movement of the contact 6006) as the contact 6006 moves along the path indicated by arrow 6024 and then along the path indicated by arrow 6025. In fig. 6M, the contact 6006 has lifted off the touch screen 112. In some embodiments, as shown in fig. 6J-6L, the rotation of the virtual chair 5020 is constrained by the surface of the gantry 6014. For example, during rotation of the virtual chair, at least one leg of the virtual chair 5020 remains in contact with the surface of the gantry 6014. In some embodiments, the surface of the gantry 6014 serves as a frame of reference for free rotation and vertical translation of the virtual chair 5020, without imposing specific constraints on the movement of the virtual chair 5020.
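Mapping a drag to rotation about the two axes perpendicular to the movement can be sketched as follows with SceneKit; the handler class, the node name, and the sensitivity constant are assumptions, and the stage-surface constraint described above is omitted for brevity.

import SceneKit
import UIKit

final class StagingRotationHandler: NSObject {
    let stagedNode: SCNNode                   // the chair shown on the stage
    private var startEulerAngles = SCNVector3Zero

    init(stagedNode: SCNNode) {
        self.stagedNode = stagedNode
        super.init()
    }

    // Horizontal movement rotates about the y-axis, vertical movement about the
    // x-axis, i.e., each component rotates about the axis perpendicular to it.
    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: gesture.view)
        switch gesture.state {
        case .began:
            startEulerAngles = stagedNode.eulerAngles
        case .changed:
            let radiansPerPoint: Float = 0.01      // assumed sensitivity
            stagedNode.eulerAngles = SCNVector3(
                x: startEulerAngles.x + Float(translation.y) * radiansPerPoint,
                y: startEulerAngles.y + Float(translation.x) * radiansPerPoint,
                z: startEulerAngles.z)
        default:
            break
        }
    }
}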
Fig. 6N to 6O illustrate an input that adjusts the size of the displayed virtual chair 5020. In fig. 6N, a first contact 6026 and a second contact 6030 with the touch screen 112 are detected. The first contact 6026 moves along the path indicated by arrow 6028 and, while the first contact 6026 moves, the second contact 6030 moves along the path indicated by arrow 6032. In fig. 6N-6O, the size of the displayed virtual chair 5020 increases as the first contact 6026 and the second contact 6030 move along the paths indicated by arrows 6028 and 6032, respectively (e.g., in a depinch gesture in which the contacts move apart). In fig. 6P, the first contact 6026 and the second contact 6030 have lifted off the touch screen 112, and the virtual chair 5020 remains at the increased size after the contacts 6026 and 6030 have lifted off.
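Resizing the staged object with a two-finger depinch, and keeping the new size after lift-off, can be approximated with a pinch gesture recognizer as in the hedged sketch below; the class and node names are assumptions.

import SceneKit
import UIKit

final class StagingScaleHandler: NSObject {
    let stagedNode: SCNNode
    private var startScale = SCNVector3(x: 1, y: 1, z: 1)

    init(stagedNode: SCNNode) {
        self.stagedNode = stagedNode
        super.init()
    }

    // Two contacts moving apart enlarge the staged chair; because the node's scale
    // is only ever overwritten during the gesture, the new size persists after the
    // contacts lift off, as in FIGS. 6N-6P.
    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        switch gesture.state {
        case .began:
            startScale = stagedNode.scale
        case .changed:
            let factor = Float(gesture.scale)
            stagedNode.scale = SCNVector3(x: startScale.x * factor,
                                          y: startScale.y * factor,
                                          z: startScale.z * factor)
        default:
            break
        }
    }
}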
Fig. 6Q-6U illustrate an input that causes the staging user interface 6010 to be replaced by the field of view 6036 of one or more cameras of the device 100. In fig. 6Q, a contact 6034 with the touch screen 112 of the device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the prompt press intensity threshold ITH, as indicated by the intensity level meter 5028. In fig. 6R, the characteristic intensity of the contact 6034 increases above the prompt press intensity threshold ITH, as indicated by the intensity level meter 5028, which causes the staging user interface 6010 to begin to blur behind the virtual chair 5020 (as indicated by the dashed line). In fig. 6S, the characteristic intensity of the contact 6034 increases above the light press intensity threshold ITL, as shown by the intensity level meter 5028, causing the staging user interface 6010 to cease to be displayed and initiating a fade-in (indicated by the dashed line) of the camera's field of view 6036. In fig. 6T, the characteristic intensity of the contact 6034 increases above the deep press intensity threshold ITD, as indicated by the intensity level meter 5028, causing the field of view 6036 of the camera to be displayed. In addition, as shown in fig. 6T, the increase of the characteristic intensity of the contact 6034 above the deep press intensity threshold ITD causes the tactile output generator 167 of device 100 to output a tactile output (as indicated at 6038) indicating that the criteria for replacing the display of the staging user interface 6010 with the display of the field of view 6036 of the camera have been met. In fig. 6U, the contact 6034 has been lifted off of the touch screen 112. In some embodiments, until the characteristic intensity of the contact 6034 reaches the deep press intensity threshold ITD (as shown in fig. 6T), the progression shown in fig. 6Q to 6T is reversible. For example, after the increase shown in fig. 6R and/or 6S, decreasing the characteristic intensity of the contact 6034 causes the interface state corresponding to the decreased intensity level of the contact 6034 to be displayed.
From fig. 6Q-6U, the virtual chair 5020 is placed on the detected plane (e.g., the device 100 determines that the virtual chair 5020 is configured to be placed in a vertical orientation on a detected horizontal surface, such as the floor surface 5038), and the size of the virtual chair 5020 is adjusted (e.g., the scale of the virtual chair 5020 relative to the physical space 5002, as shown in the field of view 6036 of the camera, is determined based on a predefined "real world" size of the virtual chair 5020 and/or the size of an object detected in the field of view 6036 of the camera, such as the table 5004). When the virtual chair 5020 transitions from the staging user interface 6010 to the field of view 6036 of the camera, the orientation of the virtual chair 5020 that resulted from rotating the virtual chair 5020 while the staging user interface 6010 was displayed (e.g., as described with reference to fig. 6J-6K) is maintained. For example, the orientation of the virtual chair 5020 relative to the floor surface 5038 is the same as the final orientation of the virtual chair 5020 relative to the surface of the gantry 6014. In some embodiments, adjustments to the size of the virtual chair 5020 made in the staging user interface are also taken into account when adjusting the size of the virtual chair 5020 relative to the physical space 5002 in the field of view 6036.
Fig. 6V-6Y illustrate an input that causes the field of view 6036 of the camera to be replaced by the staging user interface 6010. In fig. 6V, an input (e.g., a tap input) by the contact 6040 is detected at a location corresponding to the toggle control 6018 (e.g., a location corresponding to the portion of the toggle control 6018 that includes the text "3D"). In fig. 6W to 6Y, in response to the input by the contact 6040, the field of view 6036 of the camera fades out (as indicated by the dashed line in fig. 6W), the staging user interface 6010 fades in (as indicated by the dashed line in fig. 6X), and the staging user interface 6010 is then fully displayed (as shown in fig. 6Y). From fig. 6V to 6Y, the size of the virtual chair 5020 is adjusted and the position of the virtual chair 5020 changes (e.g., the virtual chair 5020 returns to a predefined position and size for the staging user interface).
Fig. 6Z-6AC illustrate an input that causes the staging user interface 6010 to be replaced with the instant messaging user interface 5008. In fig. 6Z, an input (e.g., a tap input) by the contact 6042 is detected at a location corresponding to the back control 6016. In fig. 6AA-6AC, in response to the input by the contact 6042, the staging user interface 6010 fades out (as indicated by the dashed line in fig. 6AA), the instant message user interface 5008 fades in (as indicated by the dashed line in fig. 6AB), and the instant message user interface 5008 is then fully displayed (as shown in fig. 6AC). From fig. 6Z to 6AB, the size, orientation, and position of the virtual chair 5020 are continuously adjusted on the display (e.g., to return the virtual chair 5020 to a predefined position, size, and orientation for the instant messaging user interface 5008).
Fig. 6AD-6AJ illustrate an input that causes the instant message user interface 5008 to be replaced by the field of view 6036 of the camera (e.g., bypassing display of the staging user interface 6010). In fig. 6AD, a contact 6044 is detected at a location corresponding to the virtual chair 5020. The input by the contact 6044 includes a long touch gesture (during which the contact 6044 remains at a location on the touch-sensitive surface that corresponds to the representation of the virtual object 5020, with less than a threshold amount of movement, for at least a predefined threshold amount of time) and a subsequent swipe-up gesture (dragging the virtual chair 5020 upward). As shown in fig. 6AD to 6AE, as the contact 6044 moves along the path indicated by arrow 6046, the virtual chair 5020 is dragged upward. In fig. 6AE, the instant message user interface 5008 fades out behind the virtual chair 5020. As shown in fig. 6AE to 6AF, as the contact 6044 moves along the path indicated by arrow 6048, the virtual chair 5020 continues to be dragged upward. In fig. 6AF, the field of view 6036 of the camera fades in behind the virtual chair 5020. In fig. 6AG, the field of view 6036 of the camera is fully displayed in response to the input by the contact 6044 that includes the long touch gesture followed by the swipe-up gesture. In fig. 6AH, the contact 6044 lifts off the touch screen 112. In fig. 6AH-6AJ, in response to the lift-off of the contact 6044, the virtual chair 5020 is released (e.g., because the virtual chair 5020 is no longer constrained or dragged by the contact) and falls onto a plane that corresponds to the virtual chair 5020 (e.g., the floor surface 5038, because the device determines that the virtual chair 5020 is configured to be placed on a horizontal (floor) surface). In addition, as shown in fig. 6AJ, the tactile output generator 167 of the device 100 outputs a tactile output (as indicated at 6050) indicating that the virtual chair 5020 has landed on the floor surface 5038.
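A long touch followed by a drag of the same contact can be recognized with a single long-press gesture recognizer, since it continues to report the contact's location after recognition. The following sketch is an assumed wiring, not the described implementation; the duration, allowable movement, and callback names are placeholders.

import UIKit

// A long press followed by an upward drag: recognition of the press arms the
// transition, and continued movement of the same contact drags the chair while
// the camera view fades in behind it (compare FIGS. 6AD-6AG).
final class LongPressDragController: NSObject {
    var onBeginTransition: (() -> Void)?            // e.g., start fading in the camera view
    var onDrag: ((CGPoint) -> Void)?                // e.g., move the chair's representation
    var onRelease: ((CGPoint) -> Void)?             // e.g., drop the chair onto a detected plane

    func attach(to view: UIView) {
        let recognizer = UILongPressGestureRecognizer(target: self,
                                                      action: #selector(handle(_:)))
        recognizer.minimumPressDuration = 0.5       // assumed long-touch threshold
        recognizer.allowableMovement = 10           // less-than-threshold movement during the press
        view.addGestureRecognizer(recognizer)
    }

    @objc private func handle(_ gesture: UILongPressGestureRecognizer) {
        let location = gesture.location(in: gesture.view)
        switch gesture.state {
        case .began:
            onBeginTransition?()
        case .changed:
            onDrag?(location)       // the recognizer keeps tracking the contact after recognition
        case .ended, .cancelled:
            onRelease?(location)
        default:
            break
        }
    }
}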
Fig. 7A-7P illustrate example user interfaces for displaying items with visual indications that the items correspond to virtual three-dimensional objects, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of, or a point associated with, the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting contacts on the touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on the display 450, along with a focus selector.
Fig. 7A illustrates an input detected while displaying the user interface 400 of the application menu. The input corresponds to a request to display a first user interface (e.g., an internet browser user interface 5060). In fig. 7A, an input (e.g., a tap input) by contact 7000 is detected at a position corresponding to the icon 420 of the browser module 147. In response to the input, an Internet browser user interface 5060 is displayed, as shown in FIG. 7B.
Fig. 7B shows an internet browser user interface 5060 (e.g., as described in detail with reference to fig. 5 AE). Internet browser user interface 5060 includes network objects 5066, 5068, 5070, 5072, 5074, and 5076. Network objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Network objects 5070, 5074, and 5076 include two-dimensional images (but the two-dimensional images of network objects 5070, 5074, and 5076 do not correspond to three-dimensional virtual objects, as indicated by the absence of a virtual object indicator).
Fig. 7C-7D illustrate an input that causes the internet browser user interface 5060 to pan (e.g., scroll). In fig. 7B, a contact 7002 with the touch screen 112 is detected. In fig. 7C-7D, as the contact 7002 moves along the path indicated by arrow 7004, network objects 5066, 5068, 5070, 5072, 5074, and 5076 scroll upward, revealing additional network objects 7003 and 7005. In addition, as the contact 7002 moves along the path indicated by arrow 7004, the virtual objects in network objects 5066, 5068, and 5072 (which include virtual object indicators 5078, 5080, and 5082, respectively) rotate (e.g., tilt upward) in accordance with the direction of the input (vertically upward). For example, the virtual light 5084 tilts upward from the first orientation in fig. 7C to the second orientation in fig. 7D. As the contact scrolls the internet browser user interface 5060, the two-dimensional images of network objects 5070, 5074, and 5076 do not rotate. In fig. 7E, the contact 7002 has been lifted off the touch screen 112. In some embodiments, the rotational behavior of the objects depicted in network objects 5066, 5068, and 5072 serves as a visual indication that these network objects have corresponding three-dimensional virtual objects that are visible in the augmented reality environment, while the absence of such rotational behavior for the objects depicted in network objects 5070, 5074, and 5076 serves as a visual indication that those network objects do not have corresponding three-dimensional virtual objects visible in the augmented reality environment.
Fig. 7F-7G illustrate the parallax effect, where a virtual object rotates on the display in response to changes in the orientation of the device 100 relative to the physical world.
Fig. 7F1 shows device 100 being held in the user's hand 5006 by user 7006 such that device 100 has a substantially vertical orientation. Fig. 7F2 shows the internet browser user interface 5060 displayed by device 100 while device 100 is in the orientation shown in fig. 7F1.
Fig. 7G1 shows device 100 being held by user 7006 in the user's hand 5006 such that device 100 has a substantially horizontal orientation. Fig. 7G2 shows the internet browser user interface 5060 displayed by device 100 while device 100 is in the orientation shown in fig. 7G1. From fig. 7F2 to fig. 7G2, the virtual objects in network objects 5066, 5068, and 5072 (which include virtual object indicators 5078, 5080, and 5082, respectively) rotate (e.g., tilt upward) in accordance with the change in the orientation of the device. For example, the virtual light 5084 tilts upward from the first orientation in fig. 7F2 to the second orientation in fig. 7G2 in accordance with the concurrent change in the orientation of the device in physical space. When the orientation of the device changes, the two-dimensional images of network objects 5070, 5074, and 5076 do not rotate. In some embodiments, the rotational behavior of the objects depicted in network objects 5066, 5068, and 5072 serves as a visual indication that these network objects have corresponding three-dimensional virtual objects that are visible in the augmented reality environment, while the absence of such rotational behavior for the objects depicted in network objects 5070, 5074, and 5076 serves as a visual indication that those network objects do not have corresponding three-dimensional virtual objects visible in the augmented reality environment.
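A parallax hint of this kind can be approximated by feeding device-motion attitude into a small perspective rotation of only those item views that have associated 3D models. The Core Motion sketch below is illustrative; the damping factor, update rate, and view wiring are assumptions.

import CoreMotion
import UIKit

// Tilt the 2D representations of virtual objects as the device's orientation
// changes, so items that have 3D models respond to device motion (the parallax
// hint in FIGS. 7F-7G) while plain images stay still.
final class ParallaxController {
    private let motionManager = CMMotionManager()
    let virtualObjectViews: [UIImageView]           // only views whose items have 3D models

    init(virtualObjectViews: [UIImageView]) {
        self.virtualObjectViews = virtualObjectViews
    }

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let motion = motion else { return }
            // Pitch is roughly 0 when the device is flat and pi/2 when upright.
            let tilt = CGFloat(motion.attitude.pitch) * 0.3     // assumed damping factor
            var transform = CATransform3DIdentity
            transform.m34 = -1 / 500                            // simple perspective
            for view in self.virtualObjectViews {
                view.layer.transform = CATransform3DRotate(transform, tilt, 1, 0, 0)
            }
        }
    }
}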
Fig. 7H-7L illustrate an input corresponding to a request to display a second user interface (e.g., the instant message user interface 5008). In fig. 7H, a contact 7008 is detected at a location corresponding to the lower edge of the display 112. In fig. 7H-7I, the contact 7008 moves upward along the path indicated by arrow 7010. In fig. 7I-7J, the contact 7008 continues to move upward along the path indicated by arrow 7012. In fig. 7H-7J, as the contact 7008 moves up from the lower edge of the display 112, the size of the internet browser user interface 5060 is reduced, as shown in fig. 7I; and in fig. 7J, a multitasking user interface 7012 is displayed (e.g., in response to the upward edge swipe gesture by the contact 7008). The multitasking user interface 7012 is configured to allow selection from among user interfaces of various applications with a preserved state (e.g., the preserved state is the last state of a respective application from when that application was most recently the foreground application executing on the device) and various control interfaces (e.g., control center user interface 7014, internet browser user interface 5060, and instant message user interface 5008, shown in fig. 7J). In fig. 7K, the contact 7008 has been lifted off of the touch screen 112. In fig. 7L, an input (e.g., a tap input) by contact 7016 is detected at a location corresponding to the instant message user interface 5008. In response to the input by contact 7016, the instant message user interface 5008 is displayed, as shown in fig. 7M.
Fig. 7M illustrates an instant message user interface 5008 (e.g., as described in further detail with reference to fig. 5B) that includes a message bubble 5018 that includes a virtual object (e.g., a virtual chair 5020) received in a message and a virtual object indicator 5022 that indicates that the virtual chair 5020 is a virtual three-dimensional object (e.g., an object that is visible in an augmented reality view and/or an object that is visible from a different angle). Instant messaging user interface 5008 also includes a message bubble 6005 that includes sent text messages and a message bubble 7018 that includes received text messages that include emoticons 7020. The emoticon 7020 is a two-dimensional image that does not correspond to a virtual three-dimensional object. For this reason, the displayed emoticon 7020 does not have a virtual object indicator.
Fig. 7N illustrates a map user interface 7022 that includes a map 7024, a point of interest information area 7026 for a first point of interest, and a point of interest information area 7032 for a second point of interest. For example, the first and second points of interest are search results within or near an area shown in the map 7024 that corresponds to the search entry "Apple" in the search input area 7025. In the first point of interest information region 7026, a first point of interest object 7028 is displayed having a virtual object indicator 7030 indicating that the first point of interest object 7028 is a virtual three-dimensional object. In the second point of interest information region 7032, the second point of interest object 7034 is displayed without a virtual object indicator because the second point of interest object 7034 does not correspond to a virtual three-dimensional object that is visible in the augmented reality view.
FIG. 7O illustrates a file management user interface 7036 that includes a file management control 7038, a file management search input area 7040, a file information area 7042 for a first file (e.g., a Portable Document Format (PDF) file), a file information area 7044 for a second file (e.g., a photo file), a file information area 7046 for a third file (e.g., a virtual chair object), and a file information area 7048 for a fourth file (e.g., a PDF file). The third file information region 7046 includes a virtual object indicator 7050 displayed adjacent to the file preview object 7045 of the file information region 7046, the virtual object indicator indicating that the third file corresponds to a virtual three-dimensional object. The first file information region 7042, the second file information region 7044, and the fourth file information region 7048 are displayed without virtual object indicators because the files corresponding to these file information regions do not have corresponding virtual three-dimensional objects that are visible in the augmented reality environment.
Fig. 7P illustrates an email user interface 7052 that includes an email navigation control 7054, an email information region 7056, and an email content region 7058 that includes a representation of a first attachment 7060 and a representation of a second attachment 7062. The representation of the first attachment 7060 includes a virtual object indicator 7064 that indicates that the first attachment is a virtual three-dimensional object that is visible in the augmented reality environment. The second attachment 7062 is displayed without a virtual object indicator because the second attachment is not a virtual three-dimensional object visible in the augmented reality environment.
Fig. 8A-8E are flow diagrams illustrating a method 800 of displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region, according to some embodiments. Method 800 is performed at an electronic device (e.g., device 300 in fig. 3 or portable multifunction device 100 in fig. 1A) having a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and touch-sensitive surface). In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 800 are optionally combined, and/or the order of some operations is optionally changed.
The method 800 involves detecting an input by a contact on a touch-sensitive surface of a device while a representation of a virtual object is displayed in a first user interface area. In response to the input, the device uses criteria to determine whether to continuously display the representation of the virtual object while replacing the display of at least a portion of the first user interface area with a field of view of one or more cameras of the device. Using criteria to determine whether to continuously display a representation of a virtual object while replacing display of at least a portion of the first user interface area with a field of view of one or more cameras enables a plurality of different types of operations to be performed in response to an input. Enabling a plurality of different types of operations to be performed in response to an input (e.g., by replacing display of at least a portion of the user interface with a field of view of one or more cameras, or by maintaining display of the first user interface region without replacing display of at least a portion of the first user interface region with a representation of the field of view of one or more cameras) increases the efficiency with which a user can perform such operations, thereby enhancing operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
The device displays (802) a representation of a virtual object (e.g., a graphical representation of a three-dimensional object, such as the virtual chair 5020, the virtual light 5084, a shoe, furniture, a hand tool, a decorative item, a person, an emoticon, a game character, virtual furniture, etc.) in a first user interface area (e.g., a two-dimensional graphical user interface or a portion thereof (e.g., a browsable list of furniture images, an image containing one or more selectable objects, etc.)) on the display 112. For example, the first user interface area is the instant messaging user interface 5008 as shown in fig. 5B or the internet browser user interface 5060 as shown in fig. 5AE. In some embodiments, the first user interface region includes a background other than an image of the physical environment surrounding the device (e.g., the background of the first user interface region is a preselected background color/pattern or a background image that is different from output images simultaneously captured by the one or more cameras and different from live content in the field of view of the one or more cameras).
While the first representation of the virtual object is displayed in the first user interface area on the display, the device detects (804) a first input by a contact at a location on the touch-sensitive surface 112 that corresponds to the representation of the virtual object on the display (e.g., the contact is detected on the first representation of the virtual object on the touch screen display, or the contact is detected on an affordance that is displayed in the first user interface area concurrently with the first representation of the virtual object and that is configured to trigger display of an AR view of the virtual object when invoked by the contact). For example, the first input is the input by the contact 5026 as described with reference to fig. 5C-5F or the input by the contact 5086 as described with reference to fig. 5AF-5AL.
In response to detecting the first input by the contact (806), in accordance with a determination that the first input by the contact satisfies first (e.g., AR-trigger) criteria (e.g., the AR-trigger criteria are criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, a hard press with an intensity above a predefined intensity threshold, or another type of predefined input gesture that is associated with triggering activation of the cameras, display of an augmented reality (AR) view of the physical environment surrounding the device, placement of a three-dimensional representation of the virtual object inside the augmented reality view of the physical environment, and/or a combination of two or more of the above): the device displays a second user interface region on the display, including replacing display of at least a portion of the first user interface region with a representation of the field of view of the one or more cameras, and the device continuously displays the representation of the virtual object when switching from displaying the first user interface region to displaying the second user interface region. For example, the second user interface region on the display is the field of view 5034 of the camera in the disk 5030 as described with reference to fig. 5H, or the field of view 5034 of the camera as described with reference to fig. 5AH. In fig. 5C-5I, in accordance with a determination that the input by the contact 5026 has a characteristic intensity that increases above the deep press intensity threshold ITD, the virtual chair object 5020 is continuously displayed when switching from displaying the first user interface region (the instant messaging user interface 5008) to displaying the second user interface region, in which a portion of the display of the instant messaging user interface 5008 is replaced with the field of view 5034 of the camera in the disk 5030. In fig. 5AF-5AH, in accordance with a determination that the input by the contact 5086 has a characteristic intensity that increases above the deep press intensity threshold ITD, the virtual light object 5084 is continuously displayed when switching from displaying the first user interface area (the internet browser user interface 5060) to displaying the second user interface area, in which a portion of the display of the internet browser user interface 5060 is replaced with the field of view 5034 of the camera.
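The continuity requirement (the virtual object stays visible while part of the first user interface region is replaced by the camera feed) can be sketched by inserting the AR view behind the object's view and fading it in. The Swift sketch below uses assumed view names and an assumed animation duration, and is not the described implementation.

import ARKit
import UIKit

// Fade the camera feed in behind the virtual object's view so the object remains
// continuously visible while part of the first user interface region is replaced
// (a sketch of the transition in FIGS. 5F-5H). objectView is assumed to already
// be a subview of containerView.
func transitionToCameraView(in containerView: UIView,
                            objectView: UIView,
                            sceneView: ARSCNView) {
    sceneView.alpha = 0
    sceneView.frame = containerView.bounds
    containerView.insertSubview(sceneView, belowSubview: objectView)
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]
    sceneView.session.run(configuration)
    UIView.animate(withDuration: 0.35) {
        sceneView.alpha = 1          // the camera view fades in (dashed outline in FIG. 5G)
    }
}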
In some embodiments, continuously displaying the representation of the virtual object includes maintaining display of the representation of the virtual object or displaying an animated transition in which a first representation of the virtual object changes to a second representation of the virtual object (e.g., a view of the virtual object that is of a different size, from a different perspective, with a different rendering style, or at a different location on the display). In some embodiments, the field of view 5034 of the one or more cameras displays a real-time image of the physical environment 5002 around the device that is updated in real-time as the position and orientation of the device relative to the physical environment changes (e.g., as shown in fig. 5K-5L). In some embodiments, the second user interface region completely replaces the first user interface on the display.
In some embodiments, the second user interface area covers a portion of the first user interface area (e.g., a portion of the first user interface area is shown along an edge of the display or around a border of the display). In some embodiments, the second user interface area pops up next to the first user interface area. In some embodiments, the background within the first user interface region is replaced by the contents of the field of view 5034 of the camera. In some embodiments, the device displays an animated transition showing the virtual object moving and rotating from a first orientation as shown in the first user interface area (e.g., as shown in fig. 5E-5I) to a second orientation (e.g., an orientation predefined relative to a current orientation of a portion of the physical environment captured in the field of view of the one or more cameras). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object while displaying the first user interface region to displaying a three-dimensional representation of the virtual object while displaying the second user interface region. In some embodiments, the three-dimensional representation of the virtual object has an anchor plane that is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the first user interface region). When transitioning to an augmented reality view (e.g., a second user interface region), the three-dimensional representation of the virtual object is moved, resized, and reoriented such that the virtual object arrives from an original location on the display to a new location on the display (e.g., to a center of the augmented reality view, or to another predefined location in the augmented reality view), and, during the movement or at the end of the movement, reoriented such that the three-dimensional representation of the virtual object is at a predefined position and/or orientation relative to a predefined plane identified in the field of view of the one or more cameras (e.g., a physical surface, such as a vertical wall or a horizontal floor surface, that may serve as a support plane for the three-dimensional representation of the virtual object).
In some implementations, the first criteria include (808) a criterion that is met when the contact remains at a location on the touch-sensitive surface that corresponds to the representation of the virtual object, with less than a threshold amount of movement, for at least a predefined amount of time (e.g., a long-press time threshold). In some embodiments, in accordance with a determination that the contact satisfies criteria for recognizing another type of gesture (e.g., a tap), the device instead performs another predefined function other than triggering the AR user interface, while maintaining the display of the virtual object. Determining whether to continuously display the representation of the virtual object while replacing the display of at least a portion of the first user interface area with the field of view of the camera, in accordance with whether the contact remains at a location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time, enables a plurality of different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first criteria include (810) a criterion that is met when a characteristic intensity of the contact increases above a first intensity threshold (e.g., the light press intensity threshold ITL or the deep press intensity threshold ITD). For example, as described with reference to fig. 5C-5F, the criterion is met when the characteristic intensity of the contact 5026 increases above the deep press intensity threshold ITD, as indicated by the intensity level meter 5028. In some embodiments, in accordance with a determination that the contact satisfies criteria for recognizing another type of gesture (e.g., a tap), the device instead performs another predefined function other than triggering the AR user interface, while maintaining the display of the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., the first input is an input whose duration, between touch-down of the contact and lift-off of the contact, is greater than a tap time threshold). Determining whether to continuously display the representation of the virtual object while replacing the display of at least a portion of the first user interface region with the field of view of the camera, in accordance with whether the characteristic intensity of the contact increases above a first intensity threshold, enables a plurality of different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first criteria include (812) a criterion that is met when movement of the contact satisfies a predefined movement criterion (e.g., the contact moves beyond a predefined threshold position on the touch-sensitive surface, such as a position corresponding to a boundary of the first user interface region or a position a threshold distance from the original position of the contact; the contact moves at a speed greater than a predefined threshold speed; or the movement of the contact ends with a press input). In some embodiments, during an initial portion of the movement of the contact, the representation of the virtual object is dragged by the contact, and when the movement of the contact is about to satisfy the predefined movement criterion, the virtual object stops moving under the contact to indicate that the first criteria are about to be satisfied; and, if the movement of the contact continues and the continued movement of the contact satisfies the predefined movement criterion, the device begins the transition to displaying the second user interface region and displays the virtual object within the augmented reality view. In some embodiments, when the virtual object is dragged during the initial portion of the first input, the object size and viewing perspective do not change, and once the augmented reality view is displayed and the virtual object falls to a position in the augmented reality view, the virtual object is displayed with a size and viewing perspective that depend on the physical position represented by the falling position of the virtual object in the augmented reality view. Determining whether to continuously display the representation of the virtual object while replacing the display of at least a portion of the first user interface area with the field of view of the camera, in accordance with whether the movement of the contact satisfies the predefined movement criterion, enables a plurality of different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
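The movement-based variant of the first criteria can be sketched as a simple predicate over a pan gesture's translation and velocity; the threshold values below are assumptions chosen only to make the example concrete.

import UIKit

// Assumed thresholds for the movement-based variant of the first criteria (812):
// the criteria are met when the contact either travels far enough or moves fast enough.
let minimumTravelDistance: CGFloat = 120      // points; assumed threshold distance from the origin
let minimumSpeed: CGFloat = 800               // points per second; assumed threshold speed

func movementCriteriaAreMet(for gesture: UIPanGestureRecognizer, in view: UIView) -> Bool {
    let translation = gesture.translation(in: view)
    let velocity = gesture.velocity(in: view)
    let distance = (translation.x * translation.x + translation.y * translation.y).squareRoot()
    let speed = (velocity.x * velocity.x + velocity.y * velocity.y).squareRoot()
    return distance > minimumTravelDistance || speed > minimumSpeed
}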
In some embodiments, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact has satisfied the first criteria, the device outputs (814), with the one or more tactile output generators 167, a tactile output indicating that the first input satisfies the first criteria (e.g., tactile output 5032 as described with reference to fig. 5F, or tactile output 5088 as described with reference to fig. 5AH). In some embodiments, the tactile output is generated before the field of view of the one or more cameras appears on the display. For example, satisfaction of the first criteria triggers activation of the one or more cameras and, subsequently, detection of a plane in the field of view of the one or more cameras. Because of the time required to activate the cameras and make the field of view ready for display, the tactile output serves as a non-visual signal to the user that the necessary input has been detected by the device and that the augmented reality user interface will be presented as soon as the device is ready.
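On iOS, feedback of this kind is typically produced with a feedback generator that is prepared ahead of time to minimize latency, as in the hedged sketch below; the wrapper class, its method names, and the chosen feedback style are assumptions.

import UIKit

// A sketch of the non-visual confirmation described above: prepare the generator
// when the interaction starts so the tactile output can be emitted with minimal
// latency the moment the first criteria are satisfied.
final class ARTriggerFeedback {
    private let generator = UIImpactFeedbackGenerator(style: .medium)

    func interactionBegan() {
        generator.prepare()                  // warms up the haptic hardware
    }

    func firstCriteriaMet() {
        generator.impactOccurred()           // tactile output such as 5032 or 5088
    }
}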
Outputting a tactile output indicating that a criterion (e.g., for replacing a display of at least a portion of a user interface with a field of view of a camera) is satisfied provides feedback to a user indicating that the provided input satisfies the criterion. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting at least an initial portion of the first input (e.g., including detecting contact, or detecting input by contact that satisfies respective predefined criteria but does not satisfy the first criteria, or detecting input that satisfies the first criteria), the device analyzes (816) the field of view of the one or more cameras to detect one or more planes (e.g., floor surface 5038, desktop 5046, walls, etc.) in the field of view of the one or more cameras. In some embodiments, one or more cameras are activated in response to detecting at least an initial portion of the first input, and plane detection is initiated while the cameras are activated. In some embodiments, the display of the field of view of the one or more cameras is delayed after activating the one or more cameras (e.g., from the time the one or more cameras are activated to the time at which at least one plane is detected in the field of view of the cameras). In some embodiments, display of the field of view of the one or more cameras is initiated at the time the one or more cameras are activated, and the plane detection is completed after the field of view has been visible on the display (e.g., in the second user interface area). In some embodiments, after detecting the respective planes in the field of view of the one or more cameras, the device determines a size and/or position of the representation of the virtual object based on a position of the respective planes relative to the field of view of the one or more cameras. In some embodiments, as the electronic device moves, the size and/or position of the representation of the virtual object is updated as the position of the field of view of the one or more cameras relative to the respective plane changes (e.g., as described with reference to fig. 5K-5L). Determining the size and/or position of the representation of the virtual object based on the position of the respective plane detected in the field of view of the camera (e.g., without further user input to size and/or position the virtual object relative to the field of view of the camera) enhances operability of the device, which in turn reduces power usage and extends battery life of the device by enabling a user to more quickly and efficiently use the device.
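One way to realize "analyzing the field of view of the cameras to detect planes" on an Apple device is ARKit's plane detection, shown in the sketch below. The choice of framework is an assumption for illustration, not a statement of the described implementation.

```swift
import ARKit

// Minimal ARKit-style sketch: activating the cameras and starting plane detection.
final class PlaneDetector: NSObject, ARSessionDelegate {
    let session = ARSession()

    func startDetectingPlanes() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical] // floors, tabletops, walls
        session.delegate = self
        session.run(configuration) // activates the cameras and begins analysis of their field of view
    }

    // Called as planes (e.g. a floor surface or a tabletop) are detected in the field of view.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for plane in anchors.compactMap({ $0 as? ARPlaneAnchor }) {
            print("Detected a \(plane.alignment) plane with anchor \(plane.identifier)")
        }
    }
}
```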
In some embodiments, in response to detecting the contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display (e.g., in response to detecting contact 5026 at a location on the touch screen 112 that corresponds to the virtual chair 5020), analysis of the field of view of the one or more cameras is initiated (818) to detect one or more planes in the field of view of the one or more cameras. For example, activation of the cameras and detection of a plane in the field of view of the cameras begin before the first input satisfies the first criteria (e.g., before the characteristic intensity of the contact 5026 increases above the deep press intensity threshold IT_D, as described with reference to fig. 5F) and prior to displaying the second user interface region. By starting to detect planes as soon as any interaction with the virtual object is detected, plane detection can be completed before the AR trigger criteria are met, so there is no visible delay when the first input satisfies the AR trigger criteria and the virtual object transitions into the augmented reality view. Initiating analysis to detect one or more planes in the field of view of the camera in response to detecting contact at the location of the representation of the virtual object (e.g., without further user input to initiate analysis of the field of view of the camera) improves the efficiency of the device, which in turn reduces power usage and extends the battery life of the device by enabling a user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the first input by the contact satisfies the first criteria (e.g., in response to detecting that the characteristic intensity of the contact 5026 increases above the deep press intensity threshold IT_D, as described with reference to fig. 5F), analysis of the field of view of the one or more cameras is initiated (820) to detect one or more planes in the field of view of the one or more cameras. For example, when the first input satisfies the first criteria, activation of the cameras and detection of a plane in the field of view of the cameras are initiated, and the field of view of the cameras is displayed before plane detection is complete. By starting camera activation and plane detection only when the AR trigger criteria are met, the cameras and plane detection are not activated and kept running unnecessarily, which saves battery power and extends battery life and camera life.
In some embodiments, in response to detecting that an initial portion of the first input satisfies plane detection trigger criteria but not the first criteria, analysis of the field of view of the one or more cameras is initiated (822) to detect one or more planes in the field of view of the one or more cameras. For example, when the initial portion of the first input meets certain criteria (e.g., criteria less stringent than the AR trigger criteria), activation of the cameras and detection of a plane in the field of view of the cameras are initiated, and the field of view of the cameras is optionally displayed before plane detection is complete. By initiating camera activation and plane detection after certain criteria are met, rather than as soon as the contact is detected, the cameras and plane detection are not activated and kept running unnecessarily, which saves battery power and extends battery life and camera life. By starting camera activation and plane detection before the AR trigger criteria are satisfied, the delay (caused by camera activation and plane detection) in displaying the virtual object transitioning into the augmented reality view when the first input satisfies the AR trigger criteria is reduced.
In some embodiments, the device displays (824) a representation of the virtual object in the second user interface region in a corresponding manner such that the virtual object (e.g., the virtual chair 5020) is oriented at a predefined angle relative to the respective plane detected in the field of view 5034 of the one or more cameras (e.g., such that there is no distance (or a minimum distance) between the undersides of the four legs of the virtual chair 5020 and the floor surface 5038). For example, the orientation and/or position of the virtual object relative to the respective plane is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the respective plane corresponds to a horizontal physical surface that may serve as a support surface for a three-dimensional representation of the virtual object in the augmented reality view (e.g., a horizontal desktop for supporting a vase), or the respective plane is a vertical physical surface that may serve as a support surface for a three-dimensional representation of the virtual object in the augmented reality view (e.g., a vertical wall for hanging a virtual picture frame)). In some embodiments, the orientation and/or position of the virtual object is defined by a respective surface or boundary (e.g., bottom surface, bottom boundary point, side surface, and/or side boundary point) of the virtual object. In some embodiments, the anchor plane corresponding to the respective plane is a property in a set of properties of the virtual object, and the anchor plane is specified according to a property of the physical object that the virtual object should represent. In some embodiments, the virtual object is placed at a predefined orientation and/or position relative to a plurality of planes detected in the field of view of the one or more cameras (e.g., a plurality of respective sides of the virtual object are associated with respective planes detected in the field of view of the cameras). In some embodiments, if the horizontal base plane relative to the virtual object is defined as a predefined orientation and/or position of the virtual object, the base plane of the virtual object is displayed on a floor plane detected in the field of view of the camera (e.g., the horizontal base plane of the virtual object is parallel to the floor plane and its distance from the floor plane is zero). In some embodiments, if the vertical back plane relative to the virtual object is defined as a predefined orientation and/or position of the virtual object, the back surface of the virtual object is placed against a wall plane detected in the field of view of the one or more cameras (e.g., the vertical back plane of the virtual object is parallel to the wall plane and its distance from the wall plane is zero). In some embodiments, the virtual object is placed at a fixed distance from the respective plane or at an angle other than zero or right angle relative to the respective plane. Displaying a representation of the virtual object relative to a plane detected in the field of view of the camera (e.g., without further user input to display the virtual object relative to the plane in the field of view of the camera) enhances operability of the device, which in turn reduces power usage and extends battery life of the device by enabling a user to use the device more quickly and efficiently.
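A SceneKit-flavored sketch of this placement behavior appears below: the object's base is rested flush against a detected horizontal plane (zero distance), as in the virtual chair example. The framework choice, the helper name, and the upright-orientation constraint are assumptions for illustration, not the described implementation.

```swift
import ARKit
import SceneKit

// Illustrative only: place a virtual object so that its base rests against a
// detected horizontal plane at the plane's location.
func place(_ objectNode: SCNNode, on plane: ARPlaneAnchor, in sceneView: ARSCNView) {
    // World-space position derived from the plane anchor.
    let planeTransform = plane.transform
    let center = plane.center
    let basePosition = SCNVector3(
        planeTransform.columns.3.x + center.x,
        planeTransform.columns.3.y,                 // height of the plane in world space
        planeTransform.columns.3.z + center.z
    )
    // Lift the node so its bounding-box bottom touches the plane (zero distance).
    let (minBounds, _) = objectNode.boundingBox
    objectNode.position = SCNVector3(basePosition.x,
                                     basePosition.y - minBounds.y * objectNode.scale.y,
                                     basePosition.z)
    objectNode.eulerAngles = SCNVector3(0, objectNode.eulerAngles.y, 0) // keep the object upright
    sceneView.scene.rootNode.addChildNode(objectNode)
}
```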
In some embodiments, in response to detecting the respective plane in the field of view of the one or more cameras, the device, which has one or more tactile output generators 167, outputs (826) a tactile output indicating that the respective plane has been detected in the field of view of the one or more cameras. In some embodiments, a respective tactile output is generated for each plane (e.g., floor surface 5038 and/or tabletop 5046) detected in the field of view of the cameras. In some embodiments, the tactile output is generated upon completion of plane detection. In some embodiments, the tactile output is accompanied by a visual indication of the detected plane in the field of view shown in the second user interface region (e.g., a momentary highlighting of the detected plane). Outputting a tactile output indicating that a plane has been detected in the field of view of the camera provides feedback to the user indicating that the plane has been detected. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing unnecessary additional input for placing virtual objects), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, upon switching from displaying the first user interface region to displaying the second user interface region, the device displays (828) an animation in which the representation of the virtual object transitions (e.g., moves, rotates, resizes, and/or is re-rendered in a different style) to a predefined position in the second user interface region relative to the respective plane (e.g., as shown in figures 5F-5I). In conjunction with displaying the representation of the virtual object at the predefined angle relative to the respective plane (e.g., at the predefined orientation and/or position relative to the respective plane, and with the size, rotation angle, and appearance of its final state in the augmented reality view), the device, which has one or more tactile output generators 167, outputs a tactile output indicating that the virtual object is displayed in the second user interface region at the predefined angle relative to the respective plane. For example, as shown in fig. 5I, in conjunction with displaying the virtual chair 5020 at a predefined angle relative to the floor surface 5038, the device outputs a tactile output 5036. In some embodiments, the generated tactile output is configured to have characteristics (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio waves, etc.) that reflect properties of the virtual object or of the physical object represented by the virtual object, such as: weight (e.g., heavy versus light), material (e.g., metal, cotton, wood, marble, liquid, rubber, glass), size (e.g., large versus small), shape (e.g., thin versus thick, long versus short, round versus sharp, etc.), elasticity (e.g., elastic versus rigid), character (e.g., pretty versus solemn, mild versus strong, etc.), and other attributes. For example, the tactile output uses one or more of the tactile output patterns shown in fig. 4F-4K. In some embodiments, a preset profile comprising one or more changes in one or more of these characteristics over time corresponds to the virtual object (e.g., an emoticon). For example, a "bouncing" tactile output profile is provided for a "smiley" emoticon virtual object. Outputting a tactile output indicating placement of the representation of the virtual object relative to the respective plane provides feedback to the user indicating that the representation of the virtual object has been automatically placed relative to the respective plane. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing unnecessary additional input for placing virtual objects), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (830), the tactile output has a tactile output profile that corresponds to a characteristic of the virtual object (e.g., a simulated physical property such as size, density, mass, and/or material). In some embodiments, the tactile output profile has characteristics (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio waves, etc.) that vary based on one or more characteristics (e.g., weight, material, size, shape, and/or elasticity) of the virtual object. For example, the tactile output uses one or more of the tactile output patterns shown in fig. 4F-4K. In some embodiments, as the size, weight, and/or mass of the virtual object increases, the magnitude and/or duration of the tactile output also increases. In some embodiments, the tactile output pattern is selected based on the virtual material that makes up the virtual object. Outputting a tactile output having a profile corresponding to a characteristic of the virtual object provides the user with feedback conveying information about that characteristic. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide appropriate input, by reducing unnecessary additional input for placing virtual objects, and by providing a user interface that allows the user to perceive characteristics of the virtual object without cluttering the user interface with displayed information about those characteristics), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
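For example, a tactile output profile could be chosen from an object characteristic such as simulated mass, as in the hypothetical mapping below (the struct, the mass breakpoints, and the style choices are all illustrative assumptions, not the described implementation).

```swift
import UIKit

// Hypothetical object traits used to pick a haptic style.
struct VirtualObjectTraits {
    var massInKilograms: Double
}

/// Plays a placement haptic whose profile loosely reflects the object's simulated mass.
func playPlacementHaptic(for traits: VirtualObjectTraits) {
    let style: UIImpactFeedbackGenerator.FeedbackStyle
    switch traits.massInKilograms {
    case ..<1:   style = .light    // e.g. an emoticon or a small ornament
    case ..<20:  style = .medium   // e.g. a lamp
    default:     style = .heavy    // e.g. a chair or larger furniture
    }
    let generator = UIImpactFeedbackGenerator(style: style)
    generator.prepare()
    generator.impactOccurred()
}
```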
In some embodiments, while displaying the representation of the virtual object in the second user interface region, the device detects (832) movement of the device (e.g., lateral movement and/or rotation of the device) that adjusts the field of view 5034 of the one or more cameras (e.g., as shown in fig. 5K-5L), and in response to detecting the movement of the device, while the field of view of the one or more cameras is being adjusted, the device adjusts the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region in accordance with a fixed relationship between the virtual object and the respective plane in the field of view of the one or more cameras. For example, in fig. 5K-5L, as the device 100 moves, the virtual chair 5020 in the second user interface region, which includes the field of view 5034 of the camera, remains at a fixed orientation and position relative to the floor surface 5038. In some embodiments, the virtual object appears stationary and unchanged relative to the surrounding physical environment 5002; that is, as the device moves relative to the surrounding physical environment and the field of view of the one or more cameras changes, the size, position, and/or orientation of the representation of the virtual object on the display changes accordingly. Adjusting the representation of the virtual object according to the fixed relationship between the virtual object and the respective plane (e.g., without further user input to maintain the position of the virtual object relative to the respective plane) enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling a user to use the device more quickly and efficiently.
In some embodiments, (e.g., at a time corresponding to replacing display of at least a portion of the first user interface region with a representation of a field of view of one or more cameras), the device displays (834) an animation (e.g., moving, rotating about one or more axes, and/or zooming) that continuously displays the representation of the virtual object (e.g., virtual chair 5020) while switching from displaying the first user interface region to displaying the second user interface region (e.g., as shown in fig. 5F-5I). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object while displaying the first user interface region to displaying a three-dimensional representation of the virtual object while displaying the second user interface region. In some embodiments, the three-dimensional representation of the virtual object has an orientation that is predefined relative to a current orientation of a portion of the physical environment captured in the field of view of the one or more cameras. In some embodiments, when transitioning to the augmented reality view, the representation of the virtual object is moved, resized, and reoriented to bring the virtual object from an initial position on the display to a new position on the display (e.g., a center of the augmented reality view or another predefined position in the augmented reality view), and during or at the end of the movement, the virtual object is reoriented such that the virtual object is at a fixed angle relative to a plane detected in the field of view of the camera (e.g., a physical surface that can support the representation of the virtual object, such as a vertical wall or a horizontal floor surface). In some embodiments, when an animated transition occurs, the illumination of the virtual object and/or the shadow cast by the virtual object is adjusted (e.g., to match ambient illumination detected in the field of view of one or more cameras). Displaying the animation when the representation of the virtual object switches from displaying the first user interface region to displaying the second user interface region provides feedback to the user indicating that the first input satisfies the first criterion. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the second user interface region on the display, the device detects (836) a second input by a second contact (e.g., contact 5040), wherein the second input includes (optionally, a press or touch input by the second contact that selects the representation of the virtual object, and) movement of the second contact along a first path on the display (e.g., as shown in fig. 5N-5P), and in response to detecting the second input by the second contact, the device moves the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region along a second path that corresponds to (e.g., is the same as, or is constrained by) the first path. In some embodiments, the second contact is different from the first contact and is detected after the first contact lifts off (e.g., contact 5040 in fig. 5N-5P is detected after liftoff of contact 5026 in fig. 5C-5F). In some embodiments, the second contact is the same as the first contact, continuously maintained on the touch-sensitive surface (e.g., the input by contact 5086 satisfies the AR trigger criteria and then moves on touch screen 112 to move virtual light 5084). In some embodiments, a swipe input on the virtual object rotates the virtual object, and movement of the virtual object is optionally constrained by a plane in the field of view of the camera (e.g., the swipe input rotates a representation of a chair on a floor plane in the field of view of the camera). Moving the representation of the virtual object in response to detecting the input provides the user with feedback indicating that the position of the displayed virtual object can be changed in response to user input. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, as the representation of the virtual object moves along the second path, the device adjusts (838) the size of the representation of the virtual object based on the movement of the contact and on the respective plane corresponding to the virtual object (e.g., based on a virtual distance from the representation of the virtual object to the user, to maintain an accurate perspective of the virtual object in the field of view). For example, in fig. 5N-5P, as the virtual chair moves deeper into the field of view 5034 of the camera, away from the device 100 and toward the table 5004, the size of the virtual chair 5020 decreases. Adjusting the size of the representation of the virtual object as it moves along the second path, based on the movement of the contact and the respective plane corresponding to the virtual object (e.g., without further user input to adjust the size of the representation so as to keep it at a realistic size relative to the environment in the field of view of the camera), enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device maintains (840) a first size of the representation of the virtual object (e.g., virtual light 5084) as the representation of the virtual object moves along the second path (e.g., as shown in fig. 5 AI-5 AL), the device detects termination of the second input by the second contact (e.g., including detecting liftoff of the second contact, as shown in fig. 5 AL-5 AM), and in response to detecting termination of the second input by the second contact, the device places the representation of the virtual object at a drop position (e.g., on desktop 5046) in the second user interface region and displays the representation of the virtual object at the drop position at a second size that is different from the first size (e.g., the size of the virtual light 5084 after termination of the input by contact 5086 in fig. 5AM is different from the size of the virtual light 5084 before termination of the input by contact 5086 in fig. 5 AL). For example, the size and viewing perspective of the object do not change while it is dragged by the contact; when the object is dropped at a final position in the augmented reality view, the object is displayed with a size and viewing perspective determined based on the physical position in the physical environment corresponding to the drop position of the virtual object shown in the field of view of the camera, such that the object has the second size in accordance with a determination that the drop position is a first position in the field of view of the camera, and the object has a third size, different from the second size, in accordance with a determination that the drop position is a second position in the field of view of the camera, where the second size and the third size are selected based on the distance between the drop position and the one or more cameras. Displaying the representation of the virtual object with a changed size in response to detecting termination of the second input that moves the virtual object (e.g., without requiring further user input to resize the virtual object to maintain a realistic size relative to the environment in the field of view of the camera) enhances operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
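A sketch of this drop behavior (keep the dragged size, then adopt a distance-appropriate size at the drop position) could look like the following. The raycast-based hit test and the reset-to-real-world-scale step are assumptions about one possible implementation; the perspective projection then makes the object appear smaller the farther the drop position is from the cameras.

```swift
import ARKit
import SceneKit

// Illustrative drop handling: find the physical position corresponding to the
// drop position on screen and place the object there at its real-world scale.
func dropObject(_ node: SCNNode, at screenPoint: CGPoint, in sceneView: ARSCNView) {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let hit = sceneView.session.raycast(query).first else { return }

    // World position on the detected plane corresponding to the drop position.
    let t = hit.worldTransform.columns.3
    node.position = SCNVector3(t.x, t.y, t.z)

    // Restore the object's real-world scale after the drag ends.
    node.scale = SCNVector3(1, 1, 1)
}
```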
In some embodiments, in accordance with a determination that movement of the second contact along the first path on the display satisfies a second criterion (e.g., at the end of the first path, the contact is within a threshold distance of, or beyond, an edge (e.g., a bottom edge, a top edge, and/or a side edge) of the display or of the second user interface region), the device (842): stops displaying the second user interface region, which includes the representation of the field of view of the one or more cameras, and redisplays the (full) first user interface region with the representation of the virtual object (e.g., if a portion of the first user interface region was previously displayed concurrently with the second user interface region, the device displays the full first user interface region after the second user interface region is no longer displayed). For example, in response to movement of the contact 5054 dragging the virtual chair 5020 to the edge of the touchscreen 112, as shown in fig. 5V-5X, the field of view 5034 of the camera stops being displayed and the full instant messaging user interface 5008 is redisplayed, as shown in fig. 5Y-5 AD. In some embodiments, as the contact approaches an edge of the display or an edge of the second user interface region, the second user interface region fades out (e.g., as shown in fig. 5X-5Y) and/or the previously hidden or blocked portion of the first user interface region fades in (e.g., as shown in fig. 5Z-5 AA). In some implementations, the gesture for transitioning from the non-AR view (e.g., the first user interface region) to the AR view (e.g., the second user interface region) is the same as the gesture for transitioning from the AR view to the non-AR view. For example, a drag gesture on the virtual object that exceeds a threshold location in the currently displayed user interface (e.g., comes within a threshold distance of, or exceeds, a boundary of the currently displayed user interface region) causes a transition from the currently displayed user interface region to the corresponding user interface region (e.g., a transition from displaying the first user interface region to displaying the second user interface region, or, alternatively, a transition from displaying the second user interface region to displaying the first user interface region). In some embodiments, a visual indication is shown before the first/second criteria are satisfied (e.g., the currently displayed user interface region fades out and the corresponding user interface fades in), and the transition is reversible if the input continues and the first/second criteria are not satisfied before termination of the input (e.g., liftoff of the contact) is detected. Redisplaying the first user interface in response to detecting input that satisfies the input criteria provides additional control options without cluttering the second user interface with additional displayed controls (e.g., controls for displaying the first user interface from the second user interface). Providing additional control options without cluttering the second user interface with additional displayed controls enhances operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
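The edge test in this second criterion can be as simple as checking whether the drag location has left an inset interior of the region; the sketch below is illustrative and the 44-point threshold is an arbitrary assumption. A symmetric check against the first user interface region's boundary would support the reverse transition mentioned above.

```swift
import UIKit

/// Returns true when the drag has come within `edgeThreshold` points of (or
/// moved beyond) an edge of the second user interface region.
func dragShouldDismissCameraView(at location: CGPoint,
                                 in regionBounds: CGRect,
                                 edgeThreshold: CGFloat = 44) -> Bool {
    let interior = regionBounds.insetBy(dx: edgeThreshold, dy: edgeThreshold)
    return !interior.contains(location) // near or beyond an edge
}
```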
In some embodiments, at a time corresponding to redisplaying the first user interface region, the device displays (844) an animated transition (e.g., movement, rotation about one or more axes, and/or zooming) from displaying a representation of the virtual object in the second user interface region to displaying a representation of the virtual object in the first user interface region (e.g., as illustrated by the animation of the virtual chair 5020 in fig. 5AB through 5 AD). Displaying an animated transition from displaying a representation of the virtual object in the second user interface to displaying a representation of the virtual object in the first user interface (e.g., without requiring further user input to reposition the virtual object in the first user interface) enhances operability of the device, which in turn reduces power usage and extends battery life of the device by enabling a user to use the device more quickly and efficiently.
In some embodiments, as the second contact moves along the first path, the device changes (846) visual appearance of (e.g., highlights, marks, outlines, and/or otherwise visually changes appearance of) one or more respective planes identified in the field of view of the one or more cameras that correspond to the current location of the contact. For example, as the contact 5042 drags the virtual chair 5020 along a path as shown by arrows 5042 and 5044 in fig. 5O-5P, the floor surface 5038 is highlighted (e.g., prior to movement of the contact 5042 as compared to fig. 5M). In some implementations, in accordance with a determination that the contact is at a location corresponding to a first plane detected in a field of view of the camera, the first plane is highlighted. In accordance with a determination that the contact has moved to a location corresponding to a second plane detected in the field of view of the camera (e.g., as shown in fig. 5S-5U), highlighting the first plane (e.g., floor surface 5038) is stopped and the second plane (e.g., desktop 5046) is highlighted. In some embodiments, multiple planes are highlighted simultaneously. In some embodiments, a first plane of the plurality of visually altered planes is visually altered in a manner different from the manner in which the other planes are visually altered to indicate that the contact is at a location corresponding to the first plane. Changing the visual appearance of one or more respective planes identified in the field of view of the camera provides feedback to the user indicating that the plane has been identified (e.g., a virtual object may be positioned relative to the plane). Providing improved visual feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
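Identifying which detected plane corresponds to the contact's current location, so that only that plane is highlighted, could be done with a raycast from the touch point, as in this illustrative sketch (the helper name and the highlighting strategy described in the comment are assumptions).

```swift
import ARKit

/// Returns the detected plane anchor, if any, under the contact's current
/// on-screen location.
func planeUnderContact(at screenPoint: CGPoint, in sceneView: ARSCNView) -> ARPlaneAnchor? {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .any) else { return nil }
    return sceneView.session.raycast(query).first?.anchor as? ARPlaneAnchor
}

// A caller might brighten the visualization node associated with the returned
// anchor and restore the appearance of the previously highlighted plane.
```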
In some embodiments, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact satisfies third (e.g., staging user interface display) criteria (e.g., criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, or a hard press with an intensity above a predefined intensity threshold), the device displays (848) a third user interface region on the display that includes a representation of the virtual object (e.g., a 3D model of the virtual object that replaces a 2D image of the virtual object), replacing the display of at least a portion of the first user interface region. In some embodiments, while displaying the staging user interface (e.g., the staging user interface 6010 as described with reference to fig. 6I), the device updates the appearance of the representation of the virtual object based on detected input corresponding to the staging user interface (e.g., as described in more detail below with reference to method 900). In some embodiments, when another input is detected while the virtual object is displayed in the staging user interface and that input satisfies the criteria for transitioning to display of the second user interface region, the device replaces the display of the staging user interface with the second user interface region while continuously displaying the virtual object. More details are described with respect to method 900. Displaying the third user interface in accordance with a determination that the first input satisfies the third criteria provides additional control options without cluttering the first user interface with additional displayed controls (e.g., controls for displaying the third user interface from the first user interface). Providing additional control options without cluttering the first user interface with additional displayed controls enhances operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that a first input by contact (e.g., a swipe input corresponding to scrolling the first user interface region or a tap input corresponding to a request to display a web page or email corresponding to content in the first user interface region) does not satisfy a first (e.g., AR trigger) criterion, the device maintains (850) display of the first user interface region without replacing display of at least a portion of the first user interface region with a representation of a field of view of one or more cameras (e.g., as described with reference to fig. 6B-6C). Using the first criteria to determine whether to maintain the display of the first user interface area or whether to continuously display the representation of the virtual object when replacing the display of at least a portion of the first user interface area with the field of view of the one or more cameras enables a plurality of different types of operations to be performed in response to the input. Enabling a plurality of different types of operations to be performed in response to an input (e.g., by replacing display of at least a portion of the user interface with a field of view of one or more cameras, or by maintaining display of the first user interface region without replacing display of at least a portion of the first user interface region with a representation of the field of view of one or more cameras) increases the efficiency with which a user can perform such operations, thereby enhancing operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order of operations that have been described in fig. 8A through 8E is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 900 and 1000) also apply in a similar manner to method 800 described above with respect to fig. 8A-8E. For example, the contact, input, virtual object, user interface region, intensity threshold, tactile output, field of view, movement, and/or animation described above with reference to method 800 optionally has one or more of the features of the contact, input, virtual object, user interface region, intensity threshold, tactile output, field of view, movement, and/or animation described herein with reference to other methods described herein (e.g., methods 900, 1000, 16000, 17000, 18000, 19000, and 20000). For the sake of brevity, these details are not repeated here.
Fig. 9A-9D are flow diagrams illustrating a method 900 for displaying a first representation of a virtual object in a first user interface area, displaying a second representation of the virtual object in a second user interface area, and displaying a third representation of the virtual object with a representation of a field of view of one or more cameras, according to some embodiments. Method 900 is performed at an electronic device (e.g., device 300 in fig. 3 or portable multifunction device 100 in fig. 1A) having a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and touch-sensitive surface). In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 900 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 900 involves detecting input by contact at a touch-sensitive surface of a device for displaying a representation of a virtual object in a first user interface (e.g., a two-dimensional graphical user interface). In response to the first input, the device uses the criteria to determine whether to display the second representation of the virtual object in a second user interface (e.g., a staging user interface in which the three-dimensional representation of the virtual object may be moved, resized, and/or reoriented). While displaying the second representation of the virtual object in the second user interface, in response to the second input, the device changes a display property of the second representation of the virtual object based on the second input or displays a third representation of the virtual object in a third user interface that includes fields of view of one or more cameras of the device. Enabling multiple different types of operations to be performed in response to input (e.g., by changing display properties of virtual objects or displaying virtual objects in a third user interface) increases the efficiency with which a user can perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
The device displays (902) a first representation of a virtual object (e.g., a graphical representation of a three-dimensional object, such as virtual chair 5020, virtual light 5084, a shoe, a piece of furniture, a hand tool, an ornament, a person, an emoticon, a game character, etc.) in a first user interface region (e.g., a two-dimensional graphical user interface or a portion thereof, such as a browsable list of furniture images or an image containing one or more selectable objects) on the display 112. For example, the first user interface region is the instant messaging user interface 5008 shown in fig. 6A. In some embodiments, the first user interface region includes a background other than an image of the physical environment surrounding the device (e.g., the background of the first user interface region is a preselected background color/pattern or a background image that is different from output images simultaneously captured by the one or more cameras and different from the live content in the field of view of the one or more cameras).
While the first representation of the virtual object is displayed in the first user interface region on the display, the device detects (904) a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display (e.g., the first contact is detected on the first representation of the virtual object on the touch-sensitive surface, or the first contact is detected on an affordance (e.g., toggle control 6018) that is displayed in the first user interface region concurrently with the first representation of the virtual object and that is configured, when invoked by the first contact, to trigger display of an AR view (e.g., field of view 6036 of the camera) and/or a staging user interface 6010 that includes a representation of the virtual object (e.g., virtual chair 5020)). For example, the first input is an input by contact 6006 as described with reference to fig. 6E-6I.
In response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact satisfies first (e.g., staging trigger) criteria (e.g., criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, a down-touch of the contact, an initial movement of the contact, or another type of predefined input gesture, and associated with triggering activation of the cameras and/or triggering detection of planes in the field of view of the cameras), the device displays (906) a second representation of the virtual object in a second user interface region that is different from the first user interface region (e.g., the second user interface region is the staging user interface 6010, which does not include the field of view of the cameras and includes a simulated three-dimensional space in which a three-dimensional representation of the virtual object can be manipulated (e.g., rotated or moved) in response to user input). For example, in fig. 6E-6H, in accordance with a determination that the input by contact 6006 has a characteristic intensity that increases above the deep press intensity threshold IT_D, the virtual chair object 5020 is displayed in the staging user interface 6010 (e.g., as shown in fig. 6I), which is different from the instant messaging user interface 5008 (e.g., as shown in fig. 6E).
In some embodiments, in response to detecting the first input, and in accordance with a determination that the first input satisfies the staging trigger criteria, the device displays a first animated transition that shows the three-dimensional representation of the virtual object moving from a first orientation as shown in the first user interface region (e.g., the first orientation of the virtual chair 5020 as shown in the instant messaging user interface 5008 in fig. 6E) and being reoriented to a second orientation (e.g., a second orientation of the virtual chair 5020 determined based on the stage plane 6014, as shown in fig. 6I), wherein the second orientation is determined based on a virtual plane on the display whose orientation is independent of the current orientation of the device relative to the physical environment surrounding the device. For example, the three-dimensional representation of the virtual object has a predefined orientation and/or distance relative to the plane (e.g., based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface), and, when transitioning to the staging view (e.g., the staging user interface 6010), the three-dimensional representation is moved, resized, and reoriented so that the virtual object reaches a new location on the display (e.g., the center of the virtual stage 6014) from its original location on the display, and, during the movement or at the end of the movement, the three-dimensional representation is reoriented so that the virtual object is at a fixed angle relative to the predefined staging virtual plane 6014, which is defined independently of the physical environment surrounding the device.
While the second representation of the virtual object is displayed in the second user interface region, the device detects (908) a second input (e.g., an input by contact 6034 as shown in fig. 6Q-6T). In some embodiments, detecting the second input comprises: detecting one or more second contacts on the touch screen at locations corresponding to the second representation of the virtual object; detecting a second contact on an affordance that is configured, when invoked by the second contact, to trigger display of an augmented reality view of the physical environment surrounding the device; detecting movement of the second contact; and/or detecting liftoff of the second contact. In some embodiments, the second input is a continuation of the first input by the same contact (e.g., the second input is an input by contact 6034 as shown in fig. 6Q-6T, with the contact not lifted off, after the first input by contact 6006 as shown in fig. 6E-6I), an independent input by a completely different contact (e.g., the second input is an input by contact 6034 as shown in fig. 6Q-6T, after liftoff of the contact, following the first input by contact 6006 as shown in fig. 6E-6I), or a continuation of an input by another contact (e.g., the second input is an input by contact 6006 as shown in fig. 6J-6L after the first input by contact 6006 as shown in fig. 6E-6I). For example, the second input may be a continuation of a swipe input, a second tap input, a second press input, a press input subsequent to the first input, a second touch-hold input, a sustained touch continuing from the first input, and so on.
In response to detecting the second input (910): in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region (e.g., without transitioning to the augmented reality view), the device changes display properties of the second representation of the virtual object within the second user interface region based on the second input, and in accordance with a determination that the second input corresponds to a request to display the virtual object in the augmented reality environment, the device displays a third representation of the virtual object having a representation of the field of view of the one or more cameras (e.g., the device displays a third user interface that includes the field of view 6036 of the one or more cameras, and places the three-dimensional representation of the virtual object (e.g., the virtual chair 5020) on a virtual plane (e.g., the floor surface 5038) detected within the field of view of the camera that corresponds to a physical plane (e.g., the floor) in the device's surrounding physical environment 5002).
In some embodiments, the second input corresponding to the request to manipulate the virtual object in the second user interface region is a pinch or swipe by a second contact at a location on the touch-sensitive surface corresponding to the second representation of the virtual object in the second user interface region. For example, the second input is an input through contact 6006 as shown in fig. 6J-6L or through contacts 6026 and 6030 as shown in fig. 6N-6O.
In some embodiments, the second input corresponding to the request to display the virtual object in the augmented reality environment is a tap input, a press input, or a touch hold or press input followed by a drag input at or from a location on the touch-sensitive surface corresponding to the representation of the virtual object in the second user interface area. For example, the second input is a deep press input by the contact 6034 as shown in fig. 6Q to 6T.
In some embodiments, changing the display properties of the second representation of the virtual object within the second user interface region based on the second input includes rotating it about one or more axes (e.g., by swiping vertically and/or horizontally), resizing it (e.g., by pinching or spreading), tilting it about one or more axes (e.g., by tilting the device), changing the viewing angle (e.g., by moving the device horizontally, which in some embodiments also provides input used to analyze the field of view of the one or more cameras to detect one or more planes in the field of view), and/or changing the color of the representation of the virtual object. For example, changing the display properties of the second representation of the virtual object includes rotating the virtual chair 5020 in response to a horizontal swipe gesture by contact 6006 as shown in fig. 6J-6K; rotating the virtual chair 5020 in response to a diagonal swipe gesture by contact 6006 as shown in fig. 6K-6L; or increasing the size of the virtual chair 5020 in response to a depinch (spread) gesture by contacts 6026 and 6030 as shown in fig. 6N-6O. In some embodiments, the amount of change in a display attribute of the second representation of the virtual object corresponds to the amount of change in an attribute of the second input (e.g., the distance or speed of movement of the contact, the intensity of the contact, the duration of the contact, etc.).
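The staging-view manipulations described above map naturally onto standard gesture recognizers; the sketch below pairs a pan gesture with rotation and a pinch gesture with resizing of a 3D node. The controller class, the gain factors, and the gesture-to-axis mapping are hypothetical, not the described implementation.

```swift
import UIKit
import SceneKit

// Hypothetical controller: a swipe (pan) rotates the staged 3D representation
// and a pinch resizes it. The caller must retain this object, since gesture
// recognizers do not retain their targets.
final class StagingGestureController: NSObject {
    private let objectNode: SCNNode

    init(objectNode: SCNNode, attachingTo view: UIView) {
        self.objectNode = objectNode
        super.init()
        view.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(didPan)))
        view.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(didPinch)))
    }

    @objc private func didPan(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: gesture.view)
        objectNode.eulerAngles.y += Float(translation.x) * 0.01 // horizontal swipe: rotate about vertical axis
        objectNode.eulerAngles.x += Float(translation.y) * 0.01 // vertical component: tilt
        gesture.setTranslation(.zero, in: gesture.view)
    }

    @objc private func didPinch(_ gesture: UIPinchGestureRecognizer) {
        let factor = Float(gesture.scale)
        objectNode.scale = SCNVector3(objectNode.scale.x * factor,
                                      objectNode.scale.y * factor,
                                      objectNode.scale.z * factor)
        gesture.scale = 1 // apply the scale incrementally
    }
}
```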
In some embodiments, in accordance with a determination that the second input corresponds to a request to display the virtual object in the augmented reality environment (e.g., in the field of view 6036 of the one or more cameras, as described with reference to fig. 6T), the device displays a second animated transition showing the three-dimensional representation of the virtual object being reoriented from its respective orientation relative to the virtual plane on the display (e.g., the orientation of the virtual chair 5020 shown in fig. 6R) to a third orientation (e.g., the orientation of the virtual chair 5020 shown in fig. 6T) that is determined based on the current orientation of the portion of the physical environment captured in the field of view of the one or more cameras. For example, the three-dimensional representation of the virtual object is reoriented so that it is at a fixed angle relative to a predefined plane (e.g., floor surface 5038) identified in the live image of the physical environment 5002 captured in the field of view of the camera (e.g., a physical surface that can support the three-dimensional representation of the virtual object, such as a vertical wall or a horizontal floor surface). In some embodiments, in at least one respect, the orientation of the virtual object in the augmented reality view is constrained by the orientation of the virtual object in the staging user interface. For example, when the virtual object transitions from the staging user interface to the augmented reality view, the angle of rotation of the virtual object about at least one axis of the three-dimensional coordinate system is maintained (e.g., the rotation of the virtual chair 5020 applied as described with reference to fig. 6J-6K is maintained through the transition described with reference to fig. 6Q-6U). In some embodiments, the light source projected on the representation of the virtual object in the second user interface region is a virtual light source. In some embodiments, the third representation of the virtual object in the third user interface region is illuminated by real-world light sources (e.g., as detected in and/or determined from the field of view of the one or more cameras).
In some embodiments, the first criteria include (912) a criterion that is met when (e.g., as determined in accordance with whether) the first input includes a tap input by the first contact at a location on the touch-sensitive surface that corresponds to the virtual object indicator 5022 (e.g., an indicator, such as an icon, overlaid on and/or displayed in proximity to the representation of the virtual object). For example, the virtual object indicator 5022 indicates that the virtual object corresponding to the indicator can be viewed in a staging view (e.g., staging user interface 6010) and in an augmented reality view (e.g., field of view 6036 of the camera) (e.g., as described in more detail below with reference to method 1000). Determining whether to display the second representation of the virtual object in the second user interface region based on whether the first input includes a tap input enables a plurality of different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, the first criteria include (914) a criterion that is met when (e.g., as determined in accordance with whether) the first contact remains at a location on the touch-sensitive surface corresponding to the first representation of the virtual object, with less than a threshold amount of movement, for at least a predefined threshold amount of time (e.g., a long-press time threshold). For example, a touch-hold input satisfies the first criteria. In some embodiments, the first criteria include a criterion requiring that, after the first contact has remained at a location on the touch-sensitive surface corresponding to the representation of the virtual object with less than a threshold amount of movement for at least the predefined threshold amount of time, the first contact then moves in order to satisfy the criteria. For example, a touch-hold input followed by a drag input satisfies the first criteria. Determining whether to display the second representation of the virtual object in the second user interface region in accordance with whether the contact remains at a location on the touch-sensitive surface corresponding to the representation of the virtual object, with less than a threshold amount of movement, for at least a predefined amount of time enables a plurality of different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first criteria include (916) a criterion that is met when (e.g., as determined in accordance with whether) the characteristic intensity of the first contact increases above a first intensity threshold (e.g., the deep press intensity threshold IT_D). For example, as described with reference to fig. 6Q-6T, the criterion is met when the characteristic intensity of the contact 6034 increases above the deep press intensity threshold IT_D, as indicated by the intensity level meter 5028. In some embodiments, in accordance with a determination that the contact satisfies criteria for recognizing another type of gesture (e.g., a tap), the device also performs another predefined function other than triggering the second (e.g., staging) user interface while maintaining the display of the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., a forceful tap input that reaches an intensity above the threshold intensity before liftoff of the contact is detected within a tap time threshold of the initial down-touch of the contact). In some embodiments, the first criteria include a criterion requiring that the first contact move after the intensity of the first contact exceeds the first intensity threshold in order to satisfy the criteria. For example, a press input followed by a drag input satisfies the first criteria. Determining whether to display the virtual object in the second user interface region based on whether the characteristic intensity of the contact increases above the first intensity threshold enables a plurality of different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact satisfies a second criterion (e.g., an interface scrolling criterion), the device scrolls (918) the first user interface region (and the representation of the virtual object) in a direction corresponding to the direction of movement of the first contact (e.g., without the first criteria being met, and while forgoing displaying the representation of the virtual object in the second user interface region), wherein the second criterion requires that the first input include movement of the first contact across the touch-sensitive surface by more than a threshold distance (e.g., the second criterion is satisfied by a swipe gesture, such as a vertical or horizontal swipe gesture). For example, as described with reference to figs. 6B-6C, the upward vertical swipe gesture by contact 6002 scrolls instant messaging user interface 5008 and virtual chair 5020 upward. In some embodiments, the first criteria further require that the first input include movement of the first contact by more than a threshold distance in order to satisfy the first criteria, and the device determines whether the first input satisfies the first criteria (e.g., a staging trigger criterion) or the second criterion (e.g., an interface scroll criterion) based on whether an initial portion of the first input (e.g., a touch-hold or press on the representation of the virtual object) satisfies the object selection criteria. In some embodiments, a swipe input initiated at a touch location other than the location of the virtual object or of the AR icon of the virtual object satisfies the second criterion. Determining whether to scroll the first user interface region in response to the first input based on whether the first input satisfies the second criterion enables a plurality of different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact satisfies third (e.g., AR trigger) criteria, the device displays (920) a third representation of the virtual object with a representation of the field of view of the one or more cameras. For example, as described with reference to figs. 6AD-6AG, a long-touch input by the contact 6044 and a subsequent upward drag input by the contact 6044 dragging the virtual chair 5020 cause the virtual chair 5020 to be displayed with the field of view 6036 of the camera.
In some embodiments, the third criteria include a criterion that is determined to be met, for example, according to: the one or more cameras are in an active state; the device orientation falls within a defined range (e.g., a range of defined angles of rotation about one or more axes from a defined original orientation); the input by the contact includes a selection input (e.g., a long touch) and a subsequent drag input (movement of the contact that moves a virtual object on the display) (e.g., movement into a range of a predetermined distance from an edge of the display); the characteristic intensity of the contact increases above an AR trigger intensity threshold (e.g., the light press intensity threshold ITL or the deep press intensity threshold ITD); the duration of the contact increases above an AR trigger duration threshold (e.g., a long press threshold); and/or the distance of contact movement increases above an AR trigger distance threshold (e.g., a long swipe threshold). In some embodiments, a control (e.g., toggle control 6018) for displaying a representation of the virtual object in the second user interface region (e.g., staging user interface 6010) is displayed in a user interface (e.g., a third user interface region replacing at least a portion of the second user interface region) that includes the representation of the virtual object and the field of view 6036 of the one or more cameras.
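As an illustration of how such a composite trigger might be evaluated, here is a hedged sketch; the InputSnapshot structure and every threshold value are assumptions made for this example, not values from the description above:

```swift
import UIKit

// One possible shape for the AR-trigger predicate combining the listed conditions.
struct InputSnapshot {
    var camerasActive: Bool
    var devicePitchDegrees: Double        // device orientation relative to the ground
    var isHoldThenDrag: Bool              // selection input followed by a drag
    var normalizedIntensity: CGFloat      // 0...1, relative to maximum possible force
    var contactDuration: TimeInterval
    var dragDistance: CGFloat             // in points
}

func meetsARTriggerCriteria(_ input: InputSnapshot) -> Bool {
    let orientationOK = input.devicePitchDegrees < 30        // defined range (assumed)
    let intensityOK   = input.normalizedIntensity > 0.5      // AR trigger intensity threshold (assumed)
    let durationOK    = input.contactDuration > 0.5          // long-press threshold (assumed)
    let distanceOK    = input.dragDistance > 100             // long-swipe threshold (assumed)
    return input.camerasActive && orientationOK &&
        (input.isHoldThenDrag || intensityOK || durationOK || distanceOK)
}
```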
In some embodiments, when transitioning directly from the first user interface region (e.g., a non-AR, non-staging, touchscreen UI view) to the third user interface region (e.g., the augmented reality view), the device displays an animated transition showing the three-dimensional representation of the virtual object being reoriented from a respective orientation represented in the touchscreen UI on the display (e.g., the non-AR, non-staging view) to an orientation predefined relative to the current orientation of the portion of the physical environment captured in the field of view of the one or more cameras. For example, as shown in figs. 6AD-6AJ, when transitioning directly from the first user interface region (e.g., the instant messaging user interface 5008, as shown in fig. 6AD) to the third user interface region (e.g., the augmented reality user interface including the field of view 6036 of the camera, as shown in fig. 6AJ), the virtual chair 5020 changes from the first orientation, as shown in figs. 6AD-6AH, to a predefined orientation (e.g., as shown in fig. 6AJ) relative to the floor surface 5038 in the physical environment 5002 as captured in the field of view 6036 of the camera. For example, the three-dimensional representation of the virtual object is reoriented such that it is at a fixed angle relative to a predefined plane identified in the real-time image of the physical environment 5002 (e.g., a physical surface that can support the three-dimensional representation of the virtual object, such as a vertical wall or a horizontal floor surface (e.g., floor surface 5038)). Determining whether to display the third representation of the virtual object with the field of view of the camera in response to the first input, based on whether the first input meets the third criteria, enables a plurality of different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the first input by the first contact, the device determines (922), by one or more device orientation sensors, a current device orientation of the device (e.g., an orientation relative to the physical environment surrounding the device), and the third criteria (e.g., AR trigger criteria) require the current device orientation to be within a first orientation range in order to satisfy the third criteria (e.g., the third criteria are satisfied when an angle between the device and the ground is less than a threshold angle, indicating that the device is sufficiently parallel to the ground to bypass the staging state). In some embodiments, the first criteria (e.g., stage trigger criteria) require that the current device orientation be within a second orientation range in order to satisfy the first criteria (e.g., the first criteria are satisfied when the angle between the device and the ground is between the threshold angle and 90 degrees, indicating that the device is sufficiently vertical relative to the ground to enter the staging state first). Determining whether to display the third representation of the virtual object with the field of view of the camera in response to the first input, based on whether the device orientation is within the respective orientation range, enables a plurality of different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
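One way to realize the orientation-range check, sketched here with Core Motion under the assumption of a 30° cut-off (this description leaves the threshold angle unspecified):

```swift
import CoreMotion

// Sketch: the device's pitch relative to the ground decides whether the AR view
// (third criteria) or the staging view (first criteria) is the destination.
let motionManager = CMMotionManager()

func startOrientationRouting() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        let pitchDegrees = abs(attitude.pitch * 180 / .pi)
        if pitchDegrees < 30 {
            // Device roughly parallel to the ground: the AR trigger criteria can be met.
        } else {
            // Device closer to vertical: route to the staging user interface first.
        }
    }
}
```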
In some embodiments, at least one display attribute (e.g., size, shape, respective angles about yaw, pitch, and roll axes, etc.) of the second representation of the virtual object is applied (924) to a third representation of the virtual object. For example, as described with reference to fig. 6Q-6U, when the third representation of the virtual chair 5020 is displayed in an augmented reality view that includes the field of view 6036 of the camera (e.g., as shown in fig. 6U), the rotation of the second representation of the virtual chair 5020 as applied in the staging user interface 6010 as described with reference to fig. 6J-6K is maintained. In some embodiments, in at least one aspect, the orientation of the virtual object in the augmented reality view is constrained by the orientation of the virtual object in the staging user interface. For example, when transitioning the virtual object from the staging view to the augmented reality view, the angle of rotation of the virtual object about at least one axis (e.g., yaw, pitch, and roll axes) of the predefined three-dimensional coordinate system is maintained. In some embodiments, the at least one display attribute of the second representation of the virtual object is only applied to the third representation of the virtual object if the second representation of the virtual object has been manipulated in some manner (e.g., changed in size, shape, texture, orientation, etc.) by the user input. In other words, changes made in the staging view are preserved when objects are shown in the augmented reality view or used to constrain the appearance of objects in the augmented reality view in one or more ways. Applying at least one display attribute of the second representation of the virtual object to the third representation of the virtual object (e.g., without further user input to apply the same display attribute to the second representation of the virtual object and the third representation of the virtual object) enhances operability of the device (e.g., by allowing a user to apply a rotation to the second virtual object when a large version of the virtual object is displayed in the second user interface and to the third representation of the displayed virtual object having a representation of the field of view of the one or more cameras), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
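A hedged illustration of carrying one such display attribute, a user-applied yaw rotation from the staging view, over to the AR representation; the function shape and the choice to preserve only yaw are assumptions made for illustration:

```swift
import simd

// Compose the plane-anchored AR transform with the rotation chosen while staging,
// so the angle about the up-axis is maintained across the transition.
func arTransform(anchorTransform: simd_float4x4, stagedYaw: Float) -> simd_float4x4 {
    let yawRotation = simd_float4x4(simd_quatf(angle: stagedYaw, axis: SIMD3<Float>(0, 1, 0)))
    return anchorTransform * yawRotation
}
```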
In some embodiments, in response to detecting at least an initial portion of the first input by the first contact (926) (e.g., including detecting the first contact; or detecting input by the first contact that satisfies respective predefined criteria but does not satisfy the first criteria; or detecting input that satisfies the first criteria): the device activates one or more cameras (e.g., activates the cameras without immediately displaying the camera's field of view on the display), and the device analyzes the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras. In some embodiments, the display of the field of view 6036 of the one or more cameras is delayed after the one or more cameras are activated (e.g., until a second input corresponding to a request to display a virtual object in the augmented reality environment is detected, until at least one field of view plane is detected, or until a field of view plane corresponding to an anchor plane defined for the virtual object is detected). In some implementations, the field of view 6036 of the one or more cameras is displayed at a time corresponding to activation of the one or more cameras (e.g., while the one or more cameras are activated). In some embodiments, the field of view 6036 of the one or more cameras is displayed before the plane is detected in the field of view of the one or more cameras (e.g., in response to detecting the first input by the contact and according to the determination, the field of view of the one or more cameras is displayed). Activating the camera and detecting one or more planes of the field of view by analyzing the field of view of the camera in response to detecting an initial portion of the first input (e.g., prior to displaying the third representation of the virtual object with the representation of the field of view of the one or more cameras) improves the efficiency of the device (e.g., by reducing the amount of time required to determine the position and/or orientation of the third representation of the virtual object relative to the corresponding plane in the field of view of the camera), which in turn reduces power usage and extends the battery life of the device by enabling a user to more quickly and efficiently use the device.
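A minimal sketch of this priming step, assuming ARKit as the underlying framework (the description above is framework-agnostic): the session starts the cameras and plane detection in response to the initial portion of the input, before the camera feed is necessarily shown.

```swift
import ARKit

// Activate the cameras and begin analyzing their field of view for planes.
final class PlanePrimer: NSObject, ARSessionDelegate {
    let session = ARSession()
    var onPlaneDetected: ((ARPlaneAnchor) -> Void)?

    func primeInResponseToInitialInput() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(configuration)        // cameras start; the feed need not be displayed yet
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            onPlaneDetected?(plane)       // e.g., now reveal the field of view and place the object
        }
    }
}
```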
In some embodiments, in response to detecting a respective plane (e.g., floor surface 5038) in the field of view of the one or more cameras, the device with one or more tactile output generators 167 outputs (928) a tactile output indicating that the respective plane was detected in the field of view of the one or more cameras. In some implementations, the field of view 6036 may be shown before the field of view plane is identified. In some embodiments, additional user interface controls and/or icons are overlaid on the real world image in the field of view after at least one field of view plane is detected or after all field of view planes are identified. Outputting a haptic output indicating that a plane is detected in the field of view of the camera provides feedback to the user indicating that the plane has been detected. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing unnecessary additional input for placing virtual objects), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
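For example, the tactile output on plane detection could be produced with a UIKit feedback generator, as in the following sketch; the choice of generator and style is an assumption:

```swift
import UIKit

// Emit a haptic confirming that a plane was detected in the cameras' field of view.
let planeFeedback = UIImpactFeedbackGenerator(style: .medium)

func planeWasDetected() {
    planeFeedback.prepare()
    planeFeedback.impactOccurred()
}
```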
In some embodiments, the size of the third representation of the virtual object on the display is determined (930) based on the simulated real-world dimensions of the virtual object and a distance between the one or more cameras and a location in the field of view 6036 of the one or more cameras that has a fixed spatial relationship to the third representation of the virtual object (e.g., the plane to which the virtual object is attached, such as the floor surface 5038). In some embodiments, the size of the third representation of the virtual object is constrained such that the proportion of the size of the third representation of the virtual object relative to the field of view of the one or more cameras is maintained. In some embodiments, one or more physical dimension parameters (e.g., length, width, depth, and/or radius) are defined for the virtual object. In some embodiments, in the second user interface (e.g., the staging user interface), the virtual object is not constrained by its defined physical dimension parameters (e.g., the size of the virtual object may change in response to user input). In some embodiments, the third representation of the virtual object is constrained by its defined dimension parameters. When user input is detected to change the position of the virtual object in the augmented reality view relative to the physical environment represented in the field of view, when user input is detected to change the zoom level of the field of view, or when the device is moved relative to the physical environment surrounding the device, the appearance (e.g., size, viewing perspective) of the virtual object changes in a manner that is constrained by: a fixed spatial relationship between the virtual object and the physical environment (e.g., as represented by a fixed spatial relationship between an anchor plane of the virtual object and a plane in the augmented reality environment), and a fixed ratio based on the predefined dimension parameters of the virtual object and the actual size of the physical environment. Determining the size of the third representation of the virtual object based on the simulated real-world dimensions of the virtual object and the distance between the one or more cameras and the location in the field of view of the cameras (e.g., without further user input to resize the third representation of the virtual object to simulate the real-world dimensions of the virtual object) enhances the operability of the device, and in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
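The size relationship can be illustrated with a simple pinhole-camera model, sketched below; the focal-length parameter and the example numbers are assumptions used only to show the proportionality, not values from this description:

```swift
import CoreGraphics

// On-screen height scales with the simulated real-world size and inversely with
// the distance between the cameras and the object's anchor location.
func onScreenHeight(realWorldHeightMeters: CGFloat,
                    distanceFromCameraMeters: CGFloat,
                    focalLengthPixels: CGFloat) -> CGFloat {
    guard distanceFromCameraMeters > 0 else { return 0 }
    return realWorldHeightMeters * focalLengthPixels / distanceFromCameraMeters
}

// Example: a 1 m tall chair, 2 m from the camera, with a 1500 px focal length,
// renders roughly 750 px tall, and shrinks as the device moves farther away.
let height = onScreenHeight(realWorldHeightMeters: 1.0,
                            distanceFromCameraMeters: 2.0,
                            focalLengthPixels: 1500)
```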
In some embodiments, the second input corresponding to the request to display the virtual object in the augmented reality environment includes (932) an input to (select and) drag a second representation of the virtual object (e.g., drag to a distance exceeding a distance threshold, drag to a position beyond a defined boundary, and/or drag to a position within a threshold distance of an edge (e.g., a bottom edge, a top edge, and/or a side edge) of the display or the second user interface area). In response to detecting the second input corresponding to the request to display the virtual object in the augmented reality environment, a third representation of the virtual object is displayed with a representation of a field of view of the camera, which provides additional control options without cluttering a second user interface having additional displayed controls (e.g., controls for displaying the augmented reality environment from the second user interface). Providing additional control options without cluttering the second user interface with the additionally displayed controls enhances operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when the second representation of the virtual object is displayed in the second user interface region (e.g., staging user interface 6010 as shown in fig. 6Z), the device detects (934) a fourth input that satisfies respective criteria for redisplaying the first user interface region (e.g., a tap, hard press, or touch-hold and drag input at a location on the touch-sensitive surface that corresponds to the second representation of the virtual object or at another location on the touch-sensitive surface (e.g., a bottom or edge of the second user interface region), and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to the first user interface region), and in response to detecting the fourth input, the device ceases to display the second representation of the virtual object in the second user interface region and the device redisplays the first representation of the virtual object in the first user interface region. For example, as shown in figs. 6Z-6AC, in response to an input through a contact 6042 at a location corresponding to the back control 6016 displayed in the staging user interface 6010, the device stops displaying the second representation of the virtual chair 5020 in the second user interface region (e.g., the staging user interface 6010) and the device redisplays the first representation of the virtual chair 5020 in the first user interface region (e.g., the instant messaging user interface 5008). In some embodiments, the first representation of the virtual object is displayed in the first user interface region with the same appearance, position, and/or orientation as those shown prior to the transition to the staging view and/or the augmented reality view. For example, in fig. 6AC, virtual chair 5020 is displayed in instant messaging user interface 5008 with the same orientation as virtual chair 5020 in instant messaging user interface 5008 shown in fig. 6A. In some embodiments, the device continuously displays the virtual object on the screen when transitioning back to displaying the virtual object in the first user interface region. For example, in figs. 6Z-6AC, the virtual chair 5020 is continuously displayed during the transition from displaying the staging user interface 6010 to displaying the instant messaging user interface 5008. Determining whether to redisplay the first representation of the virtual object in the first user interface based on whether a fourth input, detected while the second representation of the virtual object is displayed in the second user interface, satisfies the criteria for redisplaying the first user interface enables a plurality of different types of operations to be performed in response to the fourth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when the third representation of the virtual object is displayed with the representation of the field of view 6036 of the one or more cameras (e.g., as shown in fig. 6U), the device detects (936) a fifth input that satisfies respective criteria for redisplaying the second user interface region (e.g., a tap, hard press, or touch and drag input at a location on the touch-sensitive surface corresponding to the third representation of the virtual object or at another location on the touch-sensitive surface, and/or an input at a location on the touch-sensitive surface corresponding to a control for returning to displaying the second user interface region), and in response to detecting the fifth input, the device stops displaying the third representation of the virtual object and the representation of the field of view of the one or more cameras and redisplays the second representation of the virtual object in the second user interface region. For example, as shown in figs. 6V-6Y, in response to an input by a contact 6040 at a position corresponding to the toggle control 6018 displayed in the third user interface that includes the field of view 6036 of the camera, the device stops displaying the field of view 6036 of the camera and redisplays the staging user interface 6010. In some embodiments, the second representation of the virtual object is displayed in the second user interface region with the same orientation as that shown for the virtual object in the augmented reality view. In some embodiments, the device continuously displays the virtual object on the screen when transitioning back to displaying the virtual object in the second user interface region. For example, in figs. 6V-6Y, the virtual chair 5020 is continuously displayed during the transition from displaying the camera's field of view 6036 to displaying the staging user interface 6010. Determining whether to redisplay the second representation of the virtual object in the second user interface based on whether a fifth input, detected while the third representation of the virtual object is displayed with the field of view of the camera, satisfies the criteria for redisplaying the second user interface enables a plurality of different types of operations to be performed in response to the fifth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the third representation of the virtual object with the representation 6036 of the field of view of the one or more cameras, the device detects (938) a sixth input that satisfies respective criteria for redisplaying the first user interface area (e.g., the instant messaging user interface 5008), and in response to detecting the sixth input, the device ceases to display the third representation of the virtual object (e.g., the virtual chair 5020) and the representation 6036 of the field of view of the one or more cameras (e.g., as shown in fig. 6U), and the device redisplays the first representation of the virtual object in the first user interface area (e.g., as shown in fig. 6 AC). In some embodiments, the sixth input is, for example, a tap, a hard press, or a touch and drag input at a location on the touch-sensitive surface that corresponds to the third representation of the virtual object or at another location on the touch-sensitive surface, and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to displaying the first user interface region. In some embodiments, a first representation of a virtual object is displayed in a first user interface area having the same appearance and location as those shown prior to transitioning to the staging view and/or the augmented reality view. In some embodiments, the device continuously displays the virtual object on the screen when transitioning back to displaying the virtual object in the first user interface area. Determining whether to redisplay the first representation of the virtual object in the first user interface based on whether a sixth input detected while displaying the third representation of the virtual object with the field of view of the camera satisfies criteria for redisplaying the first user interface enables a plurality of different types of operations to be performed in response to the sixth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the input by the first contact satisfies the first criterion, the device continuously displays (940) the virtual object while transitioning from displaying the first user interface region (e.g., instant messaging user interface 5008) to displaying the second user interface region (e.g., landing user interface 6010), which includes displaying an animation (e.g., movement, rotation about one or more axes, and/or zooming) that transitions the first representation of the virtual object in the first user interface region to the second representation of the virtual object in the second user interface region. For example, in fig. 6E-6I, during the transition from displaying the instant messaging user interface 5008 to displaying the landing user interface 6010, the virtual chair 5020 is continuously displayed and animated (e.g., the orientation of the virtual chair 5020 changes). In some embodiments, the virtual object has an orientation, position, and/or distance defined relative to a plane in the field of view of the camera (e.g., defined based on the shape and orientation of the first representation of the virtual object as shown in the first user interface region), and when transitioning to the second user interface region, the first representation of the virtual object is moved, resized, and/or reoriented to a second representation of the virtual object at a new location on the display (e.g., the center of the virtual stage plane in the second user interface region), and during or at the end of the movement, the virtual object is reoriented such that the virtual object is at a predetermined angle relative to a predefined virtual stage plane defined independent of the physical environment surrounding the device. Displaying the animation when the first representation of the virtual object in the first user interface transitions to the second representation of the virtual object in the second user interface provides feedback to the user indicating that the first input satisfies the first criterion. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the second input by the second contact, and in accordance with a determination that the second input by the second contact corresponds to a request to display the virtual object in the augmented reality environment, the device continuously displays (942) the virtual object while transitioning from displaying the second user interface region (e.g., the staging user interface 6010) to displaying a third user interface region that includes a field of view 6036 of the one or more cameras, which includes displaying a transition of the second representation of the virtual object in the second user interface region to an animation (e.g., a movement, a rotation about one or more axes, and/or a zoom) of the third representation of the virtual object in the third user interface region that includes the field of view of the one or more cameras. For example, in fig. 6Q-6U, the virtual chair 5020 is continuously displayed and animated (e.g., the position and size of the virtual chair 5020 changes) during the transition from displaying the staging user interface 6010 to displaying the camera's field of view 6036. In some embodiments, the virtual object is reoriented such that the virtual object is at a predefined orientation, position, and/or distance relative to a field of view plane (e.g., a physical surface that may support a three-dimensional representation of the user interface object, such as a vertical wall or a horizontal floor surface) detected in the field of view of the one or more cameras. Displaying the animation when the second representation of the virtual object in the second user interface transitions to the third representation of the virtual object in the third user interface provides feedback to the user indicating that the second input corresponds to a request to display the virtual object in the augmented reality environment. Providing improved visual feedback to the user enhances the operability of the device (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order of operations that have been described in fig. 9A through 9D is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000) also apply in a similar manner to method 900 described above with respect to fig. 9A-9D. For example, the contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, tactile outputs, movements, and/or animations described above with reference to method 900 optionally have one or more of the features of the contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For the sake of brevity, these details are not repeated here.
Fig. 10A-10D are flow diagrams illustrating a method 1000 of displaying an item with a visual indication indicating that the item corresponds to a virtual three-dimensional object, according to some embodiments. Method 1000 is performed at an electronic device (device 300 in fig. 3, or portable multifunction device 100 in fig. 1A) having a display and a touch-sensitive surface (e.g., a touch screen display that acts as both a display and a touch-sensitive surface). In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 1000 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 1000 involves displaying items in a first user interface and a second user interface. Each item displayed has a visual indication indicating that the item corresponds to a virtual three-dimensional object or does not have the visual indication, depending on whether the item corresponds to a respective virtual three-dimensional object. Providing the user with an indication of whether the item is a virtual three-dimensional object increases the efficiency with which the user can perform an operation on the first item (e.g., by helping the user provide appropriate input depending on whether the item is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
The device receives (1002) a request to display a first user interface including a first item (e.g., an icon, a thumbnail, an image, an emoticon, an attachment, a sticker, an application icon, an avatar, etc.). For example, in some embodiments, the request is an input (e.g., as described with reference to fig. 7A) for opening a user interface (e.g., internet browser user interface 5060, as shown in fig. 7B) for displaying a representation of the first item in a predefined environment associated with the first item. The predefined environment is optionally a user interface of an application (e.g., an email application, an instant messaging application, a browser application, a word processing application, an e-reader application, etc.) or a system user interface (e.g., a lock screen, a notification interface, a suggestion interface, a control panel user interface, a home screen user interface, etc.).
In response to the request to display the first user interface, the device displays (1004) the first user interface (e.g., an internet browser user interface 5060 as shown in fig. 7B) with a representation of the first item. In accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, the device displays the representation of the first item with a visual indication (e.g., an image, such as an icon and/or background panel, an outline, and/or text, displayed at a location corresponding to the representation of the first item) indicating that the first item corresponds to the first respective virtual three-dimensional object. In accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, the device displays the representation of the first item without the visual indication. For example, as shown in fig. 7B, in the internet browser user interface 5060, the displayed network object 5068 (which includes a representation of the virtual three-dimensional light object 5084) has a visual indication (virtual object indicator 5080) that the virtual light 5084 is a virtual three-dimensional object, and the displayed network object 5074 does not have a visual object indicator because the network object 5074 does not include an item corresponding to a virtual three-dimensional object.
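A hedged sketch of this conditional indicator: the badge is shown only for items that have an associated three-dimensional model. The types and the glyph are illustrative assumptions, not part of this description:

```swift
import UIKit

// An item's representation gets the virtual-object indicator only when a 3D model exists for it.
struct DisplayedItem {
    var thumbnail: UIImage
    var virtualObjectURL: URL?    // non-nil when the item corresponds to a virtual 3D object
}

func configureCell(_ imageView: UIImageView, badgeView: UIImageView, for item: DisplayedItem) {
    imageView.image = item.thumbnail
    // Show the indicator (e.g., a small cube glyph) only for 3D-capable items.
    badgeView.isHidden = (item.virtualObjectURL == nil)
    badgeView.image = UIImage(systemName: "cube")   // illustrative glyph choice
}
```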
After displaying the representation of the first item, the device receives (1006) a request (e.g., an input as described with reference to fig. 7H-7L) to display a second user interface (e.g., instant message user interface 5008 as shown in fig. 7M) that includes a second item (e.g., an icon, a thumbnail, an image, an emoticon, an attachment, a sticker, an application icon, an avatar, etc.). The second item is different from the first item and the second user interface is different from the first user interface. For example, in some embodiments, the request is another input for opening a user interface for displaying a representation of the second item in a predefined environment associated with the second item. The predefined environment is optionally a user interface of an application other than the application used to show the first item (e.g., an email application, an instant messaging application, a browser application, a word processing application, an e-reader application, etc.) or a system user interface other than the system user interface used to show the first item (e.g., a lock screen, a notification interface, a suggestion interface, a control panel user interface, a home screen user interface, etc.).
In response to the request to display the second user interface, the device displays (1008) the second user interface (e.g., instant messaging user interface 5008 as shown in fig. 7M) having a representation of the second item. In accordance with a determination that the second item corresponds to the respective virtual three-dimensional object, the device displays a representation of the second item with a visual indication indicating that the second item corresponds to the second corresponding virtual three-dimensional object (e.g., the same visual indication as indicates that the first item corresponds to the virtual three-dimensional object). In accordance with a determination that the second item does not correspond to the respective virtual three-dimensional object, the device displays a representation of the second item without the visual indication. For example, as shown in fig. 7M, in the instant messaging user interface 5008, the displayed virtual three-dimensional chair object 5020 has a visual indication (virtual object indicator 5022) that the virtual chair 5020 is a virtual three-dimensional object, and the displayed emoticon 7020 does not have a visual object indicator because the emoticon 7020 does not include an item corresponding to the virtual three-dimensional object.
In some embodiments, displaying the first item (e.g., virtual light 5084) with the visual indication (e.g., virtual object indicator 5080) indicating that the first item corresponds to the first respective virtual three-dimensional object comprises (1010): in response to detecting device movement (e.g., as detected by an orientation sensor (e.g., one or more accelerometers 168 of device 100)) that results in a change from a first device orientation to a second device orientation, displaying movement of the first item (e.g., tilting of the first item and/or movement of the first item relative to the first user interface) corresponding to the change from the first device orientation to the second device orientation. For example, the first device orientation is the orientation of device 100 shown in fig. 7F1, and the second device orientation is the orientation of device 100 shown in fig. 7G1. In response to the movement shown in figs. 7F1-7G1, the first item (e.g., virtual light 5084) tilts (e.g., as shown in figs. 7F2-7G2). In some embodiments, if the second item corresponds to a virtual three-dimensional object, the second item also responds to the detected device movement in the manner described above (e.g., to indicate that the second item also corresponds to a virtual three-dimensional object).
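As an illustration, a sketch of driving such a tilt from Core Motion attitude updates; the damping factor and the perspective transform are assumed values chosen for the example:

```swift
import UIKit
import CoreMotion

// Tilt a 3D-capable item's representation as the device orientation changes.
final class TiltingItemView: UIImageView {
    private let motion = CMMotionManager()

    func beginTilting() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let attitude = data?.attitude else { return }
            var transform = CATransform3DIdentity
            transform.m34 = -1.0 / 500.0                       // simple perspective (assumed)
            transform = CATransform3DRotate(transform, CGFloat(attitude.roll) * 0.2, 0, 1, 0)
            transform = CATransform3DRotate(transform, CGFloat(attitude.pitch) * 0.2, 1, 0, 0)
            self.layer.transform = transform
        }
    }
}
```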
Displaying movement of the first item corresponding to a change from the first device orientation to the second device orientation provides visual feedback to the user indicating behavior of the virtual three-dimensional object. Providing the user with improved visual feedback enhances the operability of the device (e.g., by allowing the user to view the virtual three-dimensional object in various orientations without providing further input), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the representation of the first item with the visual indication indicating that the first item corresponds to the first respective virtual three-dimensional object comprises (1012): in response to detecting a first input to scroll the first user interface while the representation of the first item is displayed in the first user interface by the first contact (e.g., a swipe input in a first direction on the first user interface, or a touch-hold input on a scroll button on an end of a scroll bar): the device translates a representation of the first item on the display in accordance with the scrolling of the first user interface (e.g., moves an anchor position of the first item a distance based on an amount of scrolling of the first user interface in an opposite direction to the scrolling (e.g., as the first user interface is dragged upwards by a contact moving on the touch-sensitive surface, the representation of the first item moves upwards on the display with the first user interface)), and the device rotates the representation of the first item relative to a plane defined by the first user interface (or display) in accordance with the direction of the scrolling of the first user interface. For example, as shown in fig. 7C-7D, in response to detecting an input via the contact 7002 to scroll the internet browser user interface 5060 while the representation of the virtual light 5084 is displayed in the internet browser user interface 5060, the virtual light 5084 is translated in accordance with the scrolling of the internet browser user interface 5060 and the virtual light 5084 is rotated relative to the display 112 in accordance with the direction of the path of movement of the contact 7002. In some embodiments, in accordance with a determination that the first user interface is dragged upwards, the representation of the first item moves upwards with the first user interface, and the viewing perspective of the first item as shown on the first user interface changes as if the user were viewing the first item from a different perspective (e.g., a lower perspective). In some embodiments, in accordance with a determination that the second user interface is dragged upwards, the representation of the second item moves upwards with the second user interface, and the viewing perspective of the second item as shown on the second user interface changes as if the user were viewing the second item from a different perspective (e.g., a lower perspective).
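A sketch of the scroll-driven behavior, assuming a UIScrollView delegate; the proportionality constant relating scroll offset to rotation angle is an assumption:

```swift
import UIKit

// Translate with the scrolled content (automatic for a subview of the scroll view)
// and rotate the item's representation about the display plane as the offset changes.
final class ScrollTiltController: NSObject, UIScrollViewDelegate {
    weak var itemView: UIView?

    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        guard let itemView = itemView else { return }
        let angle = -scrollView.contentOffset.y / 1000.0       // radians per point (assumed)
        var transform = CATransform3DIdentity
        transform.m34 = -1.0 / 500.0                           // simple perspective (assumed)
        itemView.layer.transform = CATransform3DRotate(transform, angle, 1, 0, 0)
    }
}
```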
Displaying movement of the item corresponding to a change from the first device orientation to the second device orientation provides visual feedback to the user indicating the change in device orientation. Providing the user with improved visual feedback enhances the operability of the device (e.g., by allowing the user to view the virtual three-dimensional object in various orientations without providing further input), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when a representation of a first item (e.g., light object 5084) having a visual indication (e.g., virtual object indicator 5080) is displayed in a first user interface (e.g., internet browser user interface 5060, as shown in fig. 7B), the device displays (1014) a representation of a third item, wherein the displayed representation of the third item does not have the visual indication so as to indicate that the third item does not correspond to a virtual three-dimensional object (e.g., the third item does not correspond to any three-dimensional object that may be rendered in an augmented reality environment). For example, as shown in FIG. 7B, in an Internet browser user interface 5060, network objects 5074, 5070, and 5076 are displayed without visual object indicators because network objects 5074, 5070, and 5076 do not correspond to virtual three-dimensional objects.
Displaying the first item with the visual indication indicating that the first item is a virtual three-dimensional object and the third item without the visual indication in the first user interface increases the efficiency with which a user can perform an operation using the first user interface (e.g., by helping the user provide appropriate input depending on whether the item with which the user interacts is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when a representation of a second item (e.g., virtual chair 5020) with a visual indication (e.g., virtual object indicator 5022) is displayed in a second user interface (e.g., instant messaging user interface 5008, as shown in fig. 7M), the device displays (1016) a representation of a fourth item (e.g., emoticon 7020), wherein the displayed representation of the fourth item does not have the visual indication so as to indicate that the fourth item does not correspond to a corresponding virtual three-dimensional object.
Displaying the second item with the visual indication indicating that the second item is a virtual three-dimensional object and the fourth item without the visual indication in the second user interface increases the efficiency with which the user can perform operations using the second user interface (e.g., by helping the user provide appropriate input depending on whether the item with which the user is interacting is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (1018), the first user interface (e.g., internet browser user interface 5060, shown in fig. 7B) corresponds to a first application (e.g., internet browser application), the second user interface (e.g., instant messaging user interface 5008, shown in fig. 7M) corresponds to a second application (e.g., instant messaging application) different from the first application, and the displayed representation of the first item (e.g., light object 5084) with the visual indication (e.g., virtual object indicator 5080) and the displayed representation of the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) share a set of predefined visual and/or behavioral characteristics (e.g., use of the same indicator icon, or have the same texture or rendering style and/or behavior when invoked by a predefined type of input). For example, the icon for virtual object indicator 5080 and the icon for virtual object indicator 5022 comprise the same symbols.
Displaying a first item with a visual indication in a first user interface of a first application and a second item with a visual indication in a second user interface of a second application, such that the visual indication of the first item and the visual indication of the second item share a predefined set of visual and/or behavioral features, increases the efficiency with which a user can perform operations using the second user interface (e.g., by helping the user provide appropriate input depending on whether the item with which the user is interacting is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first user interface is (1020) an internet browser application user interface (e.g., internet browser user interface 5060, as shown in fig. 7B), and the first item is an element of a web page (e.g., the first item is represented in the web page as an embedded image, a hyperlink, an applet, an emoticon, an embedded media object, etc.). For example, the first item is a virtual light object 5084 of the network object 5068.
Displaying web page elements with visual indications indicating that the web page elements are virtual three-dimensional objects increases the efficiency with which a user can perform operations using an internet browser application (e.g., by helping the user provide appropriate input depending on whether the web page elements with which the user interacts are virtual three-dimensional objects), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first user interface is (1022) an email application user interface (e.g., email user interface 7052, as shown in fig. 7P), and the first item is an attachment to an email (e.g., attachment 7060).
Displaying email attachments with visual indications indicating that the email attachment is a virtual three-dimensional object increases the efficiency with which a user can perform operations using an email application user interface (e.g., by helping the user provide appropriate input depending on whether the email attachment with which the user is interacting is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to more quickly and efficiently use the device.
In some embodiments, the first user interface is (1024) an instant messaging application user interface (e.g., instant messaging user interface 5008, shown in fig. 7M), and the first item is an attachment or element in a message (e.g., virtual chair 5020) (e.g., the first item is an image, a hyperlink, a mini-program, an emoticon, a media object, etc.).
Displaying a message attachment or element with a visual indication that the message attachment or element is a virtual three-dimensional object increases the efficiency with which a user can perform operations using the instant messaging user interface (e.g., by helping the user provide appropriate input depending on whether the message attachment or element with which the user is interacting is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to more quickly and efficiently use the device.
In some embodiments, the first user interface is (1026) a file management application user interface (e.g., file management user interface 7036, shown in fig. 7O), and the first item is a file preview object (e.g., file preview object 7045 in file information area 7046).
Displaying the file preview object with a visual indication indicating that the file preview object is a virtual three-dimensional object increases the efficiency with which a user can perform operations using the file management application user interface (e.g., by helping the user provide appropriate input depending on whether the file preview object with which the user is interacting is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to more quickly and efficiently use the device.
In some embodiments, the first user interface is (1028) a map application user interface (e.g., map application user interface 7024), and the first item is a representation of a point of interest (e.g., point of interest object 7028) in the map (e.g., a three-dimensional representation of a feature corresponding to a location on the map (e.g., a three-dimensional representation including terrain and/or structures corresponding to a location on the map), or a control that, when actuated, causes the three-dimensional representation of the map to be displayed).
Displaying representations of points of interest in a map with visual indications indicative of the points of interest as virtual three-dimensional objects increases the efficiency with which a user can perform operations using the map application user interface (e.g., by helping the user provide appropriate input depending on whether the representation of the point of interest with which the user interacts is a virtual three-dimensional object), thereby enhancing operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to more quickly and efficiently use the device.
In some embodiments, the visual indication that the first item corresponds to the respective virtual three-dimensional object includes (1030) an animation of the first item that occurs without requiring input involving a representation of the respective three-dimensional object (e.g., a visual effect (e.g., flashing, blinking, etc.) that applies to continuous movement or change of the first item over time).
Displaying the animation of the first item that occurs without input involving the representation of the respective three-dimensional object enhances the operability of the device (e.g., by reducing the number of inputs required by the user to view three-dimensional aspects of the first item), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the representation of the second item (e.g., the virtual chair 5020) with the visual indication (e.g., the virtual object indicator 5022) indicating that the second item corresponds to the respective virtual three-dimensional object, the device detects (1032) a second input (e.g., an input as described with reference to fig. 5C-5F) by a second contact at a location on the touch-sensitive surface that corresponds to the representation of the second item, and in response to detecting the second input by the second contact, and in accordance with a determination that the second input by the second contact satisfies the first (e.g., AR trigger) criteria, the device displays a third user interface region on the display that includes replacing the display of at least a portion of the second user interface (e.g., the instant messaging user interface 5008) with a representation of the field of view 5036 of one or more cameras (e.g., described with reference to fig. 5F-5I) and continuously displaying the second virtual three-dimensional object when switching from displaying the second user interface to displaying the third user interface area. (e.g., as described in more detail herein with reference to method 800). In some embodiments, the device displays an animation that continuously displays the representation of the virtual object while switching from displaying the portion of the second user interface having the representation of the field of view of the one or more cameras (e.g., as described in more detail herein with reference to operation 834).
Using the first criteria to determine whether to display the third user interface region enables a plurality of different types of operations to be performed in response to the second input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when a second item (e.g., a virtual chair 5020) is displayed with a visual indication (e.g., a virtual object indicator 5022) indicating that the second item corresponds to a respective virtual three-dimensional object (e.g., as described in more detail herein with reference to method 900), the device detects (1034) a third input (e.g., an input as described with reference to FIGS. 6E-6I) by a third contact at a location on the touch-sensitive surface that corresponds to the representation of the second item, and in response to detecting the third input by the third contact, and in accordance with a determination that the third input by the third contact satisfies the first (e.g., landing trigger) criteria, the device displays the second virtual three-dimensional object in a fourth user interface, the fourth user interface being different from the second user interface (e.g., landing user interface 6010 as described in more detail with reference to method 900). In some embodiments, while the second virtual three-dimensional object is displayed in a fourth user interface (e.g., staging user interface 6010, as shown in fig. 6I), the device detects a fourth input, and in response to detecting the fourth input: in accordance with a determination that the fourth input corresponds to a request to manipulate the second virtual three-dimensional object in the fourth user interface, the device changes display properties of the second virtual three-dimensional object within the fourth user interface based on the fourth input (e.g., as described with reference to fig. 6J-6M and/or as described with reference to fig. 6N-6P), and in accordance with a determination that the fourth input corresponds to a request to display the second virtual object in the augmented reality environment (e.g., a tap input, a press input, or a touch hold or press input and a subsequent drag input at or from a location on the touch-sensitive surface that corresponds to a representation of the virtual object in the second user interface region), the device displays the second virtual three-dimensional object with a representation of a field of view of one or more cameras (e.g., as described with reference to fig. 6Q-6U).
When the second three-dimensional object is displayed in a fourth user interface (e.g., the staging user interface 6010), in response to a fourth input, the device changes display properties of the second three-dimensional object based on the fourth input or displays the second three-dimensional object with a representation of a field of view of one or more cameras of the device. Enabling a plurality of different types of operations to be performed in response to an input (e.g., by changing display properties of the second three-dimensional object or displaying the second three-dimensional object with a representation of a field of view of one or more cameras of the device) increases the efficiency with which a user can perform these operations, thereby enhancing operability of the device, which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
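The following is a minimal, illustrative sketch (in Swift, with hypothetical type and case names) of the kind of criteria-based dispatch described above, in which the same contact on a representation of a virtual object can lead to the camera view, the staging view, or an in-place manipulation depending on which criteria the input satisfies; it is a simplified sketch of the idea, not the claimed implementation:

enum ObjectInput {
    case touchHoldAndDrag          // e.g., an input that satisfies the AR-trigger criteria
    case tap                       // e.g., an input that satisfies the staging-trigger criteria
    case pinch(scaleDelta: Double) // e.g., a request to manipulate the displayed object
}

enum Outcome {
    case showCameraView                              // user interface region that includes the camera field of view
    case showStagingView                             // the staging user interface
    case changeDisplayProperties(scaleDelta: Double) // manipulate the object in place
}

func handle(_ input: ObjectInput) -> Outcome {
    switch input {
    case .touchHoldAndDrag:       return .showCameraView
    case .tap:                    return .showStagingView
    case .pinch(let scaleDelta):  return .changeDisplayProperties(scaleDelta: scaleDelta)
    }
}

print(handle(.tap))   // showStagingView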
It should be understood that the particular order in which the operations in fig. 10A-10D are described is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000) also apply in a similar manner to method 1000 described above with respect to fig. 10A-10D. For example, the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described above with reference to method 1000 optionally have one or more of the features of the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For the sake of brevity, these details are not repeated here.
Figs. 11A-11V illustrate example user interfaces for displaying virtual objects having different visual properties depending on whether object placement criteria are satisfied. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of or a point associated with the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting a contact on the touch-sensitive surface 451 while displaying the user interface on the display 450 and the focus selector shown in the figures.
Fig. 11A to 11E illustrate an input that causes a virtual object to be displayed in a staging view. For example, the input is detected while a two-dimensional (e.g., thumbnail) representation of a three-dimensional object is displayed in a user interface (e.g., email user interface 7052, file management user interface 7036, map user interface 7022, instant messaging user interface 5008, internet browser user interface 5060, or a third party application user interface).
In FIG. 11A, the Internet browser user interface 5060 includes a two-dimensional representation of a three-dimensional virtual object 11002 (chair). An input (e.g., a tap input) by the contact 11004 is detected at a position corresponding to the virtual object 11002. In response to the tap input, the display of the staging user interface 6010 replaces the display of the Internet browser user interface 5060.
Fig. 11B-11E illustrate the transition that occurs when the display of the staging user interface 6010 replaces the display of the internet browser user interface 5060. In some embodiments, during the transition, the virtual object 11002 gradually fades into view and/or the controls of the staging user interface 6010 (e.g., the back control 6016, the toggle control 6018, and/or the share control 6020) gradually fade into view. For example, the controls of the staging user interface 6010 fade into view after the virtual object 11002 fades into view (e.g., to delay the display of the controls for the period of time needed to render a three-dimensional representation of the virtual object 11002 on the display). In some embodiments, the "fade-in" of the virtual object 11002 includes displaying a low-resolution, two-dimensional, and/or holographic version of the virtual object 11002, followed by displaying the final three-dimensional representation of the virtual object 11002. Fig. 11B to 11D show the gradual fade-in of the virtual object 11002. In fig. 11D, a shadow 11006 of the virtual object 11002 is displayed. Fig. 11D to 11E show the gradual fade-in of the controls 6016, 6018, and 6020.
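A minimal sketch of the staged fade-in described above, assuming a normalized transition progress value; the names and the even split of the transition into two halves are illustrative assumptions, not details taken from the embodiments:

struct TransitionState {
    var objectOpacity: Double = 0.0
    var controlsOpacity: Double = 0.0
}

// progress runs from 0 to 1 over the whole transition; the controls start fading in
// only during the second half, after the object itself is fully visible.
func transitionState(at progress: Double) -> TransitionState {
    let p = min(max(progress, 0), 1)
    let objectOpacity = min(p / 0.5, 1)              // fades in over the first half
    let controlsOpacity = max((p - 0.5) / 0.5, 0)    // fades in over the second half
    return TransitionState(objectOpacity: objectOpacity, controlsOpacity: controlsOpacity)
}

print(transitionState(at: 0.25))   // object about half visible, controls hidden
print(transitionState(at: 0.75))   // object fully visible, controls about half visible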
Figs. 11F-11G illustrate an input that causes a three-dimensional representation of the virtual object 11002 to be displayed in a user interface that includes a field of view 6036 of one or more cameras of the device 100. In fig. 11F, an input by a contact 11008 is detected at a position corresponding to the switching control 6018. In response to the input, the display of the user interface including the field of view 6036 of the camera replaces the display of the staging user interface 6010, as shown in fig. 11G.
As shown in fig. 11G-11H, when the field of view 6036 of the camera is initially displayed, a semi-transparent representation of the virtual object may be displayed (e.g., when no plane corresponding to the virtual object is detected in the field of view 6036 of the camera).
Figs. 11G-11H illustrate a semi-transparent representation of the virtual object 11002 displayed in a user interface that includes the field of view 6036 of the camera. The semi-transparent representation of the virtual object 11002 is displayed at a fixed position relative to the display 112. For example, from fig. 11G to 11H, as the device 100 moves relative to the physical environment 5002 (e.g., as indicated by the changing position of the table 5004 in the camera's field of view 6036), the virtual object 11002 remains at a fixed position relative to the display 112.
In some embodiments, in accordance with a determination that a plane corresponding to the virtual object has been detected in the field of view 6036 of the camera, the virtual object is placed on the detected plane.
In fig. 11I, a plane corresponding to the virtual object 11002 has been detected in the field of view 6036 of the camera, and the virtual object 11002 is placed on the detected plane. The device has generated a tactile output, as shown at 11010 (e.g., to indicate that at least one plane (e.g., floor surface 5038) has been detected in the field of view 6036 of the camera). When the virtual object 11002 is placed at a position relative to a plane detected in the field of view 6036 of the camera, the virtual object 11002 remains at a fixed position relative to the physical environment 5002 captured by the one or more cameras. From fig. 11I to 11J, as the device 100 moves relative to the physical environment 5002 (e.g., as indicated by the changed position of the table 5004 shown in the field of view 6036 of the camera), the virtual object 11002 remains at a fixed position relative to the physical environment 5002.
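The placement behavior described above (a screen-fixed, semi-transparent preview before a plane is detected, followed by an environment-fixed placement with a tactile output once a plane is detected) can be sketched roughly as follows; the types and values are hypothetical and greatly simplified relative to any real AR implementation:

struct WorldPoint { var x: Double; var y: Double; var z: Double }
struct ScreenPoint { var x: Double; var y: Double }

enum Placement {
    case screenFixed(ScreenPoint)   // semi-transparent preview pinned to the display
    case worldFixed(WorldPoint)     // object pinned to the physical environment
}

struct VirtualObjectState {
    var placement: Placement
    var opacity: Double

    // Called when a plane corresponding to the object is detected in the camera field of view.
    mutating func place(onPlaneAt point: WorldPoint, haptic: () -> Void) {
        placement = .worldFixed(point)
        opacity = 1.0
        haptic()   // e.g., the tactile output shown at 11010
    }
}

var chairState = VirtualObjectState(placement: .screenFixed(ScreenPoint(x: 0.5, y: 0.6)),
                                    opacity: 0.5)
chairState.place(onPlaneAt: WorldPoint(x: 0, y: 0, z: -1)) {
    print("tactile output: plane detected")
}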
In some embodiments, while the field of view 6036 of the camera is displayed, the display of controls (e.g., the back control 6016, the toggle control 6018, and/or the share control 6020) is stopped (e.g., according to a determination that a period of time has elapsed in which no input was received). In fig. 11J-11L, the controls 6016, 6018, and 6020 fade out gradually (e.g., as shown in fig. 11K), which increases the portion of the display 112 that displays the field of view 6036 of the camera (e.g., as shown in fig. 11L).
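A rough sketch of the control auto-hide behavior described above, assuming a hypothetical timeout value (the period of time is not specified in the embodiments):

import Foundation

struct ControlVisibility {
    var controlsVisible = true
    var lastInputTime = Date()
    let quietPeriod: TimeInterval = 3.0   // assumed timeout; not specified in the embodiments

    mutating func inputReceived(at time: Date = Date()) {
        lastInputTime = time
        controlsVisible = true            // e.g., the controls are redisplayed in fig. 11N
    }

    mutating func tick(at time: Date = Date()) {
        if time.timeIntervalSince(lastInputTime) >= quietPeriod {
            controlsVisible = false       // e.g., the controls fade out in fig. 11J-11L
        }
    }
}

var staging = ControlVisibility()
staging.tick(at: Date().addingTimeInterval(5))   // quiet period elapsed: controls hidden
staging.inputReceived()                          // new input detected: controls shown again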
Fig. 11M to 11S illustrate inputs for manipulating the virtual object 11002 when displayed in a user interface including a field of view 6036 of a camera.
In fig. 11M to 11N, an input (e.g., a spread gesture) by the contacts 11012 and 11014 for changing the simulated physical size of the virtual object 11002 is detected. In response to detecting the input, controls 6016, 6018, and 6020 are redisplayed. When the contact 11012 moves along the path indicated by the arrow 11016 and the contact 11014 moves along the path indicated by the arrow 11018, the size of the virtual object 11002 increases.
In fig. 11N to 11P, an input (e.g., a pinch gesture) by the contacts 11012 and 11014 for changing the simulated physical size of the virtual object 11002 is detected. When the contact 11012 moves along the path indicated by the arrow 11020 and the contact 11014 moves along the path indicated by the arrow 11022, the size of the virtual object 11002 decreases (as shown in fig. 11N to 11O and 11O to 11P). As shown in fig. 11O, when the size of the virtual object 11002 is adjusted to its original size relative to the physical environment 5002 (e.g., the size of the virtual object 11002 when it was initially placed on a plane detected in the physical environment 5002, as shown in fig. 11I), a haptic output occurs (as shown at 11024) (e.g., to provide feedback indicating that the virtual object 11002 has returned to its original size). In FIG. 11Q, contacts 11012 and 11014 have been lifted off of touch screen display 112.
In fig. 11R, an input (e.g., a double-tap input) to return the virtual object 11002 to its original size relative to the physical environment 5002 is detected. The input is detected at a location corresponding to the virtual object 11002, as indicated by the contact 11026. In response to the input, the virtual object 11002 is adjusted from the reduced size shown in fig. 11R to the original size of the virtual object 11002, as indicated in fig. 11S. As shown in fig. 11S, upon resizing the virtual object 11002 to its original size relative to the physical environment 5002, a haptic output occurs (as shown at 11028) (e.g., to provide feedback indicating that the virtual object 11002 has returned to its original size).
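The resize feedback described above (a haptic output when the object returns to or crosses its original size, and a reset gesture that restores the original size) can be sketched as follows; the scale bookkeeping and names are illustrative assumptions rather than details of the embodiments:

struct ResizableObject {
    var scale: Double = 1.0   // 1.0 == the original size relative to the physical environment

    mutating func applyPinch(gestureScale: Double, haptic: () -> Void) {
        let oldScale = scale
        scale *= gestureScale
        let crossedOriginal = (oldScale - 1.0) * (scale - 1.0) < 0
        let landedOnOriginal = scale == 1.0 && oldScale != 1.0
        if crossedOriginal || landedOnOriginal {
            haptic()   // e.g., the haptic output shown at 11024
        }
    }

    mutating func resetToOriginalSize(haptic: () -> Void) {
        if scale != 1.0 {
            scale = 1.0
            haptic()   // e.g., the haptic output shown at 11028
        }
    }
}

var chair = ResizableObject()
chair.applyPinch(gestureScale: 1.5) { print("haptic") }   // spread to 150%: no feedback
chair.applyPinch(gestureScale: 0.6) { print("haptic") }   // pinch back through 100%: feedback fires
chair.resetToOriginalSize { print("haptic") }             // double tap: back to the original size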
In fig. 11T, an input by the contact 11030 is detected at a position corresponding to the switching control 6018. In response to the input, the staging user interface 6010 replaces the display of the user interface that includes the field of view 6036 of the camera, as shown in fig. 11U.
In fig. 11U, an input by a contact 11032 is detected at a position corresponding to the back control 6016. In response to this input, the Internet browser user interface 5060 replaces the display of the staging user interface 6010, as shown in FIG. 11V.
Figs. 12A-12L illustrate example user interfaces for displaying calibration user interface objects that are dynamically animated according to movement of one or more cameras of a device. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of or a point associated with the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting a contact on the touch-sensitive surface 451 while displaying the user interface on the display 450 and the focus selector shown in the figures.
According to some embodiments, a calibration user interface object is displayed when a request to display a virtual object in a user interface including a field of view of one or more cameras is received but additional data is needed for calibration of the device.
Fig. 12A illustrates an input requesting display of the virtual object 11002 in a user interface that includes a field of view 6036 of one or more cameras. An input by contact 12002 is detected at a position corresponding to the toggle control 6018. In response to the input, the display of the user interface including the field of view 6036 of the camera replaces the display of the staging user interface 6010, as shown in fig. 12B. A semi-transparent representation of the virtual object 11002 is displayed in the user interface that includes the field of view 6036 of the camera. When calibration is needed (e.g., because no plane corresponding to the virtual object 11002 is detected in the camera's field of view 6036), the camera's field of view 6036 is blurred (e.g., to emphasize the prompt and/or the behavior of the calibration object, as described below).
Figs. 12B-12D illustrate an animated image and text prompting the user to move the device (e.g., displayed in accordance with a determination that calibration is required). The animated image includes a representation 12004 of the device 100, arrows 12006 and 12008 indicating that the device 100 needs to be moved left and right, and a representation 12010 of a plane (e.g., for indicating that the device 100 must be moved relative to the plane in order to detect the plane corresponding to the virtual object 11002). A text prompt 12012 provides information about the movement of the device 100 required for calibration. In fig. 12B-12C and 12C-12D, the representation 12004 of the device 100 and the arrow 12006 are adjusted relative to the representation 12010 of the plane to provide an indication of the movement of the device 100 required for calibration. From fig. 12C to 12D, the device 100 moves relative to the physical environment 5002 (e.g., as indicated by the changing position of the table 5004 in the field of view 6036 of the camera). As a result of the detected movement of the device 100, the calibration user interface object 12014 (an outline of a cube) is displayed, as indicated in fig. 12E-1.
Figs. 12E-1 through 12I-1 illustrate the behavior of the calibration user interface object 12014, which corresponds to the movement of the device 100 relative to the physical environment 5002, as illustrated in FIGS. 12E-2 through 12I-2, respectively. In response to movement (e.g., lateral movement) of the device 100, the user interface object 12014 is animated (e.g., the cube outline rotates) (e.g., to provide feedback to the user regarding the movement that facilitates calibration). In FIG. 12E-1, a calibration user interface object 12014 having a first angle of rotation is shown in a user interface that includes a field of view 6036 of a camera of the device 100. In fig. 12E-2, the device 100 held by the user's hand 5006 is shown at a first location relative to the physical environment 5002. From figure 12E-2 to figure 12F-2, the device 100 is moving laterally (to the right) with respect to the physical environment 5002. As a result of the movement, the field of view 6036 of the camera as displayed by the device 100 is updated and the calibration user interface object 12014 has rotated (relative to its position in FIG. 12E-1), as shown in FIG. 12F-1. From figure 12F-2 to figure 12G-2, device 100 continues to move to the right relative to physical environment 5002. As a result of the movement, the field of view 6036 of the camera as displayed by the device 100 is again updated and the calibration user interface object 12014 is further rotated, as shown in FIG. 12G-1. From figure 12G-2 to figure 12H-2, device 100 is moving upward relative to physical environment 5002. As a result of the movement, the field of view 6036 of the camera as displayed by the device 100 is updated. As shown in fig. 12G-1-12H-1, the calibration user interface object 12014 does not rotate in response to the upward movement of the device shown in fig. 12G-2-12H-2 (e.g., to provide an indication to the user that vertical movement of the device does not affect calibration). From figure 12H-2 to figure 12I-2, device 100 is moved further to the right relative to physical environment 5002. As a result of the movement, the field of view 6036 of the camera as displayed by the device 100 is updated again and the calibration user interface object 12014 is rotated, as shown in FIG. 12I-1.
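A minimal sketch of the calibration behavior described above, under the assumption (for illustration only) of a fixed rotation gain and a fixed amount of required lateral movement: lateral device movement rotates the calibration object and contributes to calibration, while vertical movement does not.

struct CalibrationObject {
    var rotationDegrees: Double = 0
    var accumulatedLateralMovement: Double = 0
    let requiredLateralMovement: Double = 0.5   // assumed amount of side-to-side motion (meters)
    let degreesPerMeter: Double = 180           // assumed rotation gain

    var isCalibrated: Bool { accumulatedLateralMovement >= requiredLateralMovement }

    // deltaX is side-to-side device movement; deltaY is vertical movement.
    mutating func deviceMoved(deltaX: Double, deltaY: Double) {
        rotationDegrees += deltaX * degreesPerMeter      // the cube outline rotates
        accumulatedLateralMovement += abs(deltaX)
        _ = deltaY   // vertical movement is intentionally ignored (figs. 12G-1 to 12H-1)
    }
}

var cube = CalibrationObject()
cube.deviceMoved(deltaX: 0.2, deltaY: 0.0)   // lateral movement: the object rotates
cube.deviceMoved(deltaX: 0.0, deltaY: 0.3)   // vertical movement: no rotation
print(cube.isCalibrated)                     // false until enough lateral movement accumulates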
In fig. 12J, the movement of the device 100 (e.g., as shown in fig. 12E-12I) has satisfied the calibration requirements (e.g., and a plane corresponding to the virtual object 11002 has been detected in the field of view 6036 of the camera). The virtual object 11002 is placed on the detected plane and the field of view 6036 of the camera is no longer blurred. The tactile output generator outputs a tactile output (as shown at 12016) indicating that a plane (e.g., the floor surface 5038) has been detected in the field of view 6036 of the camera. The floor surface 5038 is highlighted to provide an indication that a plane has been detected.
When the virtual object 11002 has been placed at a position relative to a plane detected in the field of view 6036 of the camera, the virtual object 11002 remains at a fixed position relative to the physical environment 5002 captured by the one or more cameras. As the device 100 moves relative to the physical environment 5002 (as shown in FIGS. 12K-2 through 12L-2), the virtual object 11002 remains in a fixed position relative to the physical environment 5002 (as shown in FIGS. 12K-1 through 12L-1).
Fig. 13A-13M illustrate example user interfaces for constraining rotation of a virtual object about an axis. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of or a point associated with the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting a contact on the touch-sensitive surface 451 while displaying the user interface on the display 450 and the focus selector shown in the figures.
In fig. 13A, a virtual object 11002 is shown in a staging user interface 6010. The x, y, and z axes are shown relative to a virtual object 11002.
Fig. 13B to 13C show an input that rotates the virtual object 11002 about the y-axis indicated in fig. 13A. In fig. 13B, an input by the contact 13002 is detected at a position corresponding to the virtual object 11002. The input moves a distance d1 along the path indicated by arrow 13004. As the input moves along the path, the virtual object 11002 rotates (e.g., by 35 degrees) about the y-axis to the position indicated in fig. 13C. A shadow 13006 corresponding to the virtual object 11002 is displayed in the staging user interface 6010. From fig. 13B to fig. 13C, the shadow 13006 changes according to the changed position of the virtual object 11002.
After the contact 13002 lifts off the touch screen 112, the virtual object 11002 continues to rotate, as shown in fig. 13C-13D (e.g., in accordance with the "momentum" imparted by the movement of the contact 13002 to provide the impression that the virtual object 11002 appears like a physical object).
Fig. 13E to 13F show an input that rotates the virtual object 11002 about the x-axis indicated in fig. 13A. In fig. 13E, an input by the contact 13008 is detected at a position corresponding to the virtual object 11002. The input moves a distance d1 along the path indicated by arrow 13010. As the input moves along the path, the virtual object 11002 rotates (e.g., by 5 degrees) about the x-axis to the position indicated in fig. 13F. Although the contact 13008 in fig. 13E-13F moves the same distance d1 that the contact 13002 moves in fig. 13B-13C, the angle by which the virtual object 11002 rotates about the x-axis in fig. 13E-13F is smaller than the angle by which the virtual object 11002 rotates about the y-axis in fig. 13B-13C.
Fig. 13F-13G show further input that rotates the virtual object 11002 about the x-axis indicated in fig. 13A. In fig. 13F, the contact 13008 continues its movement and moves a distance d2 (greater than the distance d1) along the path indicated by arrow 13012. As the input moves along the path, the virtual object 11002 rotates (e.g., by 25 degrees) about the x-axis to the position indicated in fig. 13G. As shown in fig. 13E to 13G, movement of the contact 13008 by the distance d1+d2 causes the virtual object 11002 to rotate by 30 degrees about the x-axis, whereas in fig. 13B to 13C, movement of the contact 13002 by the distance d1 causes the virtual object 11002 to rotate by 35 degrees about the y-axis.
After the contact 13008 lifts off the touch screen 112, the virtual object 11002 rotates in a direction opposite to the direction of rotation caused by the movement of the contact 13008, as shown in fig. 13G-13H (e.g., to indicate that the movement of the contact 13008 causes the virtual object 11002 to rotate an amount that exceeds the limit of rotation).
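The axis-dependent rotation described above can be sketched as follows; the per-axis sensitivity values and the rotation limit are assumed numbers, chosen only so that the same drag distance rotates the object more about the y-axis than about the x-axis, and so that rotation past the limit springs back on lift-off:

struct StagingRotation {
    var yawDegrees: Double = 0      // rotation about the y-axis (driven by horizontal drags)
    var pitchDegrees: Double = 0    // rotation about the x-axis (driven by vertical drags)

    let yawDegreesPerPoint: Double = 0.35    // assumed: a 100-point drag gives about 35 degrees
    let pitchDegreesPerPoint: Double = 0.10  // assumed: the same drag gives only about 10 degrees
    let pitchLimit: Double = 60              // assumed rotation limit about the x-axis

    mutating func drag(dx: Double, dy: Double) {
        yawDegrees += dx * yawDegreesPerPoint
        pitchDegrees += dy * pitchDegreesPerPoint   // may temporarily exceed the limit
    }

    mutating func liftOff() {
        // Rotation past the limit is undone on lift-off (the effect shown in fig. 13G-13H).
        pitchDegrees = min(max(pitchDegrees, -pitchLimit), pitchLimit)
    }
}

var rotation = StagingRotation()
rotation.drag(dx: 100, dy: 0)    // about 35 degrees about the y-axis
rotation.drag(dx: 0, dy: 100)    // only about 10 degrees about the x-axis for the same distance
rotation.liftOff()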
In fig. 13G to 13I, the shadow 13006 is not shown (for example, because the virtual object 11002 does not cast a shadow when the object is viewed from below).
In fig. 13I, an input (e.g., a double-tap input) for returning the virtual object 11002 to the perspective at which it was initially displayed (e.g., as indicated in fig. 13A) is detected. The input is detected at a location corresponding to the virtual object 11002, as indicated by the contact 13014. In response to the input, the virtual object 11002 rotates about the y-axis (to reverse the rotation that occurred in fig. 13B to 13D) and rotates about the x-axis (to reverse the rotation that occurred in fig. 13E to 13H). In fig. 13J, the input by the contact 13014 has caused the virtual object 11002 to return to the perspective at which it was originally displayed.
In some embodiments, input to adjust the size of the virtual object 11002 is received while the staging user interface 6010 is displayed. For example, the input to adjust the size of the virtual object 11002 is an expand gesture (e.g., as described with reference to fig. 6N-6O) to increase the size of the virtual object 11002 or a pinch gesture to decrease the size of the virtual object 11002.
In fig. 13J, an input is received to replace the display of the staging user interface 6010 with the display of the user interface that includes the field of view 6036 of the camera. An input by the contact 13016 is detected at a position corresponding to the toggle control 6018. In response to this input, a user interface including the field of view 6036 of the camera replaces the display of the staging user interface 6010, as shown in fig. 13K.
In fig. 13K, a virtual object 11002 is displayed in a user interface that includes a field of view 6036 of a camera. A tactile output occurs (as shown at 13018) indicating that a plane corresponding to the virtual object 11002 has been detected in the field of view 6036 of the camera. The rotation angle of the virtual object 11002 in the user interface including the field of view 6036 of the camera corresponds to the rotation angle of the virtual object 11002 in the staging user interface 6010.
While the user interface that includes the field of view 6036 of the camera is displayed, an input that includes lateral movement causes the virtual object 11002 to move laterally within that user interface, as shown in fig. 13L to 13M. In fig. 13L, a contact 13020 is detected at a position corresponding to the virtual object 11002, and the contact moves along the path indicated by arrow 13022. As the contact moves, the virtual object 11002 moves from a first position (as shown in fig. 13L) to a second position (as shown in fig. 13M) along a path corresponding to the movement of the contact 13020.
In some embodiments, the input provided while displaying the user interface including the field of view 6036 of the camera may cause the virtual object 11002 to move from a first plane (e.g., the floor plane 5038) to a second plane (e.g., the desktop plane 5046), as described with reference to fig. 5 AJ-5 AM.
Fig. 14A-14Z illustrate example user interfaces for increasing a second threshold amount of movement required for a second object manipulation behavior in accordance with a determination that the first object manipulation behavior satisfies the first threshold amount of movement. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 14AA to 14AD, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of or a point associated with the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting a contact on the touch-sensitive surface 451 while displaying the user interface on the display 450 and the focus selector shown in the figures.
In fig. 14A, a virtual object 11002 is displayed in a user interface that includes a field of view 6036 of a camera. As further described with reference to fig. 14B-14Z, the translational movement meter 14002, the zoom movement meter 14004, and the rotational movement meter 14006 are used to indicate the magnitude of movement corresponding to a respective object manipulation behavior (e.g., a translation operation, a zoom operation, and/or a rotation operation). The translational movement meter 14002 indicates the magnitude of lateral (e.g., left or right) movement of a set of contacts on the touch screen display 112. The zoom movement meter 14004 indicates the magnitude of the increasing or decreasing distance between contacts in a set of contacts on the touch screen display 112 (e.g., the magnitude of a pinch or spread gesture). The rotational movement meter 14006 indicates the magnitude of rotational movement of a set of contacts on the touch screen display 112.
Fig. 14B-14E illustrate inputs for rotating the virtual object 11002 in a user interface that includes a field of view 6036 of one or more cameras. The input to rotate the virtual object 11002 includes a gesture in which the first contact 14008 moves rotationally in a clockwise direction along a path indicated by arrow 14010 and the second contact 14012 moves rotationally in a clockwise direction along a path indicated by arrow 14014. In fig. 14B, contacts 14008 and 14012 with the touch screen 112 are detected. In fig. 14C, contact 14008 moves along the path indicated by arrow 14010 and contact 14012 moves along the path indicated by arrow 14014. Since the magnitude of the rotational movement of contact 14008 and contact 14012 has not reached the threshold RT in fig. 14C, the virtual object 11002 has not rotated in response to the input. In fig. 14D, the magnitude of the rotational movement of contact 14008 and contact 14012 has increased above the threshold RT and the virtual object 11002 has rotated in response to the input (relative to the position of the virtual object 11002 shown in fig. 14B). When the magnitude of the rotational movement increases above the threshold RT, the magnitude of the movement required to zoom the virtual object 11002 increases (e.g., the zoom threshold has increased from ST to ST', as indicated by the zoom movement meter 14004), and the magnitude of the movement required to translate the virtual object 11002 increases (e.g., the translation threshold has increased from TT to TT', as indicated by the translational movement meter 14002). In fig. 14E, contact 14008 and contact 14012 continue to move along the rotational paths indicated by arrow 14010 and arrow 14014, respectively, and the virtual object 11002 continues to rotate in response to the input. In FIG. 14F, contacts 14008 and 14012 have been lifted off of the touch screen 112.
Fig. 14G-14I illustrate inputs for scaling (e.g., increasing the size) a virtual object 11002 in a user interface that includes a field of view 6036 of one or more cameras. The input to increase the size of the virtual object 11002 includes a gesture in which the first contact 14016 moves along the path indicated by arrow 14018 and the second contact 14020 moves along the path indicated by arrow 14022 (e.g., such that the distance between the contact 14016 and the contact 14020 increases). In FIG. 14G, contacts 14016 and 14020 with the touch screen 112 are detected. In fig. 14H, contact 14016 moves along a path indicated by arrow 14018 and contact 14020 moves along a path indicated by arrow 14022. Since the magnitude of the movement of the contact 14016 away from the contact 14020 has not reached the threshold ST in fig. 14H, the size of the virtual object 11002 is not adjusted in response to the input. In fig. 14I, the magnitude of the zoom movement of contact 14016 and contact 14020 has increased above the threshold ST and the size of virtual object 11002 has increased (relative to the size of virtual object 11002 shown in fig. 14H) in response to the input. When the magnitude of the zoom movement increases above the threshold ST, the magnitude of the movement required to rotate the virtual object 11002 increases (e.g., the rotation threshold RT has increased from RT to RT ', as indicated by the rotational movement meter 14006), and the magnitude of the movement required to translate the virtual object 11002 increases (e.g., the translation threshold TT has increased from TT to TT', as indicated by the translational movement meter 14002). In FIG. 14J, contacts 14016 and 14020 have been lifted off of touch screen 112.
Fig. 14K-14M illustrate inputs for panning the virtual object 11002 (e.g., moving the virtual object 11002 to the left) in a user interface that includes a field of view 6036 of one or more cameras. The input to move the virtual object 11002 includes a gesture in which the first contact 14024 moves along a path indicated by arrow 14026 and the second contact 14028 moves along a path indicated by arrow 14030 (e.g., such that both the contact 14024 and the contact 14028 move to the left). In FIG. 14K, contacts 14024 and 14028 with the touch screen 112 are detected. In fig. 14L, contact 14024 moves along the path indicated by arrow 14026 and contact 14028 moves along the path indicated by arrow 14030. Since the magnitude of the leftward movement of contacts 14024 and 14028 has not reached the threshold TT in fig. 14L, the virtual object 11002 has not moved in response to the input. In fig. 14M, the magnitude of the leftward movement of contact 14024 and contact 14028 has increased above the threshold TT and the virtual object 11002 has moved in the direction of movement of contacts 14024 and 14028. When the magnitude of the translational movement increases above the threshold TT, the magnitude of the movement required to zoom the virtual object 11002 increases (e.g., the zoom threshold has increased from ST to ST', as indicated by the zoom movement meter 14004), and the magnitude of the movement required to rotate the virtual object 11002 increases (e.g., the rotation threshold has increased from RT to RT', as indicated by the rotational movement meter 14006). In FIG. 14N, contacts 14024 and 14028 have been lifted off of touch screen 112.
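A minimal sketch of how the three movement magnitudes tracked by the meters above could be computed from a pair of contacts, using only elementary geometry (midpoint movement for translation, change in contact separation for zoom, change in the inter-contact angle for rotation); the names are illustrative and this is not necessarily how the embodiments compute these values:

import Foundation

struct Contact { var x: Double; var y: Double }

struct GestureMagnitudes {
    var pan: Double       // points of midpoint movement (translation meter)
    var zoom: Double      // points of change in contact separation (zoom meter)
    var rotation: Double  // degrees of change in the inter-contact angle (rotation meter)
}

func magnitudes(from start: (Contact, Contact), to end: (Contact, Contact)) -> GestureMagnitudes {
    func midpoint(_ p: (Contact, Contact)) -> Contact {
        Contact(x: (p.0.x + p.1.x) / 2, y: (p.0.y + p.1.y) / 2)
    }
    func separation(_ p: (Contact, Contact)) -> Double {
        ((p.0.x - p.1.x) * (p.0.x - p.1.x) + (p.0.y - p.1.y) * (p.0.y - p.1.y)).squareRoot()
    }
    func angle(_ p: (Contact, Contact)) -> Double {
        atan2(p.1.y - p.0.y, p.1.x - p.0.x) * 180 / .pi
    }
    let m0 = midpoint(start), m1 = midpoint(end)
    let pan = ((m1.x - m0.x) * (m1.x - m0.x) + (m1.y - m0.y) * (m1.y - m0.y)).squareRoot()
    let zoom = abs(separation(end) - separation(start))
    var angleDelta = angle(end) - angle(start)
    if angleDelta > 180 { angleDelta -= 360 }
    if angleDelta < -180 { angleDelta += 360 }
    return GestureMagnitudes(pan: pan, zoom: zoom, rotation: abs(angleDelta))
}

let m = magnitudes(from: (Contact(x: 100, y: 200), Contact(x: 200, y: 200)),
                   to: (Contact(x: 100, y: 180), Contact(x: 220, y: 200)))
print(m)   // roughly 14 points of pan, 22 points of zoom, and 9 degrees of rotation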
Fig. 14O-14Z show inputs including gestures for translating the virtual object 11002 (e.g., moving the virtual object 11002 to the right), scaling the virtual object 11002 (e.g., increasing the size of the virtual object 11002), and rotating the virtual object 11002. In FIG. 14O, contacts 14032 and 14036 with the touch screen 112 are detected. In fig. 14O-14P, contact 14032 moves along the path indicated by arrow 14034 and contact 14036 moves along the path indicated by arrow 14038. The magnitude of the rightward movement of the contacts 14032 and 14036 has increased above the threshold TT and the virtual object 11002 has moved in the direction of movement of the contacts 14032 and 14036. Since the movement of contacts 14032 and 14036 satisfies the threshold TT, the magnitude of the movement required to zoom the virtual object 11002 is increased to ST', and the magnitude of the movement required to rotate the virtual object 11002 is increased to RT'. After the threshold TT is met (as indicated by the high water mark 14043 shown on the translational movement meter 14002 in fig. 14Q), any lateral movement of the contacts 14032 and 14036 will cause lateral movement of the virtual object 11002.
In fig. 14Q-14R, contact 14032 moves along the path indicated by arrow 14040 and contact 14036 moves along the path indicated by arrow 14042. In fig. 14R, the magnitude of the movement of the contact 14032 away from the contact 14036 has exceeded the original zoom threshold ST, but has not yet reached the increased zoom threshold ST'. When the increased zoom movement threshold ST' is in effect, zooming does not occur until the magnitude of the movement of the contact 14032 away from the contact 14036 increases above the increased zoom movement threshold ST', so the size of the virtual object 11002 does not change from fig. 14Q to fig. 14R. In fig. 14R-14S, as contact 14032 moves along the path indicated by arrow 14044 and contact 14036 moves along the path indicated by arrow 14046, the distance between contact 14032 and contact 14036 continues to increase. In fig. 14S, the magnitude of the movement of the contact 14032 away from the contact 14036 has exceeded the increased zoom threshold ST' and the size of the virtual object 11002 has increased. After the threshold ST' is met (as indicated by the high water mark 14047 shown by the zoom movement meter 14004 in fig. 14T), any zoom movement of the contacts 14032 and 14036 will cause a zoom of the virtual object 11002.
In fig. 14T-14U, contact 14032 moves along the path indicated by arrow 14048 and contact 14036 moves along the path indicated by arrow 14050. Since the threshold TT has been met (as indicated by the high water mark 14043 shown by the translational motion meter 14002), the virtual object 11002 is free to move in the direction of lateral movement of the contacts 14032 and 14036.
In fig. 14V-14W, contact 14032 moves along the path indicated by arrow 14052 and contact 14036 moves along the path indicated by arrow 14054. The movement of contacts 14032 and 14036 includes a translational movement (leftward movement of contacts 14032 and 14036) and a zooming movement (movement that reduces the distance between contact 14032 and contact 14036 (e.g., a pinch gesture)). Because the translation threshold TT has been met (as indicated by the high water mark 14043 shown by the translational motion meter 14002), the virtual object 11002 is free to move in the direction of lateral movement of the contacts 14032 and 14036, and because the increased zoom threshold ST' has been met (as indicated by the high water mark 14047 shown by the zoom motion meter 14004), the virtual object 11002 is free to zoom in response to movement of the contact 14032 toward the contact 14036. From fig. 14V to fig. 14W, the size of the virtual object 11002 has decreased, and the virtual object 11002 moves to the left in response to movement of the contact 14032 along the path indicated by arrow 14052 and movement of the contact 14036 along the path indicated by arrow 14054.
In fig. 14X-14Z, contact 14032 moves rotationally in a counterclockwise direction along the path indicated by arrow 14056 and contact 14036 moves rotationally in a counterclockwise direction along the path indicated by arrow 14058. In fig. 14Y, the magnitude of the rotational movement of contact 14032 and contact 14036 has exceeded the original rotation threshold RT, but has not yet reached the increased rotation threshold RT'. When the increased rotational movement threshold RT' is in effect, rotation of the virtual object 11002 does not occur until the magnitude of the rotational movement of the contacts 14032 and 14036 increases above the increased rotational movement threshold RT', so from fig. 14X to 14Y, the virtual object 11002 is not rotated. In fig. 14Y-14Z, contacts 14032 and 14036 continue rotational movement in a counterclockwise direction as contact 14032 moves along the path indicated by arrow 14060 and contact 14036 moves along the path indicated by arrow 14062. In fig. 14Z, the magnitude of the rotational movement of contact 14032 and contact 14036 has exceeded the increased rotation threshold RT', and the virtual object 11002 has rotated in response to the input.
Figs. 14AA-14AD are flowcharts illustrating operations for increasing a second threshold movement value required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior satisfies a first threshold movement value. The operations described with reference to figs. 14AA-14AD are performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) having a display generating component (e.g., a display, a projector, a heads-up display, etc.) and a touch-sensitive surface (e.g., a touch-sensitive surface or a touch-screen display that serves as both a display generating component and a touch-sensitive surface). Some of the operations described with reference to figs. 14AA-14AD are optionally combined, and/or the order of some of the operations is optionally changed.
At operation 14066, a first portion of the user input comprising movement of the one or more contacts is detected. At operation 14068, it is determined whether the movement of one or more contacts (e.g., at a location corresponding to the virtual object 11002) increased above an object rotation threshold (e.g., a rotation threshold RT indicated by the rotational movement meter 14006). In accordance with a determination that the movement of the one or more contacts increases above the object rotation threshold (e.g., as described with reference to fig. 14B-14D), flow proceeds to operation 14070. In accordance with a determination that the movement of the one or more contacts has not increased above the object rotation threshold, flow proceeds to operation 14074.
At operation 14070, an object (e.g., virtual object 11002) is rotated based on the first portion of the user input (e.g., as described with reference to fig. 14B-14D). At operation 14072, the object translation threshold is increased (e.g., from TT to TT ', as described with reference to fig. 14D), and the object scaling threshold is increased (e.g., from ST to ST', as described with reference to fig. 14D). From operation 14072, flow proceeds to operation 14086 of fig. 14AB, as indicated at a.
At operation 14074, it is determined whether the movement of the one or more contacts (e.g., at the location corresponding to the virtual object 11002) increased above an object translation threshold (e.g., translation threshold TT indicated by translation movement meter 14002). In accordance with a determination that the movement of the one or more contacts increases above the object translation threshold (e.g., as described with reference to fig. 14K-14M), flow proceeds to operation 14076. In accordance with a determination that the movement of the one or more contacts has not increased above the object translation threshold, flow proceeds to operation 14080.
At operation 14076, the object (e.g., virtual object 11002) is translated based on the first portion of the user input (e.g., as described with reference to fig. 14K-14M). At operation 14078, the object rotation threshold is increased (e.g., from RT to RT ', as described with reference to fig. 14M) and the object scaling threshold is increased (e.g., from ST to ST', as described with reference to fig. 14M). Flow proceeds from operation 14078 to operation 14100 of fig. 14AC, as indicated at B.
At operation 14080, it is determined whether the movement of one or more contacts (e.g., at locations corresponding to the virtual object 11002) increases above an object zoom threshold (e.g., a zoom threshold ST indicated by the zoom movement meter 14004). In accordance with a determination that the movement of the one or more contacts increases above the object scaling threshold (e.g., as described with reference to fig. 14G-14I), flow proceeds to operation 14082. In accordance with a determination that the movement of the one or more contacts has not increased above the object scaling threshold, flow proceeds to operation 14085.
At operation 14082, an object (e.g., virtual object 11002) is scaled based on the first portion of the user input (e.g., as described with reference to fig. 14G-14I). At operation 14084, the object rotation threshold is increased (e.g., from RT to RT ', as described with reference to fig. 14I) and the object translation threshold is increased (e.g., from TT to TT', as described with reference to fig. 14I). Flow proceeds from operation 14084 to operation 14114 of fig. 14AD, as indicated at C.
At operation 14085, additional portions of the user input including movement of the one or more contacts are detected. Flow proceeds from operation 14085 to operation 14066.
In fig. 14AB, at operation 14086, additional portions of the user input including movement of one or more contacts are detected. Flow proceeds from operation 14086 to operation 14088.
At operation 14088, it is determined whether the movement of the one or more contacts is a rotational movement. In accordance with a determination that the movement of the one or more contacts is a rotational movement, flow proceeds to operation 14090. In accordance with a determination that the movement of the one or more contacts is not a rotational movement, flow proceeds to operation 14092.
At operation 14090, an object (e.g., virtual object 11002) is rotated based on the additional portion of the user input (e.g., as described with reference to fig. 14D-14E). Since the rotation threshold has been previously met, the object is free to rotate according to the further rotational input.
At operation 14092, it is determined whether the movement of the one or more contacts has increased above an increased object translation threshold (e.g., translation threshold TT' as indicated by translation movement meter 14002 in fig. 14D). In accordance with a determination that the movement of the one or more contacts increases above the increased object translation threshold, flow proceeds to operation 14094. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object translation threshold, flow proceeds to operation 14096.
At operation 14094, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input.
At operation 14096, a determination is made as to whether the movement of the one or more contacts increases above an increased object zoom threshold (e.g., zoom threshold ST' indicated by zoom movement meter 14004 in fig. 14D). In accordance with a determination that the movement of the one or more contacts increases above the increased object scaling threshold, flow proceeds to operation 14098. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object scaling threshold, flow returns to operation 14086.
At operation 14098, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.
In FIG. 14AC, at operation 14100, additional portions of the user input including movement of the one or more contacts are detected. Flow proceeds from operation 14100 to operation 14102.
At operation 14102, it is determined whether the movement of the one or more contacts is a translational movement. In accordance with a determination that the movement of the one or more contacts is a translational movement, flow proceeds to operation 14104. In accordance with a determination that the movement of the one or more contacts is not a translational movement, flow proceeds to operation 14106.
At operation 14104, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input. Since the translation threshold has been previously met, the object is free to translate according to the further translation input.
At operation 14106, it is determined whether the movement of the one or more contacts increases above an increased object rotation threshold (e.g., rotation threshold RT' as indicated by rotational movement meter 14006 in fig. 14M). In accordance with a determination that the movement of the one or more contacts increases above the increased object rotation threshold, flow proceeds to operation 14108. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object rotation threshold, flow proceeds to operation 14110.
At operation 14108, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.
At operation 14110, it is determined whether the movement of the one or more contacts increases above an increased object zoom threshold (e.g., zoom threshold ST' as indicated by zoom movement meter 14004 in fig. 14M). In accordance with a determination that the movement of the one or more contacts increases above the increased object scaling threshold, flow proceeds to operation 14112. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object scaling threshold, flow returns to operation 14100.
At operation 14112, an object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.
In fig. 14AD, at operation 14114, additional portions of the user input including movement of the one or more contacts are detected. Flow proceeds from operation 14114 to operation 14116.
At operation 14116, it is determined whether the movement of the one or more contacts is a zoom movement. In accordance with a determination that the movement of the one or more contacts is a zoom movement, flow proceeds to operation 14118. In accordance with a determination that the movement of the one or more contacts is not a zoom movement, flow proceeds to operation 14120.
At operation 14118, an object (e.g., virtual object 11002) is scaled based on the additional portion of the user input. Since the zoom threshold has been previously met, the object is free to zoom according to the further zoom input.
At operation 14120, it is determined whether the movement of the one or more contacts has increased above an increased object rotation threshold (e.g., a rotation threshold RT' indicated by a rotational movement meter 14006 in fig. 14I). In accordance with a determination that the movement of the one or more contacts has increased above the increased object rotation threshold, flow proceeds to operation 14122. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object rotation threshold, flow proceeds to operation 14124.
At operation 14122, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.
At operation 14124, it is determined whether the movement of the one or more contacts has increased above an increased object translation threshold (e.g., translation threshold TT' as indicated by translation movement meter 14002 in fig. 14I). In accordance with a determination that the movement of the one or more contacts increases above the increased object translation threshold, flow proceeds to operation 14126. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object translation threshold, flow proceeds to operation 14114.
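The threshold logic of the flow described above can be summarized in the following sketch, using assumed numeric thresholds: the first manipulation whose movement exceeds its base threshold is performed and becomes free thereafter, the thresholds of the remaining manipulations are raised, and each remaining manipulation is performed only once its raised threshold is also exceeded. This is a simplified model of the described flow, not the claimed implementation.

enum Manipulation: CaseIterable { case translate, scale, rotate }

struct ManipulationGate {
    // Assumed base and raised thresholds, in arbitrary movement units.
    var thresholds: [Manipulation: Double] = [.translate: 10, .scale: 10, .rotate: 12]
    let raisedThreshold: Double = 30
    var unlocked: Set<Manipulation> = []

    // Returns the manipulations to apply for one portion of the user input.
    mutating func process(movement: [Manipulation: Double]) -> Set<Manipulation> {
        var applied: Set<Manipulation> = []
        for kind in Manipulation.allCases {
            let magnitude = movement[kind] ?? 0
            if unlocked.contains(kind) {
                if magnitude > 0 { applied.insert(kind) }      // already free to move
            } else if magnitude > thresholds[kind]! {
                applied.insert(kind)
                unlocked.insert(kind)
                // Raise the thresholds of the other, still-locked manipulations.
                for other in Manipulation.allCases where !unlocked.contains(other) {
                    thresholds[other] = raisedThreshold
                }
            }
        }
        return applied
    }
}

var gate = ManipulationGate()
print(gate.process(movement: [.translate: 15]))             // translation unlocked; other thresholds raised
print(gate.process(movement: [.translate: 2, .scale: 15]))  // translation applied; 15 is below the raised scale threshold
print(gate.process(movement: [.scale: 35]))                 // 35 exceeds the raised threshold, so scaling is applied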
Figs. 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of the device has moved a virtual object outside the displayed field of view of one or more cameras of the device. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A to 8E, 9A to 9D, 10A to 10D, 16A to 16G, 17A to 17D, 18A to 18I, 19A to 19H, and 20A to 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a representative point corresponding to the finger or stylus contact (e.g., a center of gravity of or a point associated with the respective contact), or a center of gravity of two or more contacts detected on touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451, in response to detecting a contact on the touch-sensitive surface 451 while displaying the user interface on the display 450 and the focus selector shown in the figures.
Figs. 15A-15AI illustrate user interfaces and device operations that occur when the accessibility feature is active. In some embodiments, the accessibility features include modes in which device features can be accessed using a reduced number of inputs or alternative inputs (e.g., so that device features are more easily accessible to users with a limited ability to provide the input gestures described above). For example, the accessibility mode is a toggle control mode in which a first input gesture (e.g., a swipe input) is used to advance or reverse through the available device operations, and a selection input (e.g., a double-tap input) is used to perform the currently indicated operation. As the user interacts with the device, audio alerts are generated (e.g., to provide feedback to the user indicating that an operation has been performed, to indicate a current display state of the virtual object 11002 relative to the staging user interface or the field of view of one or more cameras of the device, etc.).
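A minimal sketch of the accessibility interaction model described above, with hypothetical operation names: a swipe advances (or reverses) through the available operations, a double tap performs the currently indicated operation, and each step produces an audio announcement.

enum AccessibleOperation: CaseIterable {
    case share, tiltUp, tiltDown, rotateClockwise, rotateCounterclockwise, scale, back, toggleView
}

struct SwitchControl {
    var index = 0
    let operations = AccessibleOperation.allCases
    var announce: (String) -> Void = { print("audio alert: \($0)") }

    var current: AccessibleOperation { operations[index] }

    mutating func swipe(forward: Bool) {
        index = (index + (forward ? 1 : operations.count - 1)) % operations.count
        announce("Selected: \(current)")
    }

    func doubleTap(perform: (AccessibleOperation) -> Void) {
        perform(current)
        announce("Performed: \(current)")
    }
}

var control = SwitchControl()
control.swipe(forward: true)       // "audio alert: Selected: tiltUp"
control.doubleTap { operation in
    // perform the currently indicated operation, e.g., tilt the displayed object upward
    _ = operation
}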
In fig. 15A, the instant messaging user interface 5008 includes a two-dimensional representation of the three-dimensional virtual object 11002. A selection cursor 15001 is shown surrounding the three-dimensional virtual object 11002 (e.g., to indicate that the currently selected operation is an operation to be performed on the virtual object 11002). An input (e.g., a double-tap input) by the contact 15002 for performing the currently indicated operation (e.g., displaying a three-dimensional representation of the virtual object 11002 in the staging user interface 6010) is detected. In response to the input, the display of the staging user interface 6010 replaces the display of the instant messaging user interface 5008, as shown in fig. 15B.
In fig. 15B, a virtual object 11002 is displayed in the staging user interface 6010. An audio alert is generated (e.g., through the device speaker 111) as indicated at 15008 to indicate the status of the device. For example, the audio alert 15008 includes a notification as indicated at 15010: "the chair is now shown in the staging view".
In fig. 15B, a selection cursor 15001 is shown surrounding the sharing control 6020 (e.g., to indicate that the currently selected operation is a sharing operation). An input by contact 15004 (e.g., a rightward swipe along the path indicated by arrow 15006) is detected. In response to the input, the selected operation proceeds to the next operation.
In fig. 15C, a tilt-up control 15012 is displayed (e.g., to indicate that the currently selected operation is an operation for tilting the displayed virtual object 11002 upward). An audio alert is generated as indicated at 15014 to indicate the status of the device. For example, the audio alert includes a notification as indicated at 15016: "Selected: tilt up button." An input through the contact 15018 is detected (e.g., a rightward swipe along the path indicated by arrow 15020). In response to the input, the selected operation proceeds to the next operation.
In fig. 15D, a tilt down control 15022 is displayed (e.g., to indicate that the currently selected operation is an operation for tilting the displayed virtual object 11002 downward). An audio alert is generated as indicated at 15024 to indicate the status of the device. For example, the audio alert includes a notification as indicated at 15026: "Selected: tilt down button." An input (e.g., a double-tap input) by the contact 15028 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is tilted downward in the staging view).
In fig. 15E, the virtual object 11002 is tilted downward in the staging view. An audio alert is generated as indicated at 15030 to indicate the status of the device. For example, the audio alert includes a notification as indicated at 15032: "The chair is tilted downward by 5 degrees. The chair is now tilted 10 degrees toward the screen."
In fig. 15F, an input by the contact 15034 is detected (e.g., a rightward swipe along the path indicated by the arrow 15036). In response to the input, the selected operation proceeds to the next operation.
In fig. 15G, a clockwise rotation control 15038 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating the displayed virtual object 11002 clockwise). The audio alert 15040 includes a notification as indicated at 15042: "Selected: rotate clockwise button." An input through the contact 15044 is detected (e.g., a rightward swipe along the path indicated by arrow 15046). In response to the input, the selected operation proceeds to the next operation.
In fig. 15H, a counterclockwise rotation control 15048 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating the displayed virtual object 11002 counterclockwise). The audio alert 15050 includes a notification as indicated at 15052: "Selected: rotate counterclockwise button." An input (e.g., a double-tap input) by the contact 15054 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is rotated counterclockwise in the staging view, as indicated in fig. 15I).
In fig. 15I, the audio alert 15056 includes a notification as indicated at 15058: "The chair has rotated 5 degrees counterclockwise. The chair is now rotated 5 degrees away from the screen."
In fig. 15J, an input by the contact 15060 is detected (e.g., a rightward swipe along the path indicated by the arrow 15062). In response to the input, the selected operation proceeds to the next operation.
In fig. 15K, a zoom control 15064 is displayed (e.g., to indicate that the currently selected operation is an operation for scaling the displayed virtual object 11002). The audio alert 15066 includes a notification as indicated at 15068: "Scale: adjustable." The keyword "adjustable" along with the control name in the notification indicates that a swipe input (e.g., a vertical swipe input) is available to operate the control. For example, an upward swipe input is provided as the contact 15070 moves upward along the path indicated by arrow 15072. In response to the input, a scaling operation is performed (e.g., the size of the virtual object 11002 is increased, as indicated in fig. 15K to 15L).
In fig. 15L, the audio alert 15074 includes a notification as indicated at 15076: "The chair is now adjusted to 150% of the original size." An input for reducing the size of the virtual object 11002 (e.g., a swipe down input) is provided as the contact 5078 moves downward along the path indicated by arrow 5078. In response to the input, a scaling operation is performed (e.g., the size of the virtual object 11002 is reduced, as indicated in fig. 15L to 15M).
In fig. 15M, the audio alert 15082 includes a notification as indicated at 15084: "The chair is now adjusted to 100% of the original size." As the virtual object 11002 is resized to the size at which it was initially displayed in the staging view 6010, a tactile output (as indicated at 15086) is generated (e.g., to provide feedback indicating that the virtual object 11002 has returned to its original size).
In fig. 15N, an input by the contact 15088 is detected (e.g., a rightward swipe along the path indicated by arrow 15090). In response to the input, the selection advances to the next operation.
In fig. 15O, a selection cursor 15001 is shown surrounding the back control 6016 (e.g., to indicate that the currently selected operation is an operation for returning to the previous user interface). The audio alert 15092 includes a notification as indicated at 15094: "Selected: back button." An input by the contact 15096 is detected (e.g., a rightward swipe along the path indicated by arrow 15098). In response to the input, the selection advances to the next operation.
In fig. 15P, a selection cursor 15001 is shown surrounding the toggle control 6018 (e.g., to indicate that the currently selected operation is an operation for switching between displaying the staging user interface 6010 and displaying a user interface that includes the field of view 6036 of the camera). The audio alert 15098 includes a notification as indicated at 50100: "Selected: world view/staging view toggle." An input (e.g., a double-tap input) by the contact 15102 is detected. In response to the input, display of the user interface that includes the field of view 6036 of the camera replaces display of the staging user interface 6010 (as indicated in fig. 15Q).
Fig. 15Q-15T illustrate a calibration sequence that occurs when the field of view 6036 of the camera is displayed (e.g., because a plane corresponding to the virtual object 11002 has not been detected in the field of view 6036 of the camera). During the calibration sequence, a semi-transparent representation of the virtual object 11002 is displayed, the field of view 6036 of the camera is blurred, and a prompt including an animated image (including the representation 12004 of the device 100 and the representation 12010 of a plane) is displayed to prompt the user to move the device. In fig. 15Q, the audio alert 15102 includes a notification as indicated at 50104: "Move the device to detect a plane." From fig. 15Q to fig. 15R, the device 100 moves relative to the physical environment 5002 (e.g., as indicated by the changing position of the table 5004 in the field of view 6036 of the camera). As a result of the detected movement of the device 100, the calibration user interface object 12014 is displayed, as indicated in fig. 15S.
In fig. 15S, the audio alert 15106 includes a notification as indicated at 50108: "Move the device to detect a plane." In fig. 15S-15T, the calibration user interface object 12014 rotates as the device 100 moves relative to the physical environment 5002 (e.g., as indicated by the changing position of the table 5004 in the field of view 6036 of the camera). In fig. 15T, sufficient movement has occurred to detect a plane corresponding to the virtual object 11002 in the field of view 6036 of the camera, and the audio alert 15110 includes a notification as indicated at 50112: "Plane detected." In fig. 15U to 15V, the translucency of the virtual object 11002 is reduced, and the virtual object 11002 is placed on the detected plane.
In fig. 15V, the audio alert 15114 includes a notification as indicated at 50116: "The chair is now projected in the world, 100% visible, occupying 10% of the screen." The tactile output generator generates a tactile output (as indicated at 15118) indicating that the virtual object 11002 has been placed on the plane. The virtual object 11002 is shown at a fixed position relative to the physical environment 5002.
In fig. 15V-15W, the device 100 moves relative to the physical environment 5002 (e.g., as indicated by the changed position of the table 5004 in the field of view 6036 of the camera) such that the virtual object 11002 is no longer visible in the field of view 6036 of the camera. As the virtual object 11002 moves out of the camera's field of view 6036, the audio alert 15122 includes a notification as indicated at 50124: "The chair is not on the screen."
In fig. 15W-15X, the device 100 has moved relative to the physical environment 5002 such that in fig. 15X the virtual object 11002 is again visible in the field of view 6036 of the camera. As the virtual object 11002 moves into the field of view 6036 of the camera, an audio alert 15118 is generated that includes a notification as indicated at 50120: "The chair is now projected in the world, 100% visible, occupying 10% of the screen."
In fig. 15X-15Y, the device 100 has moved relative to the physical environment 5002 (e.g., such that in fig. 15Y the device 100 is "closer" to the virtual object 11002 as projected in the field of view 6036 of the camera, and the virtual object 11002 is only partially visible in the field of view 6036 of the camera). As the virtual object 11002 moves partially out of the camera's field of view 6036, the audio alert 15126 includes a notification as indicated at 50128: "Chair 90% visible, occupies 20% of the screen."
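The audio alerts in figs. 15V-15Y report two quantities: how much of the virtual object is visible and how much of the screen it occupies. The patent does not give a formula; the following Swift sketch shows one plausible way to derive both values from the object's projected bounding box. All type and function names here are illustrative assumptions, not part of the described device.

```swift
import CoreGraphics

/// Illustrative sketch: derives the two quantities spoken in the audio alerts from the
/// object's projected bounds and the screen bounds.
struct ObjectVisibility {
    let percentVisible: Int     // portion of the object's projected area that is on screen
    let percentOfScreen: Int    // portion of the screen covered by the visible part of the object
}

func visibility(projectedObjectBounds: CGRect, screenBounds: CGRect) -> ObjectVisibility {
    let visiblePart = projectedObjectBounds.intersection(screenBounds)
    let objectArea = projectedObjectBounds.width * projectedObjectBounds.height
    let visibleArea = visiblePart.isNull ? 0 : visiblePart.width * visiblePart.height
    let screenArea = screenBounds.width * screenBounds.height
    guard objectArea > 0, screenArea > 0 else {
        return ObjectVisibility(percentVisible: 0, percentOfScreen: 0)
    }
    return ObjectVisibility(
        percentVisible: Int((visibleArea / objectArea * 100).rounded()),
        percentOfScreen: Int((visibleArea / screenArea * 100).rounded())
    )
}

// Example: an object whose projection extends partly past the right edge of the screen.
let v = visibility(projectedObjectBounds: CGRect(x: 300, y: 200, width: 200, height: 300),
                   screenBounds: CGRect(x: 0, y: 0, width: 390, height: 844))
// v.percentVisible == 45, v.percentOfScreen == 8 for these example rectangles
```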
In some embodiments, an input provided at a location corresponding to the virtual object 11002 causes an audio message to be provided that includes verbal information about the virtual object 11002. In contrast, when an input is provided at a location away from both the virtual object 11002 and the controls, no audio message including verbal information about the virtual object 11002 is provided. In fig. 15Z, an audio output 15130 (e.g., a "click" or "buzz") occurs, indicating that the contact 15132 was detected at a location that does not correspond to the location of a control or of the virtual object 11002 in the user interface. In fig. 15AA, an input by the contact 15134 is detected at a location corresponding to the location of the virtual object 11002. In response to the input, an audio alert 15136 corresponding to the virtual object 11002 (e.g., indicating the state of the virtual object 11002) is generated that includes a notification as indicated at 50138: "Chair 90% visible, occupies 20% of the screen."
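As one illustration of the behavior just described (not the actual implementation), the Swift sketch below routes a tap to different feedback depending on whether it lands on a control, on the virtual object, or on empty space. The HitTarget type and handler name are assumptions, and the VoiceOver announcement API is only one possible way to speak the notifications.

```swift
import UIKit

/// Minimal sketch, under assumptions: choose the feedback for a tap based on what lies
/// under the touch location.
enum HitTarget {
    case control(name: String)
    case virtualObject(status: String)   // e.g. "Chair 90% visible, occupies 20% of the screen"
    case emptySpace
}

func handleAccessibilityTap(on target: HitTarget) {
    switch target {
    case .control(let name):
        // Speak the name of the selected control.
        UIAccessibility.post(notification: .announcement, argument: "Selected: \(name)")
    case .virtualObject(let status):
        // Speak verbal information about the virtual object itself.
        UIAccessibility.post(notification: .announcement, argument: status)
    case .emptySpace:
        // The description above calls for a brief non-speech sound (a "click" or "buzz") here;
        // how that sound is produced is left unspecified.
        break
    }
}
```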
Fig. 15AB to 15AI show inputs for selecting and performing an operation in the toggle control mode when a user interface including a field of view 6036 of a camera is displayed.
In fig. 15AB, an input by the contact 15140 is detected (e.g., a rightward swipe along the path indicated by arrow 15142). In response to the input, an operation is selected, as indicated in fig. 15AC.
In fig. 15AC, a right lateral movement control 15144 is displayed (e.g., to indicate that the currently selected operation is an operation for moving the virtual object 11002 to the right). The audio alert 15146 includes a notification as indicated at 15148: "Selected: move right button." An input (e.g., a double-tap input) by the contact 15150 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is moved to the right in the field of view 6036 of the camera, as indicated in fig. 15AD).
In fig. 15AD, the movement of the virtual object 11002 is reported by an audio alert 15152, which includes a notification as indicated at 15154: "Chair 100% visible, occupying 30% of the screen."
In fig. 15AE, an input by the contact 15156 is detected (e.g., a rightward swipe along the path indicated by arrow 15158). In response to the input, the selection advances to the next operation.
In fig. 15AF, a left lateral movement control 15160 is displayed (e.g., to indicate that the currently selected operation is an operation for moving the virtual object 11002 to the left). The audio alert 15162 includes a notification as indicated at 15164: "Selected: move left button." An input by the contact 15166 is detected (e.g., a rightward swipe along the path indicated by arrow 15168). In response to the input, the selection advances to the next operation.
In fig. 15AG, a clockwise rotation control 15170 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating the virtual object 11002 clockwise). The audio alert 15172 includes a notification as indicated at 15174: "Selected: rotate clockwise." An input by the contact 15176 is detected (e.g., a rightward swipe along the path indicated by arrow 15178). In response to the input, the selection advances to the next operation.
In fig. 15AH, a counterclockwise rotation control 15180 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating the virtual object 11002 counterclockwise). The audio alert 15182 includes a notification as indicated at 15184: "Selected: rotate counterclockwise." An input (e.g., a double-tap input) by the contact 15186 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is rotated counterclockwise, as indicated in fig. 15AI).
In fig. 15AI, the audio alert 15190 includes a notification as indicated at 15164: "The chair rotated 5 degrees counterclockwise. The chair is now rotated zero degrees relative to the screen."
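The swipe-to-advance, double-tap-to-perform pattern shown in figs. 15AB-15AI can be summarized by a small state machine. The Swift sketch below is illustrative only; the operation list, names, and announcement strings are assumptions rather than the device's actual code.

```swift
/// Illustrative sketch of the interaction pattern: a rightward swipe advances the currently
/// selected operation, and a double tap performs it.
enum ObjectOperation: CaseIterable {
    case moveRight, moveLeft, rotateClockwise, rotateCounterclockwise

    var spokenName: String {
        switch self {
        case .moveRight: return "move right button"
        case .moveLeft: return "move left button"
        case .rotateClockwise: return "rotate clockwise"
        case .rotateCounterclockwise: return "rotate counterclockwise"
        }
    }
}

struct OperationSelector {
    private(set) var index = 0
    var current: ObjectOperation { ObjectOperation.allCases[index] }

    // Rightward swipe: advance the selection and announce it.
    mutating func advance(announce: (String) -> Void) {
        index = (index + 1) % ObjectOperation.allCases.count
        announce("Selected: \(current.spokenName)")
    }

    // Double tap: perform the currently selected operation.
    func perform(_ apply: (ObjectOperation) -> Void) {
        apply(current)
    }
}

// Usage: swipe right twice, then double tap to perform the selected rotation.
var selector = OperationSelector()
selector.advance { print($0) }   // "Selected: move left button"
selector.advance { print($0) }   // "Selected: rotate clockwise"
selector.perform { op in print("Performing \(op)") }
```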
In some implementations, reflections are generated on at least one surface (e.g., an underside surface) of an object (e.g., virtual object 11002). Reflections are generated using image data captured by one or more cameras of device 100. For example, the reflection is based on at least a portion of the captured image data (e.g., an image, a set of images, and/or video) corresponding to a horizontal plane (e.g., floor plane 5038) detected in the field of view 6036 of the one or more cameras. In some embodiments, generating the reflection includes generating a spherical model that includes the captured image data (e.g., by mapping the captured image data onto a model of a virtual sphere).
In some embodiments, the reflection generated on the surface of the object includes a reflection gradient (e.g., such that portions of the surface closer to the plane have a higher reflectivity magnitude than portions of the surface further from the plane). In some implementations, the reflectivity magnitude of the reflection generated on the surface of the object is based on the reflectivity value of the texture corresponding to the surface. For example, no reflection is generated at the non-reflective portion of the surface.
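A minimal sketch of the two ideas above, assuming an equirectangular lookup for the spherical model of the captured image data and a linear falloff for the reflection gradient; none of this is taken from the patent itself, and the function names and parameters are illustrative.

```swift
import Foundation
import simd

/// Maps a reflection direction onto [0, 1] texture coordinates of an equirectangular image,
/// one common way to sample an environment that has been wrapped onto a sphere.
func equirectangularUV(for reflectionDirection: simd_float3) -> simd_float2 {
    let d = simd_normalize(reflectionDirection)
    let u = 0.5 + atan2f(d.z, d.x) / (2 * .pi)
    let v = 0.5 - asinf(d.y) / .pi
    return simd_float2(u, v)
}

/// Strength of the reflection at a point on the object's surface: portions closer to the plane
/// reflect more strongly, and non-reflective texture regions produce no reflection at all.
func reflectionMagnitude(heightAbovePlane: Float,
                         gradientFalloff: Float,     // height at which the reflection fades out
                         textureReflectivity: Float) -> Float {
    let gradient = max(0, 1 - heightAbovePlane / gradientFalloff)
    return gradient * textureReflectivity
}
```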
In some embodiments, the reflection is adjusted over time. For example, the reflection is adjusted upon receiving an input to move and/or zoom the object (e.g., the reflection of the object is adjusted to a portion of the reflection plane at a location corresponding to the object as the object moves). In some embodiments, the reflection is not adjusted while rotating the object (e.g., about the z-axis).
In some implementations, no reflection is produced on the surface of the object before the object is displayed at the determined location (e.g., on a plane corresponding to the object that is detected in the field of view 6036 of the camera). For example, when a semi-transparent representation of the object is displayed (e.g., as described with reference to fig. 11G-11H), and/or when calibration is performed (e.g., as described with reference to fig. 12B-12I), no reflection is generated on the surface of the object.
In some implementations, reflections of objects are generated on one or more planes detected in the field of view 6036 of the camera. In some implementations, no reflections of the object are generated in the field of view 6036 of the camera.
Fig. 16A-16G are flow diagrams illustrating a method 16000 of displaying virtual objects using different visual attributes in a user interface including a field of view of one or more cameras according to whether object placement criteria are satisfied. Method 16000 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface or a touch-screen display that acts as both a display generation component and a touch-sensitive surface), and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface). In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 16000 are optionally combined, and/or the order of some operations is optionally changed.
The device receives (16002) (e.g., while displaying a staging user interface including a movable representation of a virtual object, and prior to displaying a field of view of a camera) a request to display a virtual object (e.g., a representation of a three-dimensional model) in a first user interface area (e.g., an augmented reality viewer interface), wherein the first user interface area includes at least a portion of the field of view of one or more cameras (e.g., the request is an input by a contact detected on a representation of the virtual object on a touchscreen display or by a contact detected on an affordance displayed concurrently with the representation of the virtual object (e.g., a tap on an "AR view" or "world view" button), wherein the affordance is configured to trigger display of an AR view when invoked by the first contact). For example, the request is an input to display the virtual object 11002 in the field of view 6036 of one or more cameras, as described with reference to fig. 11F.
In response to the request to display the virtual object in the first user interface area (e.g., a request to display the virtual object in a view of the physical environment surrounding the device), the device displays (16004), via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface area (e.g., the field of view of the one or more cameras is displayed in response to the request to display the virtual object in the first user interface area), wherein the field of view of the one or more cameras is a view of the physical environment in which the one or more cameras are located. For example, as described with reference to fig. 11G, the virtual object 11002 is displayed in the field of view 6036 of the one or more cameras, which is a view of the physical environment 5002 in which the one or more cameras are located. Displaying the representation of the virtual object includes: in accordance with a determination that object placement criteria have not been met, wherein the object placement criteria require that a placement location (e.g., a plane) for the virtual object be identifiable in the field of view of the one or more cameras in order for the object placement criteria to be met (e.g., when the device has not yet identified a location or plane for placing the virtual object relative to the field of view of the one or more cameras in the first user interface area (e.g., plane recognition is still in progress, or insufficient image data exists to recognize a plane)), displaying the representation of the virtual object with a first set of visual attributes (e.g., at a first translucency level, or a first brightness level, or a first saturation level, etc.) and with a first orientation that is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras (e.g., the virtual object floats over the field of view of the camera with an orientation relative to a predefined plane that is independent of the physical environment (e.g., an orientation set in the staging view) and independent of changes occurring in the field of view of the camera (e.g., changes due to movement of the device relative to the physical environment)). For example, in fig. 11G to 11H, a translucent version of the virtual object 11002 is displayed because a placement location for the virtual object 11002 has not yet been identified in the field of view 6036 of the camera. As the device moves (as shown from fig. 11G to fig. 11H), the orientation of the virtual object 11002 does not change. In some embodiments, the object placement criteria include a requirement that the field of view be stable and provide a static view of the physical environment (e.g., the camera has moved less than a threshold amount during at least a threshold amount of time, and/or at least a predetermined amount of time has passed since the request was received, and/or the camera has been calibrated for plane detection if the device has previously moved sufficiently).
In accordance with a determination that the object placement criteria are satisfied (e.g., the object placement criteria are satisfied when the device has identified a location or plane for placing the virtual object relative to the field of view of the one or more cameras in the first user interface area), the device displays a representation of the virtual object having a second set of visual attributes (e.g., at a second translucency level, or a second brightness level, or a second saturation level, etc.) and having a second orientation, the second set of visual attributes being different from the first set of visual attributes, and the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras. For example, in fig. 11I, a non-translucent version of the virtual object 11002 is displayed because the placement location of the virtual object 11002 (e.g., the plane corresponding to the floor surface 5038 in the physical environment 5002) has been identified in the field of view 6036 of the camera. The orientation of the virtual object 11002 (e.g., its position on the touch screen display 112) has changed from the first orientation shown in fig. 11H to the second orientation shown in fig. 11I. As the device moves (as shown from fig. 11I to 11J), the orientation of the virtual object 11002 changes (because the virtual object 11002 is now displayed in a fixed orientation relative to the physical environment 5002). Displaying the virtual object with either the first set of visual attributes or the second set of visual attributes, according to whether the object placement criteria are met, provides visual feedback to the user (e.g., to indicate that a request to display the virtual object has been received, but that additional time and/or calibration information is required to place the virtual object in the field of view of the one or more cameras). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and avoid attempting to provide input for manipulating the virtual object before the object is placed in the second orientation corresponding to the plane), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
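The branch between the first and second sets of visual attributes can be expressed compactly. The following Swift sketch is a paraphrase under assumptions (the field names, opacity values, and quaternion representation are all illustrative), not the device's actual rendering code.

```swift
import simd

/// Sketch only: the visual attributes used for the virtual object's representation before and
/// after the object placement criteria are met.
struct VisualAttributes {
    var opacity: Float          // first set: semi-transparent; second set: opaque
    var anchoredToPlane: Bool   // second set: orientation tied to a detected plane
    var orientation: simd_quatf
}

func attributesForCurrentState(placementCriteriaMet: Bool,
                               stagingOrientation: simd_quatf,
                               planeOrientation: simd_quatf?) -> VisualAttributes {
    if placementCriteriaMet, let planeOrientation = planeOrientation {
        // Second set of visual attributes: opaque, oriented relative to the detected plane.
        return VisualAttributes(opacity: 1.0, anchoredToPlane: true, orientation: planeOrientation)
    } else {
        // First set of visual attributes: semi-transparent, orientation independent of the
        // physical environment (carried over from the staging user interface).
        return VisualAttributes(opacity: 0.5, anchoredToPlane: false, orientation: stagingOrientation)
    }
}
```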
In some embodiments, while displaying the representation of the virtual object in the first set of visual attributes and the first orientation, the device detects (16006) that the object placement criteria are satisfied (e.g., identifies a plane for placing the virtual object when the virtual object is suspended in a semi-transparent state over a view of a physical environment surrounding the device). Detecting that the object placement criteria are satisfied without requiring further user input to initiate detecting the object placement criteria while displaying the virtual object in the first set of visual properties (e.g., in a semi-transparent state) reduces the number of inputs required for object placement. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the object placement criteria are satisfied, the device displays (16008), via the display generation component, an animated transition that shows the representation of the virtual object moving (e.g., rotating, zooming, translating, and/or combinations thereof) from a first orientation to a second orientation and changing from having a first set of visual properties to having a second set of visual properties. For example, once a plane for placing a virtual object is identified in the field of view of the camera, the virtual object is placed on the plane with its orientation, size, and translucency (among other things) visually adjusted. Displaying an animated transition from a first orientation to a second orientation (e.g., without requiring further user input to reorient the virtual object in the first user interface) reduces the amount of input required for object placement. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, detecting that the object placement criteria are met includes one or more of the following operations (16010): detecting that a plane has been identified in the field of view of the one or more cameras; detecting that movement between the device and the physical environment has been less than a threshold amount of movement for at least a threshold amount of time (e.g., resulting in a substantially stationary view of the physical environment in the field of view of the camera); and detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface area was received. Detecting that the object placement criteria are met (e.g., by detecting a plane in the field of view of the one or more cameras without requiring user input) reduces the number of inputs required for object placement. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
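A hedged sketch of how the listed placement criteria might be combined; the thresholds and property names are placeholders, since the description above does not specify concrete values.

```swift
import Foundation

/// Illustrative combination of the object placement criteria listed above.
struct PlacementCriteria {
    var planeDetected: Bool
    var recentMovement: Double                     // accumulated camera movement in the sampling window
    var timeSinceLastLargeMovement: TimeInterval
    var timeSinceDisplayRequest: TimeInterval

    // Placeholder thresholds (assumptions for the sketch).
    static let movementThreshold = 0.02
    static let stabilityDuration: TimeInterval = 0.5
    static let minimumDelay: TimeInterval = 0.3

    var isSatisfied: Bool {
        planeDetected
            && recentMovement < Self.movementThreshold
            && timeSinceLastLargeMovement >= Self.stabilityDuration
            && timeSinceDisplayRequest >= Self.minimumDelay
    }
}
```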
In some embodiments, while displaying a representation of a virtual object having a first set of visual properties and a first orientation on a first portion of a physical environment captured in a field of view of one or more cameras (e.g., a first portion of the physical environment that a user is visible through a translucent virtual object) (e.g., while the virtual object is hovering in a translucent state over a view of the physical environment surrounding the device), the device detects (16012) a first movement of the one or more cameras (e.g., a rotation and/or translation of the device relative to the physical environment surrounding the device). For example, in fig. 11G-11H, while displaying the semi-transparent representation of the virtual object 11002, one or more cameras move (as indicated, for example, by the changing position of the table 5004 in the field of view 6036 of the cameras). The walls and table of the physical environment captured in the camera's field of view 6036 and displayed in the user interface are visible through the semi-transparent virtual object 11002. In response to detecting the first movement of the one or more cameras, the device displays (16014) the virtual object having the first set of visual properties and the first orientation on a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment. For example, when a semi-transparent version of a virtual object is displayed hovering over a physical environment shown in the field of view of the camera, as the device moves relative to the physical environment, the view of the physical environment within the field of view of the camera (e.g., behind the semi-transparent virtual object) shifts and zooms. Thus, during device movement, the semi-transparent version of the virtual object becomes overlaid over different portions of the physical environment represented in the field of view as a result of the translation and scaling of the view of the physical environment within the field of view of the camera. For example, in fig. 11H, the field of view 6036 of the camera shows a second portion of the physical environment 5002 that is different from the first portion of the physical environment 5002 shown in fig. 11G. The orientation of the semi-transparent representation of the virtual object 11002 does not change with the movement of the one or more cameras that occurred in fig. 11G-11H. Displaying the virtual object having the first orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., to indicate that the virtual object has not been placed at a fixed position relative to the physical environment and thus does not move as the portion of the physical environment captured in the field of view of the one or more cameras changes in accordance with the movement of the one or more cameras). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide input for manipulating the virtual object before placing the object in the second orientation corresponding to the plane), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device detects (16016) a second movement (e.g., rotation and/or translation of the device relative to the physical environment surrounding the device) of the one or more cameras while displaying a representation of the virtual object having a second set of visual attributes and a second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras (e.g., a direct view of the third portion of the physical environment (e.g., a portion of a detected plane supporting the virtual object) is blocked by the virtual object) (e.g., after the object placement criteria have been met and the virtual object has been placed on the detected plane in the physical environment in the field of view of the cameras). For example, in fig. 11I-11J, one or more cameras move (as indicated by, for example, a changed position of the table 5004 in the field of view 6036 of the cameras) while the non-translucent representation of the virtual object 11002 is displayed. In response to detecting the second movement of the device, the device maintains (16018) display of representations of virtual objects having the second set of visual properties and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras as the physical environment captured in the field of view of the one or more cameras moves (e.g., shifts and zooms) in accordance with the second movement of the device and the second orientation continues to correspond to a plane in the physical environment detected in the field of view of the one or more cameras. For example, after the non-translucent version of the virtual object falls at a stationary position on a plane detected in the physical environment shown in the field of view of the camera, the position and orientation of the virtual object is fixed relative to the physical environment within the field of view of the camera, and as the device moves relative to the physical environment, the virtual object will shift and zoom with the physical environment in the field of view of the camera (e.g., when movement of one or more cameras occurs in fig. 11I-11J, the non-translucent representation of the virtual object 11002 remains fixed in an orientation relative to a floor plane in the physical environment 5002). Maintaining the display of the virtual object in the second orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., to indicate that the virtual object has been placed at a fixed position relative to the physical environment and thus moved as the portion of the physical environment captured in the field of view of the one or more cameras changes in accordance with the movement of the one or more cameras). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input for virtual objects that have been placed in the second orientation corresponding to the plane), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that the object placement criteria are satisfied (e.g., the object placement criteria are satisfied when the device has identified a location or plane for placing the virtual object relative to the field of view of the one or more cameras in the first user interface area), the device generates (16020) a tactile output (e.g., using one or more tactile output generators of the device) in conjunction with displaying the representation of the virtual object having the second set of visual attributes (e.g., at a reduced level of translucency, or a higher level of brightness, or a higher level of saturation, etc.) and having the second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras (e.g., generation of the tactile output is synchronized with completion of the transition to the non-translucent appearance of the virtual object and completion of the rotation and translation of the virtual object to the landing position resting on the plane detected in the physical environment). For example, as shown in fig. 11I, a tactile output is generated as indicated at 11010 in conjunction with displaying a non-translucent representation of the virtual object 11002 attached to a plane (e.g., floor surface 5038) corresponding to the virtual object 11002. Generating a tactile output in accordance with a determination that the object placement criteria are satisfied provides improved haptic feedback to the user (e.g., indicating that an operation for placing the virtual object was successfully performed). Providing improved feedback to the user enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the object placement criteria have been met without cluttering the user interface with displayed information) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
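On iOS, one plausible way to produce the tactile output described above is UIKit's feedback generator; the class and method names below, other than the UIImpactFeedbackGenerator calls themselves, are assumptions, and the timing hooks are illustrative rather than the device's actual code.

```swift
import UIKit

/// Sketch: fire a tactile output when the placement animation completes, i.e. when the object
/// settles on the detected plane with its second set of visual attributes.
final class PlacementFeedback {
    private let generator = UIImpactFeedbackGenerator(style: .medium)

    func objectWillBePlaced() {
        // Preparing reduces latency so the haptic lands in sync with the visual change.
        generator.prepare()
    }

    func objectDidLandOnPlane() {
        generator.impactOccurred()
    }
}
```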
In some embodiments, in displaying the representation of the virtual object having the second set of visual attributes and having the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, the device receives (16022) an update regarding at least the position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras (e.g., the updated plane position and orientation is a result of a more accurate calculation or a more time consuming calculation method (e.g., less approximation, etc.) based on additional data accumulated after the initial plane detection results were used to place the virtual object). In response to receiving an update regarding at least a position or orientation of a plane in the physical environment detected in the field of view of the one or more cameras, the device adjusts (16024) at least the position and/or orientation of the representation of the virtual object in accordance with the update (e.g., gradually moving (e.g., translating and rotating) the virtual object closer to the updated plane). Adjusting the position and/or orientation of the virtual object in response to receiving the update with respect to the plane in the physical environment (e.g., without requiring user input for placing the virtual object relative to the plane) reduces the amount of input required to adjust the virtual object. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
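A small sketch, under assumed math, of gradually easing the placed object toward an updated plane estimate rather than snapping to it; the blend factor and function signature are illustrative.

```swift
import simd

/// Interpolate position and orientation a small amount each frame so a refined plane estimate
/// is applied as a gradual correction.
func adjustedPose(currentPosition: simd_float3, currentOrientation: simd_quatf,
                  updatedPlanePosition: simd_float3, updatedPlaneOrientation: simd_quatf,
                  blend: Float = 0.1) -> (position: simd_float3, orientation: simd_quatf) {
    // Move a fraction of the remaining distance each update so the change appears gradual.
    let position = simd_mix(currentPosition, updatedPlanePosition, simd_float3(repeating: blend))
    let orientation = simd_slerp(currentOrientation, updatedPlaneOrientation, blend)
    return (position, orientation)
}
```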
In some embodiments, the first set of visual attributes includes (16026) a first size and a first level of translucency (e.g., the object has a fixed size and a fixed high level of translucency relative to the display prior to falling into the AR view) and the second set of visual attributes includes (16028) a second size different from the first size (e.g., once in the AR view, the object is displayed with a simulated physical size related to the size and a landing position in the physical environment) and a second level of translucency lower than (e.g., more opaque than) the first level of translucency (e.g., the object is no longer translucent in the AR view). For example, in fig. 11H, a semi-transparent representation of virtual object 11002 is shown having a first size, and in fig. 11I, a non-semi-transparent representation of virtual object 11004 is shown having a second (smaller) size. Displaying a virtual object having a first size and a first level of translucency or a second size and a second level of translucency depending on whether the object placement criteria are met provides visual feedback to the user (e.g., to indicate that a request to display the virtual object has been received, but additional time and/or calibration information is required to place the virtual object in the field of view of one or more cameras). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and avoiding attempting to provide input for manipulating the virtual object before placing the object in the second orientation corresponding to the plane), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, a request to display a virtual object in a first user interface area (e.g., AR view) that includes at least a portion of the field of view of one or more cameras is received (16030) while the virtual object is displayed in a respective user interface (e.g., a staging user interface) that does not include at least a portion of the field of view of the one or more cameras (e.g., the virtual object is oriented relative to a virtual gantry having an orientation that is independent of the physical environment of the apparatus). The first orientation corresponds to an orientation of the virtual object when the virtual object is displayed in the respective user interface upon receiving the request. For example, as described with reference to fig. 11F, while the staging user interface 6010 (which does not include the field of view of the camera) is displayed, a request is received to display the virtual object 11002 in a user interface that includes the field of view 6036 of the camera. The orientation of the virtual object 11002 in fig. 11G, where the virtual object 11002 is displayed in the user interface including the field of view 6036 of the camera, corresponds to the orientation of the virtual object 11002 in fig. 11F, where the virtual object 11002 is displayed in the staging user interface 6010. Displaying the virtual object in an orientation corresponding to the orientation of the virtual object when displayed in a (previously displayed) interface (e.g., a landing user interface) provides visual feedback to the user in a first user interface (e.g., a displayed augmented reality view) (e.g., to indicate that object manipulation input provided when the landing user interface is displayed may be used to establish the orientation of the object in the AR view). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and avoiding attempting to provide input for manipulating the virtual object before placing the object in the second orientation corresponding to the plane), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first orientation corresponds to (16032) a predefined orientation (e.g., a default orientation, such as an orientation displayed when the virtual object is initially displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras). Displaying the virtual object in the first user interface (e.g., the displayed augmented reality view) in the first set of visual attributes and in the predefined orientation reduces power usage and extends battery life of the device (e.g., by allowing display of a pre-generated semi-transparent representation of the virtual object instead of rendering the semi-transparent representation according to the orientation established in the staging user interface).
In some embodiments, while displaying the virtual object in the first user interface area (e.g., the AR view) with the second set of visual properties and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, the device detects (16034) a request (e.g., as a result of a zoom input, such as a pinch or spread gesture directed to the virtual object) to change the simulated physical dimension of the virtual object, relative to the physical environment captured in the field of view of the one or more cameras, from a first simulated physical dimension to a second simulated physical dimension (e.g., from 80% of the default dimension to 120% of the default dimension, or vice versa). For example, the input for reducing the simulated physical size of the virtual object 11002 is a pinch gesture, as described with reference to fig. 11N-11P. In response to detecting the request to change the simulated physical dimension of the virtual object, the device gradually changes (16036) the display size of the representation of the virtual object in the first user interface area in accordance with a gradual change in the simulated physical dimension of the virtual object from the first simulated physical dimension to the second simulated physical dimension (e.g., the display size of the virtual object grows or shrinks while the display size of the physical environment captured in the field of view of the one or more cameras remains unchanged); and, in the course of the gradual change in the display size of the representation of the virtual object in the first user interface area, in accordance with a determination that the simulated physical dimension of the virtual object has reached a predefined simulated physical dimension (e.g., 100% of the default dimension), the device generates a tactile output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension. For example, as described with reference to fig. 11N through 11P, the display size of the representation of the virtual object 11002 gradually decreases in response to the pinch gesture input. In fig. 11O, a tactile output as indicated at 11024 is generated when the displayed size of the representation of the virtual object 11002 reaches 100% of the size of the virtual object 11002 (e.g., the size of the virtual object 11002 when initially displayed in the user interface that includes the field of view 6036 of the one or more cameras, as indicated in fig. 11I). Generating the tactile output in accordance with the determination that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension provides feedback to the user (e.g., indicating that no further input is required to return the simulated dimension of the virtual object to the predefined dimension). Providing improved haptic feedback enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the predefined simulated physical dimension of the virtual object has been reached without cluttering the user interface with displayed information), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
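The crossing-100% haptic, along with the double-tap reset described in the next paragraph, can be sketched as follows. The scale bookkeeping and names are assumptions; only the UIImpactFeedbackGenerator calls are real UIKit API.

```swift
import UIKit

/// Illustrative sketch: track the object's simulated size during a pinch and emit a haptic
/// exactly when the size crosses the predefined value (100% here).
final class SimulatedSizeController {
    private(set) var scale: CGFloat = 1.0          // 1.0 == predefined simulated physical size
    private let haptics = UIImpactFeedbackGenerator(style: .light)

    func pinchChanged(to newScale: CGFloat) {
        let crossedPredefinedSize =
            (scale < 1.0 && newScale >= 1.0) || (scale > 1.0 && newScale <= 1.0)
        scale = newScale
        if crossedPredefinedSize {
            haptics.impactOccurred()               // feedback: the object is back at 100%
        }
    }

    /// Double tap on the object: snap back to the predefined simulated physical size.
    func resetToPredefinedSize() {
        if scale != 1.0 {
            scale = 1.0
            haptics.impactOccurred()
        }
    }
}
```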
In some embodiments, while the virtual object is displayed in a second simulated physical size of the virtual object in the first user interface area (e.g., AR view) that is different from the predefined simulated physical size (e.g., 120% of the default size, or 80% of the default size as a result of a zoom input (e.g., a pinch or expand gesture directed to the virtual object), the device detects (16038) a request to return the virtual object to the predefined simulated physical size (e.g., detects a tap or double tap on the touchscreen (e.g., on the virtual object, or alternatively, outside the virtual object)). For example, after the pinch input has caused the size of the virtual object 11002 to be reduced (as described with reference to fig. 11N to 11P), the double-click input is detected at the position corresponding to the virtual object 11002 (as described with reference to fig. 11R). In response to detecting the request to return the virtual object to the predefined simulated physical dimension, the device changes (16040) the display size of the representation of the virtual object in the first user interface area in accordance with the change in the simulated physical dimension of the virtual object to the predefined simulated physical dimension (e.g., the display size of the virtual object grows or shrinks while the display size of the physical environment captured in the field of view of the one or more cameras remains unchanged). For example, in response to the double-click input described with reference to fig. 11R, the size of the virtual object 11002 returns to the size of the virtual object 11002 when displayed in fig. 11I (the size of the virtual object 11002 when initially displayed in a user interface that includes the field of view 6036 of one or more cameras). In some embodiments, in accordance with a determination that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension (e.g., 100% of the default dimension), the device generates a tactile output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension. Changing the display size of the virtual object to the predefined size in response to detecting the request to return the virtual object to the predefined simulated physical size (e.g., by providing an option to precisely resize the display to the predefined simulated physical size, rather than requiring the user to estimate when the input provided to resize the display is sufficient to cause the virtual object to be displayed at the predefined simulated physical size) reduces the amount of input required to display the object having the predefined size. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device selects a plane for setting a second orientation of the representation of the virtual object having the second set of visual attributes according to respective positions and orientations of the one or more cameras relative to the physical environment (e.g., a current position and orientation when the object placement criteria are satisfied), wherein selecting the plane comprises (16042): in accordance with a determination that the object placement criteria are satisfied (e.g., as a result of the device being pointed in a first direction in the physical environment) when the representation of the virtual object is displayed on a first portion of the physical environment captured in the field of view of the one or more cameras (e.g., the base of the translucent object overlaps a plane in the first portion of the physical environment), selecting a first plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with the proximity between the base of the object and the first plane on the display being greater, the proximity between the first plane and the first portion of the physical environment in the physical world being greater) as a plane for setting a second orientation of the representation of the virtual object having a second set of visual attributes; and in accordance with a determination that the object placement criteria are satisfied (e.g., as a result of the device being pointed in a second direction in the physical environment) when displaying the representation of the virtual object on a second portion of the physical environment captured in the field of view of the one or more cameras (e.g., the base of the translucent object overlaps a plane in the second portion of the physical environment), selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with the greater proximity on the display between the base of the object and the second plane, the greater proximity in the physical world between the second plane and the second portion of the physical environment) as a plane for setting a second orientation of the representation of the virtual object having the second set of visual attributes, wherein the first portion of the physical environment is different from the second portion of the physical environment and the first plane is different from the second plane. Selecting either the first plane or the second plane as the plane relative to which the virtual object is to be set (e.g., without requiring user input to specify which plane of the number of detected planes is to be the plane relative to which the virtual object is to be set) reduces the number of inputs required to select the planes. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
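A hedged sketch of the plane-selection idea above: among the detected planes, choose the one closest on screen to the base of the still-floating object. The distance metric and the DetectedPlane type are assumptions made for illustration.

```swift
import CoreGraphics

/// Illustrative representation of a detected plane's relevant region as projected on screen.
struct DetectedPlane {
    let id: Int
    let projectedCenter: CGPoint
}

private func squaredDistance(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
    let dx = a.x - b.x
    let dy = a.y - b.y
    return dx * dx + dy * dy
}

/// Pick the plane whose projection is nearest to the base of the virtual object on the display.
func selectPlane(for objectBaseOnScreen: CGPoint, among planes: [DetectedPlane]) -> DetectedPlane? {
    planes.min(by: {
        squaredDistance($0.projectedCenter, objectBaseOnScreen)
            < squaredDistance($1.projectedCenter, objectBaseOnScreen)
    })
}
```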
In some embodiments, the device displays (16044) a snapshot affordance (e.g., a camera shutter button) while displaying a virtual object having a second set of visual attributes and a second orientation in a first user interface area (e.g., an AR view). In response to activation of the snapshot affordance, the device captures (16046) a snapshot image including a current view of a representation of the virtual object, the representation of the virtual object being located at a placement position in the physical environment in the field of view of the one or more cameras, having a second set of visual attributes and a second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras. Displaying a snapshot affordance for capturing a snapshot image of a current view of an object reduces the number of inputs required to capture the snapshot image of the object. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device displays (16048) one or more control affordances (e.g., an affordance for switching back to a landing user interface, an affordance for exiting an AR viewer, an affordance for capturing a snapshot, etc.) in the first user interface area along with a representation of a virtual object having the second set of visual attributes. For example, in fig. 11J, a set of controls including a back control 6016, a toggle control 6018, and a share control 6020 is displayed. While displaying the one or more control affordances along with the representation of the virtual object having the second set of visual attributes, the device detects (16050) that the control fade criteria are satisfied (e.g., that user input has not been detected on the touch-sensitive surface for a threshold amount of time (e.g., with or without movement of the device and an update to the field of view of the camera)). In response to detecting that the control fade criteria are satisfied, the device stops (16052) displaying the one or more control affordances while continuing to display representations of virtual objects having the second set of visual attributes in a first user interface area that includes a field of view of the one or more cameras. For example, as described with reference to fig. 11K-11L, when no user input is detected for a threshold amount of time, the controls 6016, 6018, and 6020 gradually fade out and stop displaying. In some embodiments, after the control affordance fades away, a tap input on the touch-sensitive surface or interaction with the virtual object causes the device to simultaneously re-display the control affordance along with the representation of the virtual object in the first user interface area. Automatically stopping the display control in response to determining that the control fade criterion is satisfied reduces the number of inputs required to stop the display control. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
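The control fade criteria amount to tracking the time since the last touch input. The sketch below is illustrative; the 3-second threshold is an assumption, since the description only refers to "a threshold amount of time".

```swift
import Foundation

/// Sketch: hide the control affordances after a period with no touch input, and reset the
/// timer whenever the user interacts again.
final class ControlFadeMonitor {
    private let fadeDelay: TimeInterval
    private var lastInputTime = Date()

    init(fadeDelay: TimeInterval = 3.0) {   // threshold amount of time; value is an assumption
        self.fadeDelay = fadeDelay
    }

    func userDidInteract() {
        lastInputTime = Date()
    }

    /// Call periodically (e.g. once per frame); returns true when the controls should fade out.
    func fadeCriteriaSatisfied(now: Date = Date()) -> Bool {
        now.timeIntervalSince(lastInputTime) >= fadeDelay
    }
}
```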
In some embodiments, in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object over at least the portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied (e.g., because there is not a sufficient number of images from different viewing angles to generate size and spatial relationship data for the physical environment captured in the field of view of the one or more cameras), the device displays (16054) a prompt to the user to move the device relative to the physical environment (e.g., displays a visual prompt to move the device, and optionally displays a calibration user interface object (e.g., an elastic wireframe sphere or cube that moves in accordance with movement of the device) in the first user interface area (e.g., the calibration user interface object is overlaid on a blurred image of the field of view of the one or more cameras), as described in more detail below with reference to method 17000). Displaying a prompt to the user to move the device relative to the physical environment provides visual feedback to the user (e.g., to indicate that movement of the device is required to obtain information for placing the virtual object in the field of view of the camera). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration inputs), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order in which the operations in fig. 16A-16G are described is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000) also apply in a similar manner to method 16000 described above with respect to fig. 16A-16G. For example, the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described above with reference to method 16000 optionally has one or more of the features of the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000). For the sake of brevity, these details are not repeated here.
Fig. 17A-17D are flow diagrams illustrating a method 17000 of displaying a calibration user interface object that dynamically animates according to movement of one or more cameras of a device. The method 17000 is performed at an electronic device (e.g., the device 300 of fig. 3 or the portable multifunction device 100 of fig. 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface or a touch-screen display that serves as both a display generation component and a touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface), and one or more attitude sensors (e.g., an accelerometer, a gyroscope, and/or a magnetometer) for detecting changes in attitude (e.g., orientation (e.g., rotation, yaw, and/or tilt angle) and position relative to a surrounding physical environment) of the device including the one or more cameras. Some operations in the method 17000 are optionally combined, and/or the order of some operations is optionally changed.
The device receives (17002) a request to display an augmented reality view of a physical environment (e.g., a physical environment surrounding the device including one or more cameras) in a first user interface region that includes a representation of a field of view of the one or more cameras (e.g., the field of view captures at least a portion of the physical environment). In some embodiments, the request is a tap input detected on a button to switch from a staging view of the virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of a virtual object in a two-dimensional user interface. In some embodiments, the request is an activation of an augmented reality measurement application (e.g., a measurement application that facilitates measurement of a physical environment). For example, the request is a tap input detected at the switch 6018 to display the virtual object 11002 in the field of view 6036 of one or more cameras, as described with reference to fig. 12A.
In response to receiving a request to display an augmented reality view of a physical environment, the device displays (17004) a representation of the field of view of the one or more cameras (e.g., when the calibration criteria are not satisfied, the device displays a blurred version of the physical environment in the field of view of the one or more cameras). For example, the device displays a blurred representation of the field of view 6036 of one or more cameras, as shown in FIG. 12E-1. In accordance with a determination that the calibration criteria for the augmented reality view of the physical environment are not satisfied (e.g., because there is not a sufficient amount of image data (e.g., from different viewing angles) to generate size and spatial relationship data for the physical environment captured in the field of view of the one or more cameras, because no plane corresponding to the virtual object is detected in the field of view of the one or more cameras, and/or because there is not enough information to begin or continue plane detection based on image data available from the cameras), the device displays (e.g., via the display generation component, and in a first user interface region that includes a representation of the field of view of the one or more cameras (e.g., a blurred version of the field of view)) a calibration user interface object that is dynamically animated according to movement of the one or more cameras in the physical environment (e.g., a scan prompt object, such as an elastic cube or wireframe object). For example, in FIGS. 12E-1 through 12I-1, calibration user interface object 12014 is displayed. Animations of the calibration user interface object according to movement of the one or more cameras are described with reference to, for example, fig. 12E-1 through 12F-1. In some embodiments, upon receiving an initial portion of the input corresponding to the request to display the representation of the augmented reality view, the field of view of the one or more cameras is analyzed to detect one or more planes (e.g., floor, wall, table, etc.) occurring in the field of view of the one or more cameras. In some embodiments, the analysis occurs prior to receiving the request (e.g., while the virtual object is displayed in the staging view). Displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more pose sensors, a change in pose (e.g., a position and/or an orientation (e.g., rotation, tilt, yaw)) of the one or more cameras in the physical environment; and, in response to detecting a change in pose of the one or more cameras in the physical environment, adjusting at least one display parameter (e.g., orientation, size, rotation, or position on the display) of the calibration user interface object (e.g., a scan prompt object, such as an elastic cube or wireframe object) according to the detected change in pose of the one or more cameras in the physical environment. For example, fig. 12E-1 through 12F-1, which correspond to fig. 12E-2 through 12F-2, respectively, illustrate lateral movement of the device 100 relative to the physical environment 5002 and corresponding changes in the display field of view 6036 of one or more cameras of the device. In fig. 12E-2 through 12F-2, the calibration user interface object 12014 rotates in response to movement of one or more cameras.
The device detects (17006) that the calibration criteria are satisfied while displaying a calibration user interface object (e.g., a scan cue object, such as an elastic cube or wireframe object) that moves on the display according to detected changes in pose of the one or more cameras in the physical environment. For example, as described with reference to FIGS. 12E-12J, the device determines that the calibration criteria are satisfied in response to the movement of the device that occurs from FIG. 12E-1 to FIG. 12I-1.
In response to detecting that the calibration criteria are satisfied, the device stops (17008) displaying a calibration user interface object (e.g., a scan prompt object, such as an elastic cube or wireframe object). In some embodiments, after the device stops displaying the calibration user interface object, the device displays a representation of the field of view of the camera without blurring. In some embodiments, a representation of the virtual object is displayed on a non-blurred representation of the field of view of the camera. For example, in FIG. 12J, in response to the movement of the device described with reference to FIGS. 12E-1 through 12I-1, the calibration user interface object 12014 is no longer displayed, and the virtual object 11002 is displayed on the non-blurred representation 6036 of the field of view of the camera. Adjusting display parameters of a calibration user interface object according to movement of one or more cameras (e.g., device cameras that capture the physical environment of the device) provides visual feedback to the user (e.g., to indicate that movement of the device is required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user move the device in a manner that provides the information needed to meet calibration standards), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
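For concreteness, the overall flow of operations 17002-17008 can be sketched in code. The following Swift sketch is illustrative only; the type names, properties, and the particular calibration criteria used here (at least one detected plane plus sufficient image data) are assumptions drawn from the examples in the text, not the method as claimed.

```swift
import Foundation

// Hypothetical summary of the analysis of the camera frames.
struct CameraFrameAnalysis {
    var detectedPlaneCount: Int
    var hasSufficientImageData: Bool
}

// Assumed calibration criteria: at least one detected plane and enough
// image data from different viewing angles.
func calibrationCriteriaSatisfied(_ analysis: CameraFrameAnalysis) -> Bool {
    analysis.detectedPlaneCount > 0 && analysis.hasSufficientImageData
}

final class AugmentedRealityViewController {
    var isCameraFeedBlurred = false
    var isCalibrationObjectVisible = false
    var calibrationObjectAngle = 0.0   // radians about a vertical axis

    // 17002/17004: on a request to show the AR view, either show the live feed
    // directly or show a blurred feed with the animated calibration cue.
    func handleDisplayRequest(analysis: CameraFrameAnalysis) {
        if calibrationCriteriaSatisfied(analysis) {
            isCameraFeedBlurred = false
            isCalibrationObjectVisible = false
        } else {
            isCameraFeedBlurred = true
            isCalibrationObjectVisible = true
        }
    }

    // While the cue is visible, drive its display parameter from detected
    // lateral device movement (see the later sketches for the movement filter).
    func handlePoseChange(lateralMovement: Double, analysis: CameraFrameAnalysis) {
        guard isCalibrationObjectVisible else { return }
        calibrationObjectAngle += lateralMovement * 2.0   // arbitrary gain

        // 17006/17008: once the criteria are met, remove the cue and unblur.
        if calibrationCriteriaSatisfied(analysis) {
            isCalibrationObjectVisible = false
            isCameraFeedBlurred = false
        }
    }
}
```

In this sketch the blur on the camera feed and the visibility of the calibration cue are driven by the same criteria check, mirroring how the cue appears only while calibration data is still being gathered.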
In some embodiments, the request to display an augmented reality view of a physical environment (e.g., a physical environment surrounding a device including one or more cameras) in a first user interface region including a representation of a field of view of the one or more cameras includes (17010) a request to display a representation of a virtual three-dimensional object (e.g., a virtual object having a three-dimensional model) in an augmented reality view of the physical environment. In some embodiments, the request is a tap input detected on a button to switch from a staging view of the virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of a virtual object in a two-dimensional user interface. For example, in fig. 12A, the input by the contact 12002 at the position corresponding to the toggle control 6018 is a request to display a virtual object 11002 in a user interface that includes a field of view 6036 of the camera, as shown in fig. 12B. Displaying an augmented reality view of a physical environment in response to a request to display a virtual object in an augmented reality view reduces the number of inputs required (e.g., to display both a view of the physical environment and the virtual object). Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration inputs), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after ceasing to display the calibration user interface object, the device displays (17012) a representation of the virtual three-dimensional object in a first user interface area that includes a representation of the field of view of the one or more cameras (e.g., after the calibration criteria are satisfied). In some embodiments, in response to the request, after calibration is completed and the field of view of the cameras is displayed in full clarity, the virtual object is dropped to a predefined position and/or orientation relative to a predefined plane (e.g., a physical surface, such as a vertical wall or a horizontal floor, that can be used as a support plane for the three-dimensional representation of the virtual object) identified in the field of view of one or more cameras. For example, in fig. 12J, the device has stopped displaying the calibration user interface object 12014 displayed in fig. 12E-12I, and the virtual object 11002 is displayed in the user interface including the field of view 6036 of the camera. Displaying the virtual object in the displayed augmented reality view after ceasing to display the calibration user interface object provides visual feedback (e.g., to indicate that the calibration criteria have been met). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and avoiding attempting to provide input for manipulating virtual objects before the calibration criteria are met), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device displays (17014) a representation of the virtual three-dimensional object in the first user interface area (e.g., behind the calibration user interface object) while displaying the calibration user interface object (e.g., before the calibration criteria are satisfied), wherein the representation of the virtual three-dimensional object remains in a fixed position in the first user interface area (e.g., a position where the virtual three-dimensional object is not placed in the physical environment) during movement of the one or more cameras in the physical environment (e.g., while the calibration user interface object moves in the first user interface area in accordance with the movement of the one or more cameras). For example, in FIGS. 12E-1 through 12I-1, a representation of the virtual object 11002 is displayed while the calibration user interface object 12014 is displayed. When the device 100 including the one or more cameras is moved (e.g., as shown in fig. 12E-1 through 12F-1 and corresponding fig. 12E-2 through 12F-2), the virtual object 11002 remains in a fixed position in the user interface including the field of view 6036 of the one or more cameras. Displaying the virtual object while the calibration user interface object is displayed provides visual feedback (e.g., to indicate the object for which calibration is being performed). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration input corresponding to the plane relative to which the virtual object will be placed), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the request to display an augmented reality view of a physical environment (e.g., a physical environment surrounding a device including one or more cameras) in a first user interface area including a representation of a field of view of the one or more cameras includes (17016) a request to display a representation of a field of view of the one or more cameras (e.g., to simultaneously display one or more user interface objects and/or controls (e.g., outlines of a plane, objects, pointers, icons, markers, etc.)), without requiring display of a representation of any virtual three-dimensional object (e.g., a virtual object having a three-dimensional model) in the physical environment captured in the field of view of the one or more cameras. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of a virtual object in a two-dimensional user interface. In some embodiments, the request is an activation of an augmented reality measurement application (e.g., a measurement application that facilitates measurement of a physical environment). Requesting display of a representation of the field of view of one or more cameras without requesting display of a representation of any virtual three-dimensional object provides feedback (e.g., by using the same calibration user interface object to indicate that calibration is required, regardless of whether the virtual object is displayed or not). Providing the user with improved feedback enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to receiving a request to display an augmented reality view of a physical environment, the device displays (17018) a representation of the field of view of one or more cameras (e.g., displays a blurred version of the physical environment in the field of view of the one or more cameras when the calibration criteria are not satisfied), and in accordance with determining that the calibration criteria for the augmented reality view of the physical environment are satisfied (e.g., because there is a sufficient amount of image data (e.g., from different viewing angles) to generate size and spatial relationship data for the physical environment captured in the field of view of the one or more cameras, because a plane corresponding to a virtual object has been detected in the field of view of the one or more cameras, and/or because there is sufficient information to begin or continue plane detection based on image data available from the cameras), the device forgoes displaying the calibration user interface object (e.g., forgoes displaying a scan prompt object such as an elastic cube or wireframe object). In some embodiments, scanning of the physical environment to detect planes is initiated while the virtual three-dimensional object is displayed in the staging user interface, which enables the device to detect one or more planes in the physical space in some cases (e.g., where the field of view of the camera has moved sufficiently to provide enough data to detect one or more planes in the physical space) before displaying the augmented reality view, such that a calibration user interface need not be displayed. Forgoing display of the calibration user interface object in accordance with a determination that the calibration criteria for the augmented reality view of the physical environment are satisfied provides visual feedback to the user (e.g., absence of the calibration user interface object indicates that the calibration criteria have been satisfied and movement of the device is not required to perform the calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid unnecessary movement of the device for calibration purposes), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device displays (17020) a text object (e.g., a text description describing a currently detected error state and/or a text prompt requesting user action (e.g., to correct a detected error state)) in the first user interface region while displaying the calibration user interface object (e.g., before the calibration criteria are satisfied), the text object providing information about actions that the user can take (e.g., next to the calibration user interface object) to improve calibration of the augmented reality view. In some embodiments, the text object provides a prompt to the user for movement of the device (e.g., with a currently detected error condition), such as "move too much," "detail is poor," "move closer to a point," and so forth. In some embodiments, the device updates the textual object according to the user's actions during the calibration process and the new error state detected based on the user actions. Displaying the text while displaying the calibration user interface object provides visual feedback to the user (e.g., providing verbal instructions as to the type of movement required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., helps the user provide suitable input and reduces user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
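A hypothetical mapping from detected error states to prompt text might look like the following; the enumeration cases and message strings are placeholders chosen for illustration, not wording taken from the method.

```swift
// Minimal sketch of mapping a detected calibration error state to a textual
// prompt. Both the error cases and the messages are hypothetical placeholders.
enum CalibrationErrorState {
    case excessiveMovement
    case insufficientDetail
    case tooFarFromSurface
}

func promptText(for state: CalibrationErrorState) -> String {
    switch state {
    case .excessiveMovement:  return "Slow down - the device is moving too fast"
    case .insufficientDetail: return "Not enough detail - aim at a textured surface"
    case .tooFarFromSurface:  return "Move closer to the surface"
    }
}
```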
In some embodiments, in response to detecting that the calibration criteria are satisfied (e.g., the criteria are satisfied before the calibration user interface object is displayed, or the criteria are satisfied after the calibration user interface object is displayed and animated for a certain period of time), the device displays (17022) (e.g., after ceasing to display the calibration user interface object if the calibration user interface object is initially displayed) a visual indication of a plane detected in the physical environment captured in the field of view of the one or more cameras (e.g., displays a contour around the detected plane, or highlights the detected plane). For example, in fig. 12J, the plane (floor surface 5038) is highlighted to indicate that the plane has been detected in the physical environment 5002 captured in the display field of view 6036 of one or more cameras. Displaying a visual indication of the detected plane provides visual feedback (e.g., indicating that a plane has been detected in the physical environment captured by the device camera). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to receiving a request to display an augmented reality view of a physical environment: in accordance with a determination that the calibration criteria are not satisfied and prior to displaying the calibration user interface object, the device displays (17024) (e.g., via the display generation component, and in a first user interface region that includes a representation of a field of view of the one or more cameras (e.g., a blurred version of the field of view)) an animated cue object (e.g., a scan cue object, such as an elastic cube or wireframe object) that includes a representation of the device that moves relative to a representation of the plane (e.g., movement of the representation of the device relative to the representation of the plane indicates a desired device movement achieved by the user). For example, the animated cue object includes a representation 12004 of the device 100 moving relative to the representation 12010 of the plane, as described with reference to fig. 12B-12D. In some embodiments, when the device detects movement of the device, the device stops displaying the animated cue object (e.g., indicating that the user has begun moving the device in a manner that will cause calibration to continue). In some embodiments, when the device detects movement of the device and before calibration has been completed, the device replaces the display of the animated prompt object with a calibration user interface object to further guide the user with respect to the calibration of the device. For example, as described with reference to fig. 12C-12E, when movement of the device is detected (as shown in fig. 12C-12D), the animated prompt including the representation 12004 of the device 100 ceases to be displayed and the calibration user interface object 12014 is displayed in fig. 12E. Displaying an animated cueing object comprising a representation of the device moving relative to a representation of the plane provides visual feedback to the user (e.g. to show the type of movement of the device required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user move the device in a manner that provides the information needed to meet calibration standards), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17026): moving the calibration user interface object by a first amount in accordance with a first amount of movement of the one or more cameras in the physical environment; and moving the calibration user interface object by a second amount according to a second amount of movement of the one or more cameras in the physical environment, wherein the first amount is different from (e.g., greater than) the second amount and the first amount of movement is different from (e.g., greater than) the second amount of movement (e.g., the first amount of movement and the second amount of movement are measured based on movement in the same direction in the physical environment). Moving the calibration user interface object by an amount corresponding to the magnitude of movement of one or more (device) cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17028): in accordance with a determination that the detected change in pose of the one or more cameras corresponds to a first type of movement (e.g., a lateral movement, such as a lateral movement to the left or right, or back and forth) (and does not correspond to a second type of movement (e.g., a vertical movement, such as an upward movement, a downward movement, or an up-and-down movement)), moving the calibration user interface object based on the first type of movement (e.g., moving the calibration user interface object in a first manner (e.g., rotating the calibration user interface object about a vertical axis that passes through the calibration user interface object)); and in accordance with a determination that the detected change in pose of the one or more cameras corresponds to the second type of movement (and does not correspond to the first type of movement), forgoing moving the calibration user interface object based on the second type of movement (e.g., forgoing moving the calibration user interface object in the first manner or holding the calibration user interface object stationary). For example, lateral movement of the device 100 including one or more cameras (e.g., as described with reference to fig. 12F-1-12G-1 and 12F-2-12G-2) causes the calibration user interface object 12014 to rotate, while vertical movement of the device 100 (e.g., as described with reference to fig. 12G-1-12H-1 and 12G-2-12H-2) does not cause the calibration user interface object 12014 to rotate. Forgoing moving the calibration user interface object in accordance with a determination that the detected change in pose of the device camera corresponds to the second type of movement provides visual feedback (e.g., indicating to the user that the second type of movement of the one or more cameras is not required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid providing unnecessary input), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
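The movement-type filter described above could be sketched as follows; the pose-change representation and the rotation gain are assumptions for illustration.

```swift
// Only a lateral pose change rotates the calibration cue; a vertical change
// leaves it untouched.
enum PoseChangeKind {
    case lateral(meters: Double)   // left/right or forward/back movement
    case vertical(meters: Double)  // up/down movement
}

func updatedCueAngle(current: Double, change: PoseChangeKind) -> Double {
    switch change {
    case .lateral(let meters):
        return current + meters * 2.0   // rotate in proportion to lateral movement
    case .vertical:
        return current                  // forgo rotating the cue
    }
}
```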
In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17030): the calibration user interface object is moved (e.g., rotated and/or tilted) according to the detected change in pose of the one or more cameras in the physical environment without changing a feature display position (e.g., a position of a geometric center, or an axis of the calibration user interface object on the display) of the calibration user interface object on the first user interface region (e.g., the calibration user interface object is anchored to a fixed position on the display while the physical environment moves under the calibration user interface object within a field of view of the one or more cameras). For example, in FIGS. 12E-1 through 12I-1, the calibration user interface object 12014 rotates while remaining in a fixed position relative to the display 112. Moving the calibration user interface object without changing the characteristic display position of the calibration user interface object provides visual feedback (e.g., indicating that the calibration user interface object is different from a virtual object placed at a location relative to the displayed augmented reality environment). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user input errors), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17032): rotating the calibration user interface object about an axis that is perpendicular to a direction of movement of one or more cameras in the physical environment (e.g., the calibration user interface object rotates about a z-axis when a device (e.g., including a camera) moves back and forth on an x-y plane, or the calibration user interface object rotates about a y-axis when the device (e.g., including a camera) moves edge-to-edge along the x-axis (e.g., the x-axis is defined as a horizontal direction relative to the physical environment and lies, for example, within a plane of the touch screen display). For example, in FIGS. 12E-1 through 12G-1, the calibration user interface object 12014 is rotated about a vertical axis that is perpendicular to the lateral movement of the device shown in FIGS. 12E-2 through 12G-2. Rotating the calibration user interface object about an axis perpendicular to the movement of the device camera provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17034): the calibration user interface object is moved at a speed determined from the rate of change detected in the field of view of the one or more cameras (e.g., the speed of movement of the physical environment). Moving the calibration user interface object at a speed determined from the change in the pose of the device camera provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17036): the calibration user interface object is moved in a direction determined from the direction of the detected change in the field of view of the one or more cameras (e.g., the direction of movement of the physical environment) (e.g., the device rotates the calibration user interface object clockwise for right-to-left movement of the device and counterclockwise for left-to-right movement of the device, or the device rotates the calibration user interface object counterclockwise for right-to-left movement of the device and clockwise for left-to-right movement of the device). Moving the calibration user interface object along a direction determined from the change in the pose of the device camera provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
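Operations 17032-17036 can be read together as one update rule: rotate the cue about an axis perpendicular to the device's movement, at a speed and in a direction taken from the observed change in the field of view. A rough Swift sketch, with an assumed vector type and gain:

```swift
struct Vector2 { var x: Double; var y: Double }

struct CueRotation {
    var axis: Vector2            // rotation axis in the display plane
    var angularVelocity: Double  // radians per second
}

func cueRotation(forCameraVelocity v: Vector2, gain: Double = 1.5) -> CueRotation {
    let speed = (v.x * v.x + v.y * v.y).squareRoot()
    guard speed > 0 else {
        return CueRotation(axis: Vector2(x: 0, y: 1), angularVelocity: 0)
    }
    // Axis perpendicular to the movement direction; because the axis flips when
    // the movement direction reverses, the apparent spin direction reverses too.
    let axis = Vector2(x: -v.y / speed, y: v.x / speed)
    // Rotation speed follows the observed rate of change of the field of view.
    return CueRotation(axis: axis, angularVelocity: gain * speed)
}
```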
It should be understood that the particular order in which the operations in fig. 17A-17D are described is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000) also apply in a similar manner to method 17000 described above with respect to fig. 17A-17D. For example, the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described above with reference to method 17000 optionally has one or more of the features of the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000). For the sake of brevity, these details are not repeated here.
Figs. 18A-18I are flow diagrams illustrating a method 18000 of constraining rotation of a virtual object about an axis. Method 18000 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface or a touch-screen display that serves as both a display generation component and a touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface), and one or more attitude sensors (e.g., an accelerometer, a gyroscope, and/or a magnetometer) for detecting changes in attitude (e.g., orientation (e.g., rotation, yaw, and/or tilt angle) and position relative to a surrounding physical environment) of the device including the one or more cameras. Some operations in method 18000 are optionally combined, and/or the order of some operations is optionally changed.
The device displays (18002) a representation (e.g., a staging user interface or an augmented reality user interface) of a first perspective of a virtual three-dimensional object in a first user interface region by a display generation component. For example, the virtual object 11002 is shown in the staging user interface 6010, as shown in FIG. 13B.
While displaying a representation of a first perspective of a virtual three-dimensional object in a first user interface area on a display, the device detects (18004) a first input (e.g., a swipe input (e.g., by one or two finger contacts), or a pivot input (e.g., two finger rotations, or one finger contact pivoting about the other finger contact) on a touch-sensitive surface) corresponding to a request to rotate the virtual three-dimensional object relative to the display (e.g., a display plane corresponding to a display generation component, such as the plane of a touch screen display) to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object. For example, the request is an input as described with reference to fig. 13B to 13C or an input as described with reference to fig. 13E to 13F.
In response to detecting the first input (18006): in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis (e.g., a first axis parallel to a display plane (e.g., an x-y plane) in a horizontal direction, such as an x-axis), the device rotates the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input (e.g., a speed and/or distance of the swipe input along a vertical axis (e.g., a y-axis) of a touch-sensitive surface (e.g., an x-y plane corresponding to an x-y plane of the display)), and that is constrained by a movement limit that prevents rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation (e.g., rotation about the first axis is limited to within +/-30 degrees, and rotation beyond that range is prohibited regardless of the magnitude of the first input). For example, as described with reference to fig. 13E to 13G, the rotation of the virtual object 11002 is restricted. In accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis different from the first axis (e.g., a second axis, such as a y-axis, that is parallel to a plane of the display (e.g., an x-y plane) in a vertical direction), the device rotates the virtual three-dimensional object relative to the second axis by an amount determined based on a magnitude of the first input (e.g., a speed and/or distance of a swipe input along a horizontal axis (e.g., an x-axis) of a touch-sensitive surface (e.g., an x-y plane corresponding to the x-y plane of the display)), wherein for inputs having magnitudes above respective thresholds, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation. In some embodiments, for rotation relative to the second axis, the device imposes a larger rotation limit than the limit imposed on rotation relative to the first axis (e.g., allowing the three-dimensional object to rotate 60 degrees instead of 30 degrees). In some embodiments, for rotation relative to the second axis, the device imposes no constraint on the rotation such that the three-dimensional object may rotate freely about the second axis (e.g., for inputs having a sufficiently high magnitude, such as a fast or long swipe input including movement of one or more contacts, the three-dimensional object may rotate more than 360 degrees relative to the second axis). For example, the amount of rotation of the virtual object 11002 occurring about the y-axis in response to the input described with reference to fig. 13B to 13C is larger than the amount of rotation of the virtual object 11002 occurring about the x-axis in response to the input described with reference to fig. 13E to 13G. Depending on whether the input is a request to rotate the object about the first axis or the second axis, it is determined whether to rotate the object by an amount constrained by a threshold amount or to rotate the object beyond the threshold amount, thereby improving the ability to control different types of rotation operations. Providing additional control options without cluttering the user interface with the additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
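A minimal sketch of this asymmetric constraint, assuming a simple orientation model and an arbitrary swipe-to-rotation gain (the +/-30 degree limit is the example value from the text):

```swift
import Foundation

struct ObjectOrientation {
    var tiltAboutX: Double = 0   // radians, constrained
    var spinAboutY: Double = 0   // radians, unconstrained
}

let maxTilt = 30.0 * Double.pi / 180   // example limit from the text

func applySwipe(_ delta: (dx: Double, dy: Double),
                to orientation: inout ObjectOrientation,
                radiansPerPoint: Double = 0.01) {
    // Vertical swipe component drives tilt about the x axis, clamped to the limit.
    let proposedTilt = orientation.tiltAboutX + delta.dy * radiansPerPoint
    orientation.tiltAboutX = min(max(proposedTilt, -maxTilt), maxTilt)
    // Horizontal swipe component drives spin about the y axis with no clamp, so a
    // long or fast swipe can turn the object past any threshold, even a full turn.
    orientation.spinAboutY += delta.dx * radiansPerPoint
}
```

Because only the tilt component is clamped, a sufficiently long horizontal swipe can spin the object through a full turn, while no vertical swipe can tilt it past the limit.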
In some embodiments, in response to detecting the first input (18008): in accordance with a determination that the first input comprises a first movement of the contact across the touch-sensitive surface in a first direction (e.g., a y-direction, a vertical direction on the touch-sensitive surface), and that the first movement of the contact in the first direction satisfies a first criterion for rotating the representation of the virtual object relative to the first axis, wherein the first criterion includes a requirement that the first input comprise more than a first threshold amount of movement in the first direction in order to satisfy the first criterion (e.g., the device does not initiate rotation of the three-dimensional object about the first axis until the device detects more than the first threshold amount of movement in the first direction), the device determines that the first input corresponds to a request to rotate the three-dimensional object about the first axis (e.g., an x-axis, a horizontal axis parallel to the display, or a horizontal axis through the virtual object); and in accordance with a determination that the first input comprises a second movement of the contact across the touch-sensitive surface in a second direction (e.g., the x-direction, a horizontal direction on the touch-sensitive surface), and that the second movement of the contact in the second direction satisfies second criteria for rotating the representation of the virtual object relative to a second axis, wherein the second criteria include a requirement that the first input comprise more than a second threshold amount of movement in the second direction in order to satisfy the second criteria (e.g., the device does not initiate rotation of the three-dimensional object about the second axis until the device detects more than the second threshold amount of movement in the second direction), the device determines that the first input corresponds to a request to rotate the three-dimensional object about the second axis (e.g., a vertical axis parallel to the display, or a vertical axis through the virtual object), wherein the first threshold is greater than the second threshold (e.g., the user needs to swipe by a greater amount in the vertical direction to trigger rotation about the horizontal axis (e.g., tilting the object forward or backward relative to the user) than the amount of swiping in the horizontal direction needed to trigger rotation about the vertical axis (e.g., spinning the object)). Determining whether to rotate the object by an amount constrained by a threshold amount or by more than a threshold amount, depending on whether the input is a request to rotate the object about a first axis or a second axis, improves the ability to control different types of rotation operations in response to the input corresponding to the request to rotate the object. Providing additional control options without cluttering the user interface with the additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
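The per-axis activation thresholds could be modeled as below; the specific threshold values are assumptions, and only their ordering (vertical threshold larger than horizontal) reflects the text.

```swift
enum RotationAxis { case tiltAboutX, spinAboutY }

struct RotationActivation {
    // Values are illustrative; the text only requires the vertical (tilt)
    // threshold to be larger than the horizontal (spin) threshold.
    var tiltThreshold = 40.0   // points of vertical movement required
    var spinThreshold = 10.0   // points of horizontal movement required

    func activatedAxis(totalDx: Double, totalDy: Double) -> RotationAxis? {
        if abs(totalDy) > tiltThreshold { return .tiltAboutX }
        if abs(totalDx) > spinThreshold { return .spinAboutY }
        return nil   // neither threshold crossed yet; no rotation starts
    }
}
```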
In some embodiments (18010), rotation of the virtual three-dimensional object relative to the first axis occurs with a first degree of correspondence between a characteristic value of a first input parameter (e.g., swipe distance or swipe speed) of the first input and the amount of rotation applied to the virtual three-dimensional object about the first axis, rotation of the virtual three-dimensional object relative to the second axis occurs with a second degree of correspondence between the characteristic value of the first input parameter (e.g., swipe distance or swipe speed) of the first input and the amount of rotation applied to the virtual three-dimensional object about the second axis, and the first degree of correspondence involves less rotation of the virtual three-dimensional object relative to the first input parameter than the second degree of correspondence (e.g., rotation about the first axis has more friction or drag than rotation about the second axis). For example, a first rotation amount of the virtual object 11002 occurs in response to a swipe input with a swipe distance d1 for rotation about the y-axis (as described with reference to fig. 13B to 13C), and a second rotation amount of the virtual object 11002, which is smaller than the first rotation amount, occurs in response to a swipe input with a swipe distance d1 for rotation about the x-axis (as described with reference to fig. 13E-13G). Rotating the virtual object with a greater or lesser degree of rotation in response to the input, depending on whether the input is a request to rotate the object about the first axis or the second axis, improves the ability to control different types of rotation operations in response to the input corresponding to the request to rotate the object. Providing additional control options without cluttering the user interface with the additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
In some embodiments, the device detects (18012) an end of the first input (e.g., the input includes movement of one or more contacts on the touch-sensitive surface, and detecting the end of the first input includes detecting liftoff of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input, the device continues (18014) rotating the three-dimensional object based on a magnitude of the first input before the end of the input was detected (e.g., based on a speed of movement of the contact prior to the contact lift-off), including: in accordance with a determination that the three-dimensional object is rotated about the first axis, slowing rotation of the object about the first axis by a first amount that is proportional to a magnitude of the rotation of the three-dimensional object about the first axis (e.g., slowing rotation of the three-dimensional object about the first axis based on a first simulated physical parameter, such as simulated friction having a first coefficient of friction); and in accordance with a determination that the three-dimensional object is rotated relative to the second axis, slowing rotation of the object relative to the second axis by a second amount that is proportional to a magnitude of the rotation of the three-dimensional object relative to the second axis (e.g., slowing rotation of the three-dimensional object about the second axis based on a second simulated physical parameter, such as simulated friction having a second coefficient of friction that is less than the first coefficient of friction), wherein the second amount is different from the first amount. For example, in fig. 13C-13D, the virtual object 11002 continues to rotate after liftoff of the contact 13002 that caused the rotation of the virtual object 11002 described with reference to fig. 13B-13C. In some embodiments, the second amount is greater than the first amount. In some embodiments, the second amount is less than the first amount. After detecting the end of the input, slowing rotation of the virtual object by a first amount or a second amount, depending on whether the input is a request to rotate the object about the first axis or the second axis, provides visual feedback indicating that the rotation operation is applied differently for rotation about the first axis and rotation about the second axis. Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and avoiding attempting to provide input for manipulating the virtual object before placing the object in the second orientation corresponding to the plane), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
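The post-liftoff behavior amounts to an inertial spin with per-axis friction. A sketch follows, with assumed coefficient values (the text only requires the two amounts of slowing to differ):

```swift
// The object keeps rotating with the velocity it had at lift-off and is slowed
// by a per-axis friction factor.
struct InertialSpin {
    var angularVelocity: Double   // radians per second at lift-off
    let friction: Double          // fraction of velocity lost per second

    mutating func step(dt: Double) -> Double {
        let rotation = angularVelocity * dt
        angularVelocity *= max(0, 1 - friction * dt)   // decay toward zero
        return rotation                                // rotation to apply this step
    }
}

// Assumed coefficients: tilt about the x axis slows quickly, spin about the
// y axis slows gently, so the two axes decelerate by different amounts.
var tiltSpin = InertialSpin(angularVelocity: 1.0, friction: 4.0)
var ySpin = InertialSpin(angularVelocity: 1.0, friction: 1.0)
```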
In some embodiments, the device detects (18016) an end of the first input (e.g., the input includes movement of one or more contacts on the touch-sensitive surface, and detecting the end of the first input includes detecting liftoff of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input (18018): in accordance with a determination that the three-dimensional object is rotated relative to the first axis beyond the respective rotation threshold, the device reverses at least a portion of the rotation of the three-dimensional object relative to the first axis; and, in accordance with a determination that the rotation of the three-dimensional object relative to the first axis does not exceed the respective rotation threshold, the device forgoes reversing the rotation of the three-dimensional object relative to the first axis. (e.g., stop rotation of the three-dimensional object relative to the first axis and/or continue rotation of the three-dimensional object relative to the first axis in the direction of motion of the input, the magnitude of the rotation being determined by the magnitude of the input before the end of the input is detected). For example, after the virtual object 11002 rotates beyond the rotation threshold, as described with reference to fig. 13E to 13G, the rotation of the virtual object 11002 is reversed, as shown in fig. 13G to 13H. In some embodiments, an amount of reversal of rotation of the three-dimensional object is determined based on a distance the three-dimensional object is rotated beyond a respective rotation threshold (e.g., if the amount of rotation the three-dimensional object is rotated beyond the respective rotation threshold is greater, then reversing the rotation of the three-dimensional object relative to the first axis by a greater amount, compared to if the amount of rotation the three-dimensional object is rotated beyond the respective rotation threshold is less, then reversing the rotation relative to the first axis by a lesser amount). In some embodiments, the reversal of rotation is driven by a simulated physical parameter, such as an elastic effect that pulls with greater force as the three-dimensional object is rotated further beyond a respective rotation threshold relative to the first axis. In some embodiments, the reversal of rotation is in a direction of rotation determined based on the direction of rotation being rotated relative to the first axis beyond the respective rotation threshold (e.g., if the three-dimensional object is rotated such that the top of the object moves back into the display, the reversal of rotation rotates the top of the object forward out of the display, if the three-dimensional object is rotated such that the top of the object rotates forward out of the display, the reversal of rotation rotates the top of the object back into the display, if the three-dimensional object is rotated such that the right of the object moves back into the display, the reversal of rotation rotates the right of the object forward out of the display, and/or if the three-dimensional object is rotated such that the left of the object rotates forward out of the display, the reversal of rotation rotates the left of the object back into the display). 
In some embodiments, for example, a similar rubber-band behavior (e.g., conditional reversal of rotation) is performed for rotation about the second axis when rotation relative to the second axis is constrained to a respective range of angles. In some embodiments, for example, where rotation relative to the second axis is not constrained such that the device allows the three-dimensional object to rotate 360 degrees, no rubber-band behavior is performed for rotation about the second axis (e.g., because the device does not impose a rotation threshold on rotation relative to the second axis). Reversing at least a portion of the rotation of the three-dimensional object relative to the first axis after detecting the end of the input, or forgoing reversing a portion of the rotation of the three-dimensional object relative to the first axis, depending on whether the object is rotated beyond a rotation threshold, provides visual feedback indicative of the rotation threshold applicable to the rotation of the virtual object. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide input for rotating the virtual object beyond a rotation threshold), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
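The rubber-band reversal about the first axis reduces, after the input ends, to settling the tilt back to the nearest limit if it was pushed past it. A sketch, reusing the 30 degree example limit; the animation itself is omitted and only the settled angle is computed.

```swift
import Foundation

func settledTilt(afterInputEnds tilt: Double,
                 threshold: Double = 30.0 * Double.pi / 180) -> Double {
    if tilt > threshold { return threshold }     // spring back down to the limit
    if tilt < -threshold { return -threshold }   // spring back up to the limit
    return tilt                                  // within range: nothing is undone
}
```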
In some embodiments (18020), in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a third axis (e.g., a third axis, such as a z-axis, that is perpendicular to a plane of the display (e.g., an x-y plane)) that is different from the first axis and the second axis, the device forgoes rotating the virtual three-dimensional object relative to the third axis (e.g., rotation about the z-axis is inhibited and the request to rotate the object about the z-axis is ignored by the device). In some embodiments, the device provides an alert (e.g., a tactile output to indicate input failure). Forgoing rotation of the virtual object, in accordance with a determination that the rotation input corresponds to a request to rotate the virtual object about the third axis, provides visual feedback indicating that rotation about the third axis is restricted. Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide input for rotating the virtual object about the third axis), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device displays (18022) a representation of a shadow cast by the virtual three-dimensional object while displaying a representation of a first perspective of the virtual three-dimensional object in a first user interface area (e.g., a staging user interface). The device changes the shape of the representation of the shadow in accordance with a rotation of the virtual three-dimensional object relative to the first axis and/or the second axis. For example, as the virtual object 11002 is rotated, the shape of the shadow 13006 of the virtual object 11002 changes from fig. 13B to fig. 13F. In some embodiments, the shadow is shifted and changes shape to indicate a current orientation of the virtual object relative to an invisible ground plane in the staging user interface that supports a predefined bottom side of the virtual object. In some embodiments, the surface of the virtual three-dimensional object appears to reflect light from a simulated light source that is located in a predefined direction in a virtual space represented in the staging user interface. Changing the shape of the shadow according to the rotation of the virtual object provides visual feedback (e.g., indicating the orientation of the virtual object relative to a virtual plane (e.g., the stage in the staging view)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine the appropriate direction for a swipe input that causes rotation about the first axis or the second axis), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while rotating the virtual three-dimensional object in the first user interface region (18024): in accordance with a determination that the virtual three-dimensional object is displayed at a second perspective that reveals a predefined bottom of the virtual three-dimensional object, the device forgoes displaying the representation of the shadow with the representation of the second perspective of the virtual three-dimensional object. For example, when the virtual object is viewed from below, the device does not display a shadow of the virtual object (e.g., as described with reference to fig. 13G to 13I). Forgoing displaying the shadow of the virtual object, in accordance with a determination that the bottom of the virtual object is displayed, provides visual feedback (e.g., indicating that the object has rotated to a position that no longer corresponds to the virtual plane (e.g., the stage of the staging view)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
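One way to express the shadow rule is a simple visibility test on the tilt angle; the quarter-turn cutoff here is an assumption for illustration, since the text only states that the shadow is suppressed once the predefined bottom of the object is revealed.

```swift
import Foundation

// The shadow is shown while the object sits at or above the virtual ground
// plane, and is suppressed once the tilt reveals the object's underside.
func shouldShowShadow(tiltAboutX: Double) -> Bool {
    abs(tiltAboutX) < Double.pi / 2   // assumed cutoff: a quarter turn
}
```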
In some embodiments, after rotating the virtual three-dimensional object in the first user interface region (e.g., a landing view), the device detects (18026) a second input corresponding to a request to reset the virtual three-dimensional object in the first user interface region (e.g., the second input is a double click on the first user interface region). In response to detecting the second input, the device displays (18028) a representation (e.g., a first perspective, or a default starting perspective different from the first perspective (e.g., when the first perspective is a displayed perspective after user manipulation in the staging user interface)) of a predefined original perspective of the virtual three-dimensional object in the first user interface area (e.g., by rotating and resizing the virtual object) (e.g., in response to a double-click, the device resets the orientation of the virtual object to a predefined original orientation (e.g., upright with the front side facing the user and the bottom side resting on a predefined ground plane)). For example, fig. 13I to 13J show an input of changing the perspective of the virtual object 11002 from a changed perspective (as a result of the rotational input described with reference to fig. 13B to 13G) to the original perspective in fig. 13J (which is the same as the perspective of the virtual object 11002 shown in fig. 13A). In some embodiments, in response to detecting the second input corresponding to an instruction to reset the virtual three-dimensional object, the device further resizes the virtual three-dimensional object to reflect a default display size of the virtual three-dimensional object. In some embodiments, a double-click input resets the orientation and size of the virtual object in the staging user interface, while a double-click input only resets the size, and not the orientation of the virtual object in the augmented reality user interface. In some embodiments, the device requires that a double tap be directed at the virtual object in order to reset the size of the virtual object in the augmented reality user interface, while the device resets the orientation and size of the virtual object in response to the double tap detected on the virtual object and the double tap detected around the virtual object. In the augmented reality view, a single finger swipe drags the virtual object instead of rotating the virtual object (e.g., as opposed to in a staging view). Displaying the predefined original perspective of the virtual object in response to detecting the request to reset the virtual object enhances the operability of the device and makes the user-device interface more efficient (e.g., by providing an option to reset the object rather than requiring the user to estimate when the provided input to adjust the properties of the object returns the object to the predefined original perspective). Reducing the number of inputs required to perform an operation improves the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling a user to use the device more quickly and efficiently.
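A sketch of the double-tap reset in the staging view, with placeholder state fields and default values; the stored defaults stand in for the predefined original perspective and default display size.

```swift
// Hypothetical staged-object state; field names and defaults are placeholders.
struct StagedObjectState {
    var tiltAboutX = 0.0   // radians
    var spinAboutY = 0.0   // radians
    var scale = 1.0        // 1.0 = predefined default display size
}

// Double tap in the staging view: restore the predefined original orientation
// (upright, front facing the user) and the default display size in one step.
func resetToOriginalPerspective(_ state: inout StagedObjectState) {
    state = StagedObjectState()
}
```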
In some embodiments, while displaying the virtual three-dimensional object in the first user interface region (e.g., a staging user interface), the device detects (18030) a third input corresponding to a request to resize the virtual three-dimensional object (e.g., the third input is a pinch or expand gesture directed at the virtual object represented in the first user interface region, the third input having a magnitude that satisfies a criterion (e.g., an original or increased criterion for initiating a resizing operation, described in more detail below with reference to method 19000)). In response to detecting the third input, the device resizes the representation of the virtual three-dimensional object in accordance with a magnitude of the third input. While resizing the representation of the virtual three-dimensional object, the device displays an indicator to indicate a current zoom level of the virtual object. In some embodiments, the device stops displaying the indicator of the zoom level when the third input terminates. Adjusting the size of the virtual object in accordance with the magnitude of the input for resizing the object enhances the operability of the device (e.g., by providing the option to resize the object by a desired amount). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
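The resize-with-indicator behavior described above can be sketched as follows; the clamping range and names are assumptions, and the `indicatorVisible` flag stands in for the zoom-level indicator (e.g., the slider mentioned in a following paragraph).

```swift
import Foundation

/// Illustrative sketch: while a pinch/expand gesture resizes the staged object, a
/// zoom-level indicator is shown; it is hidden again when the gesture ends.
final class ResizeController {
    private(set) var zoomLevel: Double = 1.0
    private(set) var indicatorVisible = false

    func pinchChanged(magnification: Double) {
        // Scale the object in proportion to the gesture's magnitude and clamp to a
        // plausible range so the object cannot vanish or grow without bound.
        zoomLevel = min(max(zoomLevel * magnification, 0.1), 10.0)
        indicatorVisible = true          // show the current zoom level while resizing
    }

    func pinchEnded() {
        indicatorVisible = false         // stop displaying the indicator when input ends
    }
}
```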
In some embodiments, while resizing the representation of the virtual three-dimensional object in the first user interface area (e.g., a staging user interface), the device detects (18034) that the size of the virtual three-dimensional object has reached a predefined default display size of the virtual three-dimensional object. In response to detecting that the size of the virtual three-dimensional object has reached the predefined default display size of the virtual three-dimensional object, the device generates (18036) a haptic output (e.g., a discrete haptic output) to indicate that the virtual three-dimensional object is displayed at the predefined default display size. Fig. 11O illustrates an example of a haptic output 11024 that is provided in response to detecting that the size of the virtual object 11002 has reached a predefined size of the virtual object 11002 (e.g., as described with reference to fig. 11M-11O). In some embodiments, the device generates the same haptic output when the size of the virtual object is reset to the default display size in response to a double-tap input. Generating a tactile output in accordance with a determination that the size of the virtual object has reached the predefined default display size provides feedback to the user (e.g., indicating that no further input is required to return the simulated size of the virtual object to the predefined size). Providing improved haptic feedback enhances the operability of the device (e.g., by providing sensory information that allows a user to perceive that a predefined simulated physical size of a virtual object has been reached without cluttering the user interface with displayed information), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
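A sketch of the haptic trigger described above, assuming the default display size corresponds to a scale factor of 1.0; `playHaptic` is a placeholder for whatever haptic mechanism the device uses, not a real API.

```swift
import Foundation

/// Illustrative sketch: emit a discrete haptic when a resize reaches or crosses the
/// predefined default display size, rather than continuously during the gesture.
struct DefaultSizeHaptic {
    let defaultScale: Double = 1.0

    func didResize(from oldScale: Double, to newScale: Double, playHaptic: () -> Void) {
        // Fire once when the scale reaches or crosses the default display size
        // while moving toward it.
        let reachedDefault = (oldScale < defaultScale && newScale >= defaultScale)
                          || (oldScale > defaultScale && newScale <= defaultScale)
        if reachedDefault { playHaptic() }
    }
}
```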
In some embodiments, a visual indication of a zoom level (e.g., a slider indicating a value corresponding to a current zoom level) is displayed in the first user interface area (e.g., a staging user interface). When resizing the representation of the virtual three-dimensional object, the visual indication of the zoom level is adjusted in accordance with the resized representation of the virtual three-dimensional object.
In some embodiments, while displaying the representation of the third perspective of the virtual three-dimensional object in the first user interface region (e.g., a staging user interface), the device detects (18042) a fourth input corresponding to a request to display the virtual three-dimensional object in a second user interface region (e.g., an augmented reality user interface) that includes a field of view of one or more cameras (e.g., a camera embedded in the device). In response to detecting the fourth input, the device displays (18044), via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the second user interface region (e.g., the device displays the field of view of the one or more cameras in response to the request to display the virtual object in the second user interface region), wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located. Displaying the representation of the virtual object includes rotating the virtual three-dimensional object about a first axis (e.g., an axis parallel to the plane of the display (e.g., an x-y plane) in a horizontal direction, such as the x-axis) to a predefined angle (e.g., to a default angle, such as 0 degrees, or to an angle aligned with (e.g., parallel to) a plane detected in the physical environment captured in the field of view of the one or more cameras). In some embodiments, the device displays an animation of the three-dimensional object gradually rotating to the predefined angle relative to the first axis. A current angle of the virtual three-dimensional object is maintained relative to a second axis (e.g., an axis parallel to the plane of the display (e.g., an x-y plane) in a vertical direction, such as the y-axis). Rotating the virtual object to a predefined angle about the first axis in response to a request to display the virtual object in the field of view of the one or more cameras (e.g., without further input to reposition the virtual object to a predefined orientation relative to the plane) enhances the operability of the device. Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
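The orientation adjustment described above, when the object moves from the staging view into the camera view, can be sketched as follows; the angle names and the 0 degree target are assumptions.

```swift
import Foundation

/// Illustrative sketch: when placing the staged object into the camera (AR) view,
/// the rotation about the first (horizontal, x) axis is set to a predefined angle so
/// the object sits flat relative to the detected plane, while the rotation about the
/// second (vertical, y) axis is preserved from the staging view.
struct Orientation {
    var pitchDegrees: Double   // rotation about the horizontal (x) axis
    var yawDegrees: Double     // rotation about the vertical (y) axis
}

func orientationForARPlacement(from staged: Orientation,
                               predefinedPitch: Double = 0) -> Orientation {
    // Snap the first-axis rotation to the predefined angle; keep the user's current
    // second-axis rotation so the object still faces the same way.
    Orientation(pitchDegrees: predefinedPitch, yawDegrees: staged.yawDegrees)
}
```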
In some embodiments, while displaying the representation of the fourth perspective of the virtual three-dimensional object in the first user interface region (e.g., a staging user interface), the device detects (18046) a fifth input corresponding to a request to return to a two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object. In response to detecting the fifth input, the device (18048): rotating the virtual three-dimensional object (e.g., prior to displaying the virtual three-dimensional object and the two-dimensional representation of the two-dimensional user interface) to display a perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object; and displaying the two-dimensional representation of the virtual three-dimensional object after rotating the virtual three-dimensional object to display the respective perspective corresponding to the two-dimensional representation of the virtual three-dimensional object. In some embodiments, the device displays an animation of the three-dimensional object that gradually rotates to display a perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object. In some embodiments, the device also resizes the virtual three-dimensional object during or after the rotation to match the size of the two-dimensional representation of the virtual three-dimensional object displayed in the two-dimensional user interface. In some embodiments, an animated transition is displayed to show that the rotated virtual three-dimensional object moves in the two-dimensional user interface towards, and stabilizes in, the position of the two-dimensional representation (e.g., a thumbnail of the virtual object). In response to an input to return to displaying the two-dimensional representation of the virtual three-dimensional object, rotating the virtual three-dimensional object to a perspective corresponding to the two-dimensional representation of the virtual three-dimensional object provides visual feedback (e.g., to indicate that the displayed object is two-dimensional). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and avoiding attempting to provide input for rotating a two-dimensional object along an axis for which rotation of the two-dimensional object is not available), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, prior to displaying the representation of the first perspective of the virtual three-dimensional object, the device displays (18050) a user interface that includes a representation of the virtual three-dimensional object (e.g., a thumbnail or icon) that includes a representation of a view of the virtual three-dimensional object from the respective perspective (e.g., a static representation such as a two-dimensional image corresponding to the virtual three-dimensional object). While displaying the representation of the virtual three-dimensional object, the device detects (18052) a request to display the virtual three-dimensional object (e.g., a tap input or other selection input directed to the representation of the virtual three-dimensional object). In response to detecting the request to display the virtual three-dimensional object, the device replaces (18054) the display of the representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the respective perspective of the representation of the virtual three-dimensional object. Figs. 11A-11E provide an example of a user interface 5060 that displays a representation of a virtual object 11002. In response to the request to display the virtual object 11002, as described with reference to fig. 11A, the display of the virtual object 11002 in the staging user interface 6010 replaces the display of the user interface 5060, as shown in fig. 11E. The perspective of the virtual object 11002 in fig. 11E is the same as the perspective of the representation of the virtual object 11002 in fig. 11A. In some embodiments, the representation of the virtual three-dimensional object is enlarged (e.g., to a size that matches the size of the virtual three-dimensional object) prior to being replaced by the virtual three-dimensional object. In some embodiments, the virtual three-dimensional object is initially displayed at the size of the representation of the virtual three-dimensional object, and is subsequently enlarged. In some embodiments, during the transition from the representation of the virtual three-dimensional object to the virtual three-dimensional object, the device gradually enlarges the representation of the virtual three-dimensional object, cross-fades the representation of the virtual three-dimensional object with the virtual three-dimensional object, and then gradually enlarges the virtual three-dimensional object, to form a smooth transition between the representation of the virtual three-dimensional object and the virtual three-dimensional object. In some embodiments, the initial position of the virtual three-dimensional object is selected to correspond to the position of the representation of the virtual three-dimensional object. In some embodiments, the representation of the virtual three-dimensional object is shifted to a location selected to correspond to a location at which the virtual three-dimensional object is to be displayed. Replacing the display of the (two-dimensional) representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the perspective of the (two-dimensional) representation provides visual feedback (e.g., indicating that the three-dimensional object is the same object as the two-dimensional representation of the virtual three-dimensional object).
Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
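The thumbnail-to-object transition described above can be sketched as an ordered sequence of steps; the step names, ordering granularity, and numeric values are assumptions, since the description only requires that the object be posed to match the two-dimensional representation and then smoothly enlarged and cross-faded.

```swift
import Foundation

/// Illustrative sketch of the thumbnail-to-3D transition: pose the 3D object to match
/// the 2D representation's perspective, enlarge, cross-fade, then settle at final size.
enum TransitionStep {
    case poseToMatchThumbnail(yawDegrees: Double, pitchDegrees: Double)
    case enlargeThumbnail(toScale: Double)
    case crossFade(durationSeconds: Double)
    case enlargeObjectToFinalSize(toScale: Double)
}

func thumbnailToObjectTransition(thumbnailYaw: Double,
                                 thumbnailPitch: Double) -> [TransitionStep] {
    [
        .poseToMatchThumbnail(yawDegrees: thumbnailYaw, pitchDegrees: thumbnailPitch),
        .enlargeThumbnail(toScale: 1.5),          // grow the 2D representation
        .crossFade(durationSeconds: 0.25),        // fade the thumbnail out, the object in
        .enlargeObjectToFinalSize(toScale: 1.0)   // settle at the staged display size
    ]
}
```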
In some embodiments, prior to displaying the first user interface, the device displays (18056) a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object. While displaying the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object, the device detects (18058), at a location on the touch-sensitive surface that corresponds to the two-dimensional representation of the virtual three-dimensional object, a first portion of a touch input (e.g., an increase in contact intensity) that satisfies a preview criterion (e.g., the preview criterion requires that an intensity of the press input exceed a first intensity threshold (e.g., a light press intensity threshold) and/or that a duration of the press input exceed a first duration threshold). In response to detecting the first portion of the touch input that satisfies the preview criterion, the device displays (18060) a preview of the virtual three-dimensional object, the preview being larger than the two-dimensional representation of the virtual three-dimensional object (e.g., the preview is animated to display different perspectives of the virtual three-dimensional object). In some embodiments, the device displays an animation of the three-dimensional object gradually zooming in (e.g., based on the duration or pressure of the input, or based on a predetermined rate of the animation). Displaying a preview of the virtual three-dimensional object (e.g., without replacing the display of the currently displayed user interface with a different user interface) enhances the operability of the device (e.g., by enabling a user to view the virtual three-dimensional object and return to viewing the two-dimensional representation of the virtual three-dimensional object without having to provide input for navigating between user interfaces). Reducing the number of inputs required to perform an operation improves the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling a user to use the device more quickly and efficiently.
In some embodiments, while displaying the preview of the virtual three-dimensional object, the device detects (18062) the second portion of the touch input (e.g., by the same continuously maintained contact). In response to detecting the second portion of the touch input (18064): in accordance with a determination that the second portion of the touch input satisfies the menu display criteria (e.g., the menu display criteria require the contact to move in a predefined direction (e.g., upward) by more than a threshold amount), the device displays a plurality of selectable options (e.g., a share menu) corresponding to a plurality of operations associated with the virtual object (e.g., a share option, such as various means of sharing the virtual object with another device or user); and in accordance with a determination that the second portion of the touch input satisfies the staging criterion (e.g., the staging criterion requires the intensity of the contact to exceed a second threshold intensity that is greater than the first threshold intensity (e.g., a deep press intensity threshold)), the device replaces the display of the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object with the first user interface that includes the virtual three-dimensional object. Displaying a menu associated with the virtual object or replacing a display of a two-dimensional user interface comprising a two-dimensional representation of the virtual three-dimensional object with a first user interface comprising the virtual three-dimensional object, depending on whether the staging criteria are satisfied, enables a number of different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed with the first type of input increases the efficiency with which a user can perform the operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
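The two-stage touch handling described in the preceding two paragraphs (a preview on a light press, then either a share menu or the staging user interface depending on the second portion of the input) can be sketched as a small state machine; the intensity and distance thresholds are invented for illustration.

```swift
import Foundation

/// Illustrative sketch: a press exceeding a light-press intensity shows a preview; a
/// later upward swipe shows the share menu, while a deeper press replaces the
/// two-dimensional user interface with the staging user interface.
enum TouchOutcome { case none, showPreview, showShareMenu, enterStagingView }

struct PressInterpreter {
    let lightPressIntensity: Double = 0.3     // hypothetical first intensity threshold
    let deepPressIntensity: Double = 0.6      // hypothetical second (deep press) threshold
    let menuSwipeDistance: Double = 40        // hypothetical upward movement, in points

    var previewShown = false

    mutating func update(intensity: Double, upwardMovement: Double) -> TouchOutcome {
        if !previewShown {
            // First portion of the input: check the preview criterion.
            if intensity >= lightPressIntensity {
                previewShown = true
                return .showPreview
            }
            return .none
        }
        // Second portion of the input, evaluated while the preview is displayed.
        if intensity >= deepPressIntensity { return .enterStagingView }
        if upwardMovement >= menuSwipeDistance { return .showShareMenu }
        return .none
    }
}
```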
In some embodiments, the first user interface includes (18066) a plurality of controls (e.g., buttons for switching to a world view, for returning, etc.). Prior to displaying the first user interface, the device displays (18068) a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object. In response to detecting a request to display the virtual three-dimensional object in the first user interface, the device (18070) displays the virtual three-dimensional object in the first user interface without displaying a set of one or more controls associated with the virtual three-dimensional object; and after displaying the virtual three-dimensional object in the first user interface, the device displays the set of one or more controls. For example, as described with reference to figs. 11A-11E, the user interface 5060, which includes a two-dimensional representation of the virtual object 11002, is displayed before the staging user interface 6010. In response to a request to display the virtual object 11002 in the staging user interface 6010 (as described with reference to fig. 11A), the virtual object 11002 is displayed (as shown in fig. 11B through 11C) without the controls 6016, 6018, and 6020 of the staging user interface 6010. In fig. 11D-11E, the controls 6016, 6018, and 6020 of the staging user interface 6010 fade into view in the user interface. In some embodiments, the set of one or more controls includes controls for displaying the virtual three-dimensional object in the augmented reality environment, wherein the virtual three-dimensional object is placed at a fixed position relative to a plane detected in the field of view of one or more cameras of the device. In some embodiments, in response to detecting the request to display the virtual three-dimensional object in the first user interface: in accordance with a determination that the virtual three-dimensional object is not ready to be displayed in the first user interface (e.g., the three-dimensional model of the virtual object is not fully loaded when the first user interface is ready to be displayed) (e.g., a loading time of the virtual object exceeds a threshold amount of time (e.g., is apparent and perceptible to a user)), the device displays a portion of the first user interface (e.g., a background window of the first user interface) without displaying the plurality of controls on the first user interface; and in accordance with a determination that the virtual three-dimensional object is ready to be displayed in the first user interface (e.g., after displaying the portion of the first user interface without the controls), the device displays (e.g., fades in) the virtual three-dimensional object in the first user interface; and after displaying the virtual three-dimensional object in the first user interface, the device displays (e.g., fades in) the controls. In response to detecting the request to display the virtual three-dimensional object in the first user interface and in accordance with a determination that the virtual three-dimensional object is ready to be displayed (e.g., the three-dimensional model of the virtual object has already been loaded when the first user interface is ready to be displayed (e.g., the loading time of the virtual object is less than a threshold amount of time (e.g., negligible and imperceptible to a user))),
the device displays the first user interface with the plurality of controls on the first user interface, and the device displays (e.g., without fading in) the virtual three-dimensional object in the first user interface together with the plurality of controls. Displaying the virtual three-dimensional object before displaying the set of one or more controls provides visual feedback (e.g., indicating that controls for manipulating the virtual object are unavailable during the amount of time required to load the virtual object). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid providing input to manipulate the object when manipulation operations are not available during the loading time of the virtual object), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
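The loading-dependent presentation described above can be sketched as follows; the step descriptions and names are assumptions that merely restate the order of appearance (interface, then object, then controls) when loading is slow.

```swift
import Foundation

/// Illustrative sketch: if the 3D model is not yet ready, the staging user interface
/// appears first without its controls, the object fades in when loading finishes, and
/// the controls fade in last. If the model loads quickly, everything appears at once.
enum StagingPresentation {
    case showEverythingImmediately
    case deferred(steps: [String])
}

func presentStagingInterface(modelIsLoaded: Bool) -> StagingPresentation {
    if modelIsLoaded {
        // Loading time is negligible: show the interface, object, and controls together.
        return .showEverythingImmediately
    }
    // Loading time is perceptible: stage the reveal so controls never appear before
    // the object they manipulate is available.
    return .deferred(steps: [
        "display staging interface background without controls",
        "fade in virtual object once its model finishes loading",
        "fade in controls after the object is displayed"
    ])
}
```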
It should be understood that the particular order in which the operations in figs. 18A-18I are described is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 19000, and 20000) also apply in a similar manner to method 18000 described above with respect to figs. 18A-18I. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 18000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to those other methods (e.g., methods 800, 900, 1000, 16000, 17000, 19000, and 20000). For the sake of brevity, these details are not repeated here.
Figs. 19A-19H are flow diagrams illustrating a method 19000 of increasing a second threshold movement value required for a second object manipulation behavior in accordance with a determination that an input has satisfied a first threshold movement value required for a first object manipulation behavior. Method 19000 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.) and a touch-sensitive surface (e.g., a touch-sensitive surface, or a touch-screen display that acts as both a display generation component and a touch-sensitive surface). Some operations in method 19000 are optionally combined, and/or the order of some operations is optionally changed.
The device displays (19002), via the display generation component, a first user interface region that includes a user interface object (e.g., a user interface region that includes a representation of a virtual object) associated with a plurality of object manipulation behaviors including a first object manipulation behavior (e.g., rotation of the user interface object about a respective axis) that is performed in response to an input that satisfies a first gesture recognition criterion (e.g., a rotation criterion) and a second object manipulation behavior (e.g., one of translation and zoom of the user interface object) that is performed in response to an input that satisfies a second gesture recognition criterion (e.g., one of a translation criterion and a zoom criterion). For example, the displayed virtual object 11002 is associated with manipulation behaviors including rotation about respective axes (e.g., as described with reference to fig. 14B-14E), translation (e.g., as described with reference to fig. 14K-14M), and scaling (e.g., as described with reference to fig. 14G-14I).
While displaying the first user interface region, the device detects (19004) a first portion of the input directed to the user interface object (e.g., the device detects one or more contacts on the touch-sensitive surface at locations corresponding to display locations of the user interface object), including detecting movement of the one or more contacts on the touch-sensitive surface, and when the one or more contacts are detected on the touch-sensitive surface, the device evaluates movement of the one or more contacts in conjunction with both the first gesture recognition criteria and the second gesture recognition criteria.
In response to detecting the first portion of the input, the device updates an appearance of the user interface object based on the first portion of the input, including (19006): in accordance with a determination that the first portion of the input satisfies the first gesture recognition criteria before the second gesture recognition criteria are satisfied: changing an appearance of the user interface object (e.g., rotating the user interface object) according to the first object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and updating the second gesture recognition criteria (e.g., without changing the appearance of the user interface object according to the second object manipulation behavior) by increasing a threshold of the second gesture recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture recognition criteria). For example, in fig. 14E, virtual object 11002 has been rotated in accordance with a determination that the rotation criteria have been met (prior to the zoom criteria being met), and the threshold ST for the zoom criteria is increased to ST'. In some embodiments, it is relatively easy to initiate a pan or zoom operation on an object by satisfying the criteria for identifying a gesture for panning or zooming (assuming the criteria for panning or zooming were not previously satisfied) before the criteria for identifying a gesture for rotating the object are satisfied. Once the criteria for identifying a gesture for rotating an object are satisfied, it becomes more difficult to initiate a pan or zoom operation on the object (e.g., the criteria for pan and zoom are updated to have an increased movement parameter threshold), and object manipulation is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object. In accordance with a determination that the second gesture recognition criteria are satisfied prior to the first gesture recognition criteria being satisfied: the device changes an appearance of the user interface object (e.g., translates or resizes the user interface object) according to the second object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and updates the first gesture recognition criteria (e.g., without changing the appearance of the user interface object according to the first object manipulation behavior) by increasing the threshold of the first gesture recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the first gesture recognition criteria). For example, in fig. 14I, the size of virtual object 11002 has been increased in accordance with a determination that the scaling criteria have been satisfied (before the rotation criteria are satisfied), and the threshold RT for the rotation criteria is increased to RT'. In some embodiments, it is relatively easy to initiate a rotation operation on an object by satisfying the criteria for identifying a gesture for rotating before satisfying the criteria for identifying a gesture for translating or scaling the object (assuming the criteria for identifying a gesture for rotating an object have not been satisfied before).
Once the criteria for identifying a gesture for translating or scaling an object are satisfied, it becomes more difficult to initiate a rotation operation on the object (e.g., the criteria for rotating the object are updated to have an increased movement parameter threshold), and the object manipulation behavior is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object. In some implementations, the appearance of the user interface object is dynamically and continuously changed (e.g., displaying different sizes, positions, perspectives, reflections, shadows, etc.) according to the value of the respective movement parameter of the input. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between the movement parameters (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior). Increasing the second threshold of input movement required for the second object manipulation once the input movement has increased above the first threshold for the first object manipulation enhances the operability of the device (e.g., by helping a user avoid accidentally performing the second object manipulation while attempting to provide input for performing the first object manipulation). Improving the user's ability to control different types of object manipulation behaviors enhances the operability of the device and makes the user-device interface more efficient.
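The threshold-raising behavior described in the preceding two paragraphs can be sketched for two manipulation behaviors (rotation and zoom). This is a minimal sketch, not the claimed method; the struct names are invented, and the numeric thresholds are borrowed from the example values given later in this description (12/18 degrees of rotation, 50/90 points of pinch movement).

```swift
import Foundation

/// A gesture criterion with an original threshold and a raised threshold that is
/// applied once a competing criterion has been recognized first.
struct GestureCriterion {
    let originalThreshold: Double
    let raisedThreshold: Double
    var currentThreshold: Double
    var recognized = false

    init(original: Double, raised: Double) {
        originalThreshold = original
        raisedThreshold = raised
        currentThreshold = original
    }
}

/// Illustrative sketch: whichever criterion is met first is unlocked, and the other
/// criterion's threshold is raised until it, too, is met. Once a criterion has been
/// met, its behavior stays available without the input having to keep exceeding the
/// threshold; once both are met, both behaviors apply.
struct ManipulationRecognizer {
    var rotation = GestureCriterion(original: 12, raised: 18)   // degrees of twist
    var zoom = GestureCriterion(original: 50, raised: 90)       // points of pinch spread

    /// Feed accumulated movement for the current portion of the input; returns which
    /// behaviors may change the object's appearance for this portion.
    mutating func evaluate(twistDegrees: Double, pinchPoints: Double) -> (rotate: Bool, zoom: Bool) {
        if !rotation.recognized && twistDegrees >= rotation.currentThreshold {
            rotation.recognized = true
            // Bias further manipulation toward rotation by making zoom harder to start.
            if !zoom.recognized { zoom.currentThreshold = zoom.raisedThreshold }
        }
        if !zoom.recognized && pinchPoints >= zoom.currentThreshold {
            zoom.recognized = true
            if !rotation.recognized { rotation.currentThreshold = rotation.raisedThreshold }
        }
        return (rotation.recognized, zoom.recognized)
    }
}
```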
In some embodiments, after updating the appearance of the user interface object based on the first portion of the input, the device detects (19008) a second portion of the input (e.g., by the same continuously maintained contact as in the first portion of the input, or by a different contact detected after termination (e.g., liftoff) of the contact of the first portion of the input). In some embodiments, the second portion of the input is detected based on continuously detected input directed to the user interface object. In response to detecting the second portion of the input, the device updates (19010) an appearance of the user interface object based on the second portion of the input, including: in accordance with a determination that the first portion of the input satisfies the first gesture recognition criteria and the second portion of the input does not satisfy the updated second gesture recognition criteria: changing the appearance of the user interface object according to the first object manipulation behavior based on the second portion of the input (e.g., based on a direction and/or magnitude of the second portion of the input, and regardless of whether the second portion of the input satisfies the first gesture recognition criteria or the original second gesture recognition criteria), rather than changing the appearance of the user interface object according to the second object manipulation behavior (e.g., even if the second portion of the input does satisfy the original second gesture recognition criteria before the update); and in accordance with a determination that the first portion of the input satisfies the second gesture recognition criteria and the second portion of the input does not satisfy the updated first gesture recognition criteria: changing the appearance of the user interface object according to the second object manipulation behavior based on the second portion of the input (e.g., based on a direction and/or magnitude of the second portion of the input, and regardless of whether the second portion of the input satisfies the second gesture recognition criteria or the original first gesture recognition criteria), rather than changing the appearance of the user interface object according to the first object manipulation behavior (e.g., even if the second portion of the input does satisfy the original first gesture recognition criteria before the update).
In some implementations (19012), after the first portion of the input satisfies the first gesture recognition criteria, when the appearance of the user interface object is changed according to the first object manipulation behavior based on the second portion of the input, the second portion of the input includes the input that satisfies the second gesture recognition criteria before the second gesture recognition criteria is updated (e.g., an original threshold value of the movement parameter of the input in the second gesture recognition criteria before the threshold value is increased) (e.g., the second portion of the input does not include the input that satisfies the updated second gesture recognition criteria).
In some implementations (19014), after the first portion of the input satisfies the second gesture recognition criteria, when the appearance of the user interface object is changed according to the second object manipulation behavior based on the second portion of the input, the second portion of the input includes the input that satisfies the first gesture recognition criteria before the first gesture recognition criteria is updated (e.g., an original threshold value of the movement parameter of the input in the first gesture recognition criteria before the threshold value is increased) (e.g., the second portion of the input does not include the input that satisfies the updated first gesture recognition criteria).
In some implementations (19016), after the first portion of the input satisfies the first gesture recognition criteria, the second portion of the input does not include the input that satisfies the first gesture recognition criteria (e.g., the original threshold value of the movement parameter with the input in the first gesture recognition criteria) when changing the appearance of the user interface object according to the first object manipulation behavior based on the second portion of the input. For example, after the first gesture recognition criteria are met once, the input no longer needs to continue to meet the first gesture recognition criteria in order to cause the first object manipulation behavior.
In some implementations (19018), after the first portion of the input satisfies the second gesture recognition criteria, when the appearance of the user interface object is changed according to the second object manipulation behavior based on the second portion of the input, the second portion of the input does not include input that satisfies the second gesture recognition criteria (e.g., the original threshold value of the movement parameter of the input in the second gesture recognition criteria). For example, after the second gesture recognition criteria are met once, the input no longer needs to continue to meet the second gesture recognition criteria in order to cause the second object manipulation behavior. Performing the second object manipulation behavior when the second portion of the input includes movement that increases above the increased threshold enhances the operability of the device (e.g., by providing the user with the ability to intentionally perform the second object manipulation, after the first object manipulation has been performed, by meeting the increased criteria, without requiring the user to provide a new input). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, updating the appearance of the user interface object based on the second portion of the input includes (19020): in accordance with a determination that the first portion of the input satisfies the second gesture recognition criteria and the second portion of the input satisfies the updated first gesture recognition criteria: changing an appearance of the user interface object according to the first object manipulation behavior based on the second portion of the input; and changing an appearance of the user interface object according to the second object manipulation behavior based on the second portion of the input; and, in accordance with a determination that the first portion of the input satisfies the first gesture recognition criteria and the second portion of the input satisfies the updated second gesture recognition criteria: changing an appearance of the user interface object according to the first object manipulation behavior based on the second portion of the input; and changing an appearance of the user interface object according to the second object manipulation behavior based on the second portion of the input. For example, after the input first meets the first gesture recognition criteria and then meets the updated second gesture recognition criteria, the input may then cause both the first object manipulation behavior and the second object manipulation behavior. Likewise, after the input first meets the second gesture recognition criteria and then meets the updated first gesture recognition criteria, the input may then cause both the first object manipulation behavior and the second object manipulation behavior. Updating the object in accordance with both the first object manipulation behavior and the second object manipulation behavior in response to a portion of the input detected after the second gesture recognition criteria and the updated first gesture recognition criteria are met enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using both the first object manipulation and the second object manipulation, after the increased threshold is met, without requiring the user to provide a new input). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after updating the appearance of the user interface object based on the second portion of the input, (e.g., after the first gesture recognition criteria and the updated second gesture recognition criteria are met, or after the second gesture recognition criteria and the updated first gesture recognition criteria are met) the device detects (19022) a third portion of the input (e.g., by the same continuously held contact in the first and second portions of the input, or a different contact detected after termination (e.g., liftoff) of the contact in the first and second portions of the input). In response to detecting the third portion of the input, the device updates (19024) an appearance of the user interface object based on the third portion of the input, including: changing an appearance of the user interface object according to the first object manipulation behavior based on the third portion of the input; and changing an appearance of the user interface object according to the second object manipulation behavior based on the third portion of the input. For example, after the first gesture recognition criteria and the updated second gesture recognition criteria are satisfied, or after both the second gesture recognition criteria and the updated first gesture recognition criteria are satisfied, the input may then cause the first object manipulation behavior and the second object manipulation behavior regardless of thresholds in the original or updated first and second gesture recognition criteria. Updating the object in accordance with the first object manipulation behavior and the second object manipulation behavior enhances operability of the device in response to a portion of the input detected after the second gesture recognition criteria and the updated first gesture recognition criteria are met (e.g., by providing the user with the ability to freely manipulate the object using the first object manipulation and the second object manipulation after demonstrating an intent to perform the first object manipulation type by meeting an increased threshold without requiring the user to provide new input). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations (19026), the third portion of the input does not include input that satisfies the first gesture recognition criteria or input that satisfies the second gesture recognition criteria. For example, after the first gesture recognition criteria and the updated second gesture recognition criteria are satisfied, or after both the second gesture recognition criteria and the updated first gesture recognition criteria are satisfied, the input may then cause the first object manipulation behavior and the second object manipulation behavior regardless of thresholds in the original or updated first and second gesture recognition criteria. Updating the object in accordance with the first object manipulation behavior and the second object manipulation behavior in response to a portion of the input detected after the second gesture recognition criteria and the updated first gesture recognition criteria are met enhances operability of the device (e.g., by providing the user with the ability to freely manipulate the object using the first object manipulation and the second object manipulation after the elevated criteria are met without the user providing new input). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the plurality of object manipulation behaviors includes (19028) a third object manipulation behavior (e.g., resizing the user interface object) that is performed in response to an input satisfying a third gesture recognition criterion (e.g., a zoom criterion). Updating the appearance of the user interface object based on the first portion of the input includes (19030): in accordance with a determination that the first portion of the input satisfies the first gesture recognition criteria before the second gesture recognition criteria are satisfied or the third gesture recognition criteria are satisfied: based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input), changing an appearance of the user interface object (e.g., rotating the user interface object) according to the first object manipulation behavior; and updating the second gesture recognition criteria (e.g., without changing the appearance of the user interface object according to the second object manipulation behavior) by increasing a threshold of the second gesture recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture recognition criteria). For example, it is relatively easy to initiate a pan or zoom operation on an object by satisfying the criteria for identifying a gesture for panning or zooming (assuming the criteria for panning or zooming were not previously satisfied) before the criteria for identifying a gesture for rotating the object are satisfied. Once the criteria for identifying a gesture for rotating an object are satisfied, it becomes more difficult to initiate a pan or zoom operation on the object (e.g., the criteria for pan and zoom are updated to have an increased movement parameter threshold), and object manipulation is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object. The device also updates the third gesture recognition criteria by increasing a threshold of the third gesture recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the third gesture recognition criteria). For example, it is relatively easy to initiate a pan or zoom operation on an object by satisfying the criteria for identifying a gesture for panning or zooming (assuming the criteria for panning or zooming were not previously satisfied) before the criteria for identifying a gesture for rotating the object are satisfied. Once the criteria for identifying a gesture for rotating an object are satisfied, it becomes more difficult to initiate a pan or zoom operation on the object (e.g., the criteria for pan and zoom are updated to have an increased movement parameter threshold), and object manipulation is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object.
In accordance with a determination that the input satisfies the second gesture recognition criteria before either the first gesture recognition criteria is satisfied or the third gesture recognition criteria is satisfied: the device changes an appearance of the user interface object (e.g., translates or resizes the user interface object) according to the second object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and (e.g., without changing the appearance of the user interface object according to the first object manipulation behavior) update the first gesture recognition criteria (e.g., increase a threshold required for movement parameters (e.g., movement distance, speed, etc.) in the first gesture recognition criteria) by increasing the threshold of the first gesture recognition criteria. For example, it is relatively easy to initiate a rotation operation on an object by satisfying the criteria for identifying a gesture for rotating before satisfying the criteria for identifying a gesture for translating or scaling the object (assuming that the criteria for identifying a gesture for rotating an object have not been satisfied before). Once the criteria for identifying a gesture for translating or scaling an object are satisfied, it becomes more difficult to initiate a rotation operation on the object (e.g., the criteria for rotating the object are updated to have an increased movement parameter threshold), and the object manipulation behavior is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object. In some implementations, the appearance of the user interface object is dynamically and continuously changed (e.g., displaying different sizes, positions, perspectives, reflections, shadows, etc.) according to the value of the respective movement parameter entered. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between the movement parameters (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance of each type of manipulation behavior). The device updates the third gesture recognition criteria by increasing a threshold of the third gesture recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the third gesture recognition criteria). For example, it is relatively easy to initiate a pan or zoom operation on an object by satisfying the criteria for identifying a gesture for panning or zooming (assuming the criteria for panning or zooming were not previously satisfied) before the criteria for identifying a gesture for rotating the object were satisfied. Once the criteria for identifying a gesture for rotating an object are satisfied, it becomes more difficult to initiate a pan or zoom operation on the object (e.g., the criteria for pan and zoom are updated to have an increased movement parameter threshold), and object manipulation is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object. 
In accordance with a determination that the input satisfies the third gesture recognition criteria before either the first gesture recognition criteria or the second gesture recognition criteria are satisfied: the device changes an appearance of the user interface object (e.g., resizes the user interface object) according to the third object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and the device updates (e.g., without changing the appearance of the user interface object in accordance with the first object manipulation behavior and the second object manipulation behavior) the first gesture recognition criteria (e.g., by increasing a threshold of the first gesture recognition criteria) (e.g., by increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the first gesture recognition criteria). For example, it is relatively easy to initiate a rotation operation on an object by satisfying the criteria for identifying a gesture for rotating before satisfying the criteria for identifying a gesture for translating or scaling the object (assuming that the criteria for identifying a gesture for rotating an object have not been satisfied before). Once the criteria for identifying a gesture for translating or scaling an object are satisfied, it becomes more difficult to initiate a rotation operation on the object (e.g., the criteria for rotating the object are updated to have an increased movement parameter threshold), and the object manipulation behavior is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object. In some implementations, the appearance of the user interface object is dynamically and continuously changed (e.g., displaying different sizes, positions, perspectives, reflections, shadows, etc.) according to the value of the respective movement parameter entered. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between the movement parameters (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance of each type of manipulation behavior). The device updates the second gesture recognition criteria by increasing a threshold of the second gesture recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture recognition criteria). For example, it is relatively easy to initiate a pan or zoom operation on an object by satisfying the criteria for identifying a gesture for panning or zooming (assuming the criteria for panning or zooming were not previously satisfied) before the criteria for identifying a gesture for rotating the object were satisfied. Once the criteria for identifying a gesture for rotating an object are satisfied, it becomes more difficult to initiate a pan or zoom operation on the object (e.g., the criteria for pan and zoom are updated to have an increased movement parameter threshold), and object manipulation is biased toward a manipulation behavior corresponding to the gesture that has been identified and used to manipulate the object. 
Updating the object in accordance with the third object manipulation behavior in response to only a portion of the input detected when the corresponding third gesture recognition criteria are satisfied enhances operability of the device (e.g., by helping a user avoid accidentally performing the third object manipulation while attempting to provide input for performing the first object manipulation or the second object manipulation). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
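The same pattern, generalized to an arbitrary set of manipulation behaviors (rotate, translate, zoom), can be sketched as follows: whichever criterion is met first is recognized, and the thresholds of all not-yet-recognized criteria are raised. The dictionary keys are invented, and the threshold values are drawn from the example values quoted later in this description.

```swift
import Foundation

/// A per-behavior criterion whose threshold can be raised once another behavior
/// has been recognized first.
struct Criterion {
    var threshold: Double
    let raisedThreshold: Double
    var recognized = false
}

/// Illustrative sketch generalizing the threshold-raising pattern to several
/// manipulation behaviors at once.
struct MultiBehaviorRecognizer {
    var criteria: [String: Criterion] = [
        "rotate":    Criterion(threshold: 12, raisedThreshold: 18),
        "translate": Criterion(threshold: 40, raisedThreshold: 70),
        "zoom":      Criterion(threshold: 50, raisedThreshold: 90),
    ]

    /// `movement` maps each behavior to the accumulated movement parameter that is
    /// relevant to it (degrees of twist, points of pan, points of pinch spread).
    /// Returns the set of behaviors that may currently change the object's appearance.
    mutating func evaluate(movement: [String: Double]) -> Set<String> {
        for (name, amount) in movement {
            guard var criterion = criteria[name], !criterion.recognized else { continue }
            if amount >= criterion.threshold {
                criterion.recognized = true
                criteria[name] = criterion
                // Bias further manipulation toward recognized behaviors: raise the bar
                // for every behavior that has not been recognized yet.
                for other in Array(criteria.keys) where other != name {
                    if var otherCriterion = criteria[other], !otherCriterion.recognized {
                        otherCriterion.threshold = otherCriterion.raisedThreshold
                        criteria[other] = otherCriterion
                    }
                }
            }
        }
        return Set(criteria.filter { $0.value.recognized }.map { $0.key })
    }
}
```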
In some embodiments, the plurality of object manipulation behaviors includes (19032) a third object manipulation behavior that is performed in response to an input satisfying third gesture recognition criteria. The first portion of the input does not satisfy the third gesture recognition criteria before the first gesture recognition criteria or the second gesture recognition criteria are satisfied. After the first portion of the input meets the first gesture recognition criteria or the second gesture recognition criteria, the device updates the third gesture recognition criteria by increasing a threshold of the third gesture recognition criteria, and the second portion of the input does not satisfy the updated third gesture recognition criteria before the updated first gesture recognition criteria or the updated second gesture recognition criteria are satisfied (e.g., the device updates the third gesture recognition criteria by increasing the threshold of the third gesture recognition criteria after the first portion of the input satisfies one of the first gesture recognition criteria or the second gesture recognition criteria). In response to detecting the third portion of the input (19034): in accordance with a determination that the third portion of the input satisfies the updated third gesture recognition criteria (e.g., regardless of whether the third portion of the input satisfies the first gesture recognition criteria or the second gesture recognition criteria (e.g., updated or original)), the device changes an appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input (e.g., based on a direction and/or magnitude of the third portion of the input) (e.g., while also changing the appearance of the user interface object in accordance with the first object manipulation behavior and the second object manipulation behavior (e.g., even if the third portion of the input does not satisfy the original first gesture recognition criteria and second gesture recognition criteria)). In accordance with a determination that the third portion of the input does not satisfy the updated third gesture recognition criteria, the device forgoes changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input (e.g., while still changing the appearance of the user interface object in accordance with the first object manipulation behavior and the second object manipulation behavior (e.g., even if the third portion of the input does not satisfy the original first gesture recognition criteria and second gesture recognition criteria)). Updating the object in accordance with the first object manipulation behavior, the second object manipulation behavior, and the third object manipulation behavior in response to detecting a portion of the input after the second gesture recognition criteria, the updated first gesture recognition criteria, and the updated third gesture recognition criteria are satisfied enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using the first object manipulation type, the second object manipulation type, and the third object manipulation type, without providing new input, after establishing an intent to perform all three object manipulation types by satisfying the increased thresholds).
Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations (19036), the third portion of the input satisfies the updated third gesture recognition criteria. After updating the appearance of the user interface object based on the third portion of the input (e.g., after the first gesture recognition criteria and the updated second and third gesture recognition criteria are both satisfied, or after the second gesture recognition criteria and the updated first and third gesture recognition criteria are both satisfied), the device detects (19038) the fourth portion of the input (e.g., through the same continuously maintained contact in the first, second, and third portions of the input, or a different contact detected after termination (e.g., liftoff) of the contact in the first, second, and third portions of the input). In response to detecting the fourth portion of the input, the device updates (19040) an appearance of the user interface object based on the fourth portion of the input, including: changing an appearance of the user interface object according to the first object manipulation behavior based on the fourth portion of the input; changing an appearance of the user interface object according to the second object manipulation behavior based on the fourth portion of the input; and changing an appearance of the user interface object according to the third object manipulation behavior based on the fourth portion of the input. For example, after the first gesture recognition criteria and the updated second and third gesture recognition criteria are satisfied, or after the second and updated first and third gesture recognition criteria are satisfied, the input may then cause all three types of manipulation behavior regardless of the threshold values in the original or updated first, second and third gesture recognition criteria.
In some embodiments, the fourth portion of the input does not include (19042): an input satisfying a first gesture recognition criterion, an input satisfying a second gesture recognition criterion, or an input satisfying a third gesture recognition criterion. For example, after the first gesture recognition criteria and the updated second and third gesture recognition criteria are satisfied, or after the second and updated first and third gesture recognition criteria are satisfied, the input may then cause all three types of manipulation behavior regardless of the threshold values in the original or updated first, second and third gesture recognition criteria. The need for multiple simultaneously detected contacts for a gesture enhances the operability of the device (e.g., by helping a user avoid accidentally performing object manipulations while providing input at less than a required number of simultaneously detected contacts). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations (19044), the first and second gesture recognition criteria (and the third gesture recognition criteria) each require a first number of simultaneously detected contacts (e.g., two contacts) in order to be satisfied. In some embodiments, a single-finger gesture may also be used for panning, and the single-finger panning threshold is lower than the two-finger panning threshold. In some embodiments, the original and updated movement thresholds set for the two-finger pan gesture are 40 points and 70 points, respectively, of movement of the centroid of the contacts. In some embodiments, the original and updated movement thresholds set for the two-finger rotation gesture are 12 degrees and 18 degrees, respectively, of rotational movement of the contacts. In some embodiments, the original and updated movement thresholds set for the two-finger zoom gesture are 50 points and 90 points, respectively, of change in the distance between the contacts. In some embodiments, the movement threshold set for the single-finger drag gesture is 30 points.
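Plugging the example thresholds quoted above into the ManipulationSession sketch from earlier in this section illustrates the recognition order; the numeric values come from the paragraph above, while the names and the walk-through itself are assumptions.

```swift
// Example two-finger thresholds from the text, fed into the earlier sketch.
let session = ManipulationSession(criteria: [
    .translate: Criterion(original: 40, updated: 70),   // points of centroid movement
    .rotate:    Criterion(original: 12, updated: 18),   // degrees of rotation
    .scale:     Criterion(original: 50, updated: 90),   // points of change in contact distance
])
print(session.update(magnitudes: [.scale: 55]))    // scale recognized: 55 >= 50 (original threshold)
print(session.update(magnitudes: [.rotate: 14]))   // not yet: 14 < 18 (the raised threshold now applies)
print(session.update(magnitudes: [.rotate: 20]))   // rotate recognized: 20 >= 18
// A single-finger drag, by contrast, uses a lower threshold (30 points in the example above).
```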
In some embodiments (19046), the first object manipulation behavior changes a zoom level or a display size of the user interface object (e.g., resizing the objects by a pinch gesture (e.g., movement of contacts toward each other after recognizing the pinch gesture based on first gesture recognition criteria (e.g., original or updated)), and the second object manipulation behavior changes a rotation angle of the user interface object (e.g., changing a viewing perspective of the user interface object around the outer axis or the inner axis by a twist/rotate gesture (e.g., movement of contacts around a common trajectory after recognizing a twist/rotate gesture by second gesture recognition criteria (e.g., original or updated)). For example, the first object manipulation behavior changes the display size of the virtual object 11002 as described with reference to fig. 14G to 14I, and the second object manipulation behavior changes the rotation angle of the virtual object 11002 as described with reference to fig. 14B to 14E. In some embodiments, the second object manipulation behavior changes a zoom level or a display size of the user interface object (e.g., resizing the objects by a pinch gesture (e.g., movement of the contacts toward each other after recognition of the pinch gesture based on second gesture recognition criteria (e.g., original or updated)), and the first object manipulation behavior changes a rotation angle of the user interface object (e.g., changing a viewing perspective of the user interface object around the outer axis or the inner axis by a twist/rotate gesture (e.g., movement of the contacts around a common trajectory after recognition of a twist/rotate gesture by the first gesture recognition criteria (e.g., original or updated)).
In some implementations (19048), the first object manipulation behavior changes a zoom level or a display size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward each other after recognizing the pinch gesture based on the first gesture recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes a position of the user interface object in the first user interface area (e.g., dragging the user interface object by a single-finger or two-finger drag gesture (e.g., movement of the contacts in a corresponding direction after recognizing the drag gesture by the second gesture recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the display size of the virtual object 11002 as described with reference to fig. 14G through 14I, and the second object manipulation behavior changes the position of the virtual object 11002 in the user interface as described with respect to fig. 14B through 14E. In some embodiments, the second object manipulation behavior changes the zoom level or the display size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward each other after recognizing the pinch gesture based on the second gesture recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the position of the user interface object in the first user interface area (e.g., dragging the user interface object by a single-finger or two-finger drag gesture (e.g., movement of the contacts in a corresponding direction after recognizing the drag gesture by the first gesture recognition criteria (e.g., original or updated))).
In some implementations (19050), the first object manipulation behavior changes a position of the user interface object in the first user interface region (e.g., dragging the object by a single-finger or two-finger drag gesture (e.g., movement of the contacts in a respective direction after the drag gesture is recognized by the first gesture recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes a rotation angle of the user interface object (e.g., changing a viewing perspective of the user interface object around an outer axis or an inner axis by a twist/rotate gesture (e.g., movement of the contacts around a common trajectory after the twist/rotate gesture is recognized by the second gesture recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the position of the virtual object 11002 in the user interface as described with reference to fig. 14B through 14E, and the second object manipulation behavior changes the rotation angle of the virtual object 11002 as described with respect to fig. 14B through 14E. In some embodiments, the second object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the object by a single-finger or two-finger drag gesture (e.g., movement of the contacts in a respective direction after the drag gesture is recognized by the second gesture recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object around an outer axis or an inner axis by a twist/rotate gesture (e.g., movement of the contacts around a common trajectory after the twist/rotate gesture is recognized by the first gesture recognition criteria (e.g., original or updated))).
In some embodiments (19052), the first portion of the input and the second portion of the input are provided by a plurality of consecutively maintained contacts. The device re-establishes (19054) the first and second gesture recognition criteria (e.g., with original thresholds) to initiate additional first and second object manipulation behaviors after detecting liftoff of the plurality of successively held contacts. For example, after the contact is lifted off, the device re-establishes gesture recognition thresholds for rotation, translation, and scaling of the newly detected touch input. Reestablishing the threshold for input movement after ending input by liftoff of the contact enhances the operability of the device (e.g., by reducing the degree of input required to perform object manipulation by resetting the increased movement threshold each time a new input is provided). Reducing the degree of input required to perform the operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations (19056), the first gesture recognition criteria correspond to rotation about a first axis and the second gesture recognition criteria correspond to rotation about a second axis orthogonal to the first axis. In some embodiments, instead of updating thresholds only for different types of gestures, the updates also apply to thresholds set for different sub-types of manipulation behavior (e.g., rotation about a first axis versus rotation about a different axis) within one type of manipulation behavior corresponding to the recognized gesture type (e.g., a twist/pivot gesture). For example, once rotation about a first axis is recognized and performed, the thresholds set for rotation about the different axes are updated (e.g., increased) and must be overcome by subsequent inputs in order to trigger rotation about those different axes. Increasing the threshold amount of input movement required to rotate the object about the second axis, after the input movement has increased above the threshold amount of input movement required to rotate the object about the first axis, enhances the operability of the device (e.g., by helping a user avoid accidentally rotating the object about the second axis while attempting to rotate the object about the first axis). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
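Operations 19054 and 19056 can be sketched the same way: recognizing rotation about one axis raises the threshold for the orthogonal axis, and liftoff of the continuously maintained contacts restores the original thresholds. The axis labels, degree values, and method names below are illustrative assumptions.

```swift
// Hypothetical sketch: per-axis rotation thresholds that rise once the other
// axis is recognized, and a reset when all contacts lift off.
struct AxisRotationRecognizer {
    var originalDegrees = 12.0
    var updatedDegrees  = 18.0
    private(set) var recognizedAxes: Set<String> = []   // e.g. "x" or "y"

    mutating func update(rotation: [String: Double]) -> Set<String> {
        for (axis, degrees) in rotation where !recognizedAxes.contains(axis) {
            let threshold = recognizedAxes.isEmpty ? originalDegrees : updatedDegrees
            if abs(degrees) >= threshold { recognizedAxes.insert(axis) }
        }
        return recognizedAxes
    }

    /// On liftoff of the continuously maintained contacts, a newly detected
    /// touch input starts again from the original thresholds.
    mutating func contactsLifted() { recognizedAxes = [] }
}

var rotator = AxisRotationRecognizer()
_ = rotator.update(rotation: ["y": 13])   // rotation about y recognized (13 >= 12)
_ = rotator.update(rotation: ["x": 15])   // not yet: 15 < 18, the raised threshold
rotator.contactsLifted()                  // the next input starts again from 12 degrees
```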
It should be understood that the particular order in which the operations in fig. 19A-19H are described is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000) also apply in a similar manner to method 19000 described above with respect to fig. 19A-19H. For example, the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described above with reference to method 19000 optionally has one or more of the features of the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000). For the sake of brevity, these details are not repeated here.
Figures 20A-20F are flow diagrams illustrating a method 20000 of generating an audio alert in accordance with a determination that movement of a device causes a virtual object to move outside of a displayed field of view of one or more cameras of the device. Method 20000 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch screen display that acts as both a display generation component and a touch-sensitive surface), one or more audio output generators, and one or more cameras. Some operations of method 20000 are optionally combined, and/or the order of some operations is optionally changed.
The device displays (20002), via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of the field of view of the one or more cameras (e.g., in response to a request to place the virtual object in an augmented reality view of the physical environment surrounding the device that includes the cameras, such as a tap on a "world" button displayed with a staging view of the virtual object; the first user interface region is, for example, a user interface that displays an augmented reality view of the physical environment surrounding the device that includes the cameras), wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within the physical environment captured in the field of view of the one or more cameras (e.g., the virtual object is displayed in an orientation and position such that a fixed angle between the representation of the virtual object and the plane is maintained on the display; for example, the virtual object appears to remain at a fixed position on the plane, or to roll along the plane, as the field of view changes). For example, as shown in fig. 15V, a virtual object 11002 is displayed in a user interface area that includes a field of view 6036 of one or more cameras.
The device detects (20004) movement of the device (e.g., lateral movement and/or rotation of the device including the one or more cameras) that adjusts the field of view of the one or more cameras. For example, as described with reference to fig. 15V-15W, movement of device 100 adjusts the field of view of one or more cameras.
In response to detecting movement (20006) of the device that adjusts the field of view of the one or more cameras: while adjusting the field of view of the one or more cameras, the device adjusts the display of the representation of the virtual object in the first user interface region according to the first spatial relationship (e.g., orientation and/or position) between the virtual object and the plane detected within the field of view of the one or more cameras; and in accordance with a determination that the movement of the device causes more than a threshold amount (e.g., 100%, 50%, or 20%) of the virtual object to move outside of the displayed portion of the field of view of the one or more cameras (e.g., because the spatial relationship between the representation of the virtual object and the plane detected within the physical environment captured in the field of view of the one or more cameras remains fixed during movement of the device relative to the physical environment), the device generates, by the one or more audio output generators, a first audio alert (e.g., a voice notification indicating that more than the threshold amount of the virtual object is no longer displayed in the camera view). For example, as described with reference to fig. 15W, an audio alert 15118 is generated in response to movement of the device 100 causing the virtual object 11002 to move outside of the displayed portion of the field of view 6036 of the one or more cameras. Generating an audio output in accordance with a determination that movement of the device causes the virtual object to move outside of the displayed augmented reality view provides feedback to the user indicating the extent to which the movement of the device affects the display of the virtual object relative to the augmented reality view. Providing improved feedback to the user enhances the operability of the device (e.g., by providing feedback that allows the user to perceive whether the virtual object has moved off the display without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, outputting the first audio alert includes (20008) generating an audio output indicating the amount of the virtual object that remains visible in the displayed portion of the field of view of the one or more cameras (e.g., the amount of the virtual object that remains visible is measured relative to the total size of the virtual object from the current viewing perspective (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output says "object x is 20% visible"). For example, in response to movement of the device 100 causing the virtual object 11002 to move partially out of the displayed portion of the field of view 6036 of the one or more cameras, as described with reference to figs. 15X-15Y, the audio alert 15126 is generated to include a notification 15128 indicating "chair 90% visible, occupying 20% of the screen". Generating an audio output indicative of the amount of the virtual object that is visible in the displayed augmented reality view provides feedback to the user (e.g., indicating the degree to which movement of the device changes how much of the virtual object is visible). Providing improved feedback to the user (e.g., by providing feedback that allows the user to perceive whether the virtual object has moved off the display without cluttering the display with additional displayed information and without requiring the user to view the display) enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, outputting the first audio alert includes (20010) generating an audio output that indicates the amount of the displayed portion of the field of view occupied by the virtual object (e.g., the amount of the augmented reality view of the physical environment occupied by the virtual object (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output includes a notification saying "object x occupies 15% of the world view"). In some embodiments, the audio output further comprises a description of an action performed by the user that caused the change in the display state of the virtual object. For example, the audio output includes a notification saying "the device moved to the left; object x is 20% visible, occupying 15% of the world view". For example, in fig. 15Y, an audio alert 15126 is generated that includes a notification 15128 indicating "chair 90% visible, occupying 20% of the screen". Generating an audio output indicative of the amount of the augmented reality view occupied by the virtual object provides feedback to the user (e.g., indicating the degree to which movement of the device changes how much of the augmented reality view is occupied). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive the size of the virtual object relative to the display without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
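One plausible way to derive announcements such as "chair 90% visible, occupying 20% of the screen" (operations 20008 and 20010) is to intersect the object's projected screen bounds with the displayed viewport. The rectangle type, the rounding, and the phrasing below are assumptions, not details given in this disclosure.

```swift
// Hypothetical sketch: compute how much of the virtual object remains visible
// (20008) and how much of the displayed field of view it occupies (20010).
struct Rect {
    var x, y, width, height: Double
    var area: Double { max(width, 0) * max(height, 0) }
    func intersection(_ other: Rect) -> Rect {
        let x0 = max(x, other.x), y0 = max(y, other.y)
        let x1 = min(x + width, other.x + other.width)
        let y1 = min(y + height, other.y + other.height)
        return Rect(x: x0, y: y0, width: x1 - x0, height: y1 - y0)
    }
}

func visibilityAnnouncement(objectBounds: Rect, viewport: Rect, name: String) -> String {
    let visibleArea = objectBounds.intersection(viewport).area
    let percentVisible = objectBounds.area > 0 ? 100 * visibleArea / objectBounds.area : 0
    let percentOfScreen = viewport.area > 0 ? 100 * visibleArea / viewport.area : 0
    return "\(name) \(Int(percentVisible.rounded()))% visible, occupying \(Int(percentOfScreen.rounded()))% of the screen"
}

// Example: an object whose projected bounds are partially off screen.
let viewport = Rect(x: 0, y: 0, width: 375, height: 812)
let chair = Rect(x: 300, y: 200, width: 150, height: 300)
print(visibilityAnnouncement(objectBounds: chair, viewport: viewport, name: "Chair"))
// "Chair 50% visible, occupying 7% of the screen"
```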
In some embodiments, the device detects (20012) input by contact at a location on the touch-sensitive surface that corresponds to a representation of the field of view of the one or more cameras (e.g., detects a tap input or a double-tap input on a portion of the touch screen that displays an augmented reality view of the physical environment). In response to detecting the input, and in accordance with a determination that the input is detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, the device generates (20014) a second audio alert (e.g., a click or buzz indicating that the virtual object cannot be located in the tapped area). For example, as described with reference to fig. 15Z, in response to an input detected on the touch screen 112 at a location corresponding to a portion of the field of view 6036 of the one or more cameras not occupied by the virtual object 11002, the device generates an audio alert 15130. In some embodiments, in response to detecting the input, in accordance with a determination that the input is detected at a second location corresponding to a second portion of the field of view of the one or more cameras occupied by the virtual object, forgoing generating the second audio alert. In some embodiments, instead of generating a second audio alert to indicate that the user has failed to locate the virtual object, the device generates a different audio alert indicating that the user has located the virtual object. In some embodiments, instead of generating a second audio alert, the device outputs an audio notification describing the operation performed on the virtual object (e.g., "object x is selected.", "object x is resized to a default size.", "object x is rotated to a default orientation," etc.) or the state of the virtual object (e.g., object x, 20% visible, occupying 15% of the world view).
Generating an audio output in response to an input detected at a location corresponding to a portion of the displayed augmented reality view that is not occupied by the virtual object provides feedback to the user (e.g., indicating that input must be provided at a different location in order to obtain information about the virtual object and/or perform an operation). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive whether an input successfully made contact with the virtual object, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
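The behavior of operations 20012-20014 amounts to hit-testing the tap location against the object's on-screen bounds. The sketch below reuses the Rect type from the previous sketch and uses stand-in strings in place of the actual audio alerts; all names are assumptions.

```swift
// Hypothetical sketch: decide between a "miss" alert and an object announcement
// when a tap lands inside the camera-view area.
func handleTap(at point: (x: Double, y: Double), objectBounds: Rect, objectStatus: String) -> String {
    let insideObject = point.x >= objectBounds.x && point.x <= objectBounds.x + objectBounds.width
        && point.y >= objectBounds.y && point.y <= objectBounds.y + objectBounds.height
    if insideObject {
        return objectStatus   // e.g. "Chair, 90% visible, occupying 20% of the screen"
    } else {
        return "miss sound"   // stand-in for the second audio alert (a click or buzz)
    }
}
```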
In some embodiments, outputting the first audio alert includes generating (20016) an audio output indicative of an operation performed on the virtual object (e.g., prior to generating the audio output, the device determines a currently selected operation and performs the operation in response to an input confirming the user's intent to perform the currently selected operation (e.g., a double tap)) and of a resulting state of the virtual object after performing the operation. For example, the audio output includes a notification saying "the device moved to the left; object x is 20% visible, occupying 15% of the world view", "object x rotated 30 degrees clockwise; the object is now rotated 50 degrees around the y-axis", or "object x zoomed in by 20% and occupies 50% of the world view". For example, as described with reference to figs. 15AH-15AI, in response to performance of a rotation operation with respect to the virtual object 11002, an audio alert 15190 is generated that includes a notification 15192 indicating "chair rotated five degrees counterclockwise; the chair is now rotated zero degrees relative to the screen". Generating an audio output indicative of the operation performed on the virtual object provides the user with feedback indicating how the provided input affected the virtual object. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive how the operation changes the virtual object without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (20018), the resulting state of the virtual object after performing the operation is described, in the audio output of the first audio alert, relative to a reference frame corresponding to the physical environment captured in the field of view of the one or more cameras (e.g., after manipulating the object (e.g., in response to a touch-based gesture or movement of the device), the device generates speech describing the new state of the object (e.g., rotated 30 degrees, rotated 60 degrees, or moved to the left relative to the initial position/orientation of the virtual object when the virtual object was initially placed in the augmented reality view of the physical environment)). For example, as described with reference to figs. 15AH-15AI, in response to performance of a rotation operation with respect to the virtual object 11002, an audio alert 15190 is generated that includes a notification 15192 indicating "chair rotated five degrees counterclockwise; the chair is now rotated zero degrees relative to the screen". In some embodiments, the operation includes movement of the device relative to the physical environment (e.g., causing movement of the virtual object relative to the representation of the portion of the physical environment captured in the field of view of the one or more cameras), and the speech describes the new state of the virtual object in response to the movement of the device relative to the physical environment. Generating an audio output indicative of the state of the virtual object after performing the operation on the object provides the user with feedback allowing the user to perceive how the operation changed the virtual object. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive how the operation changes the virtual object without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device detects (20020) additional movement of the device (e.g., lateral movement and/or rotation of the device including one or more cameras) that further adjusts the field of view of the one or more cameras after generating the first audio alert. For example, as described with respect to fig. 15W-15X, movement of device 100 further adjusts the field of view of one or more cameras (following the field of view adjustment of one or more cameras that occurs in response to movement of device 100 from fig. 15V-15W). In response to detecting further movement of the device to further adjust the field of view of the one or more cameras (20022): upon further adjusting the field of view of the one or more cameras, the device adjusts display of the representation of the virtual object in the first user interface area according to a first spatial relationship (e.g., orientation and/or position) between the virtual object and a plane detected within the field of view of the one or more cameras, and in accordance with a determination that additional movement of the device causes movement of the virtual object greater than a second threshold amount (e.g., 50%, 80%, or 100%) within the displayed portion of the field of view of the one or more cameras (e.g., because the spatial relationship between the representation of the virtual object and the plane detected within the physical environment captured in the field of view of the one or more cameras remains fixed during movement of the device relative to the physical environment), the device generates a first audio alert (e.g., an audio output including a notification, the notification indicates that more than a threshold amount of the virtual object is moved back into the camera view). For example, as described with reference to fig. 15X, an audio alert 15122 (e.g., including a bulletin, "chair now projected in the world, 100% visible, occupying 10% of the screen") is generated in response to movement of the device 100 causing the virtual object 11002 to move within the displayed portion of the field of view 6036 of the one or more cameras. In accordance with a determination that movement of the device causes the virtual object to move within the displayed augmented reality view, generating an audio output provides feedback to the user indicating an extent to which the movement of the device affects the display of the virtual object relative to the augmented reality view. Providing improved feedback to the user enhances the operability of the device (e.g., by providing a system that allows the user to perceive whether a virtual object has moved into the display without cluttering the display with additional display information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when a representation of a virtual object is displayed in a first user interface area and a first object manipulation type of a plurality of object manipulation types applicable to the virtual object is currently selected for the virtual object, the device detects (20024) a request to switch to another object manipulation type applicable to the virtual object (e.g., detects a swipe input by contact (e.g., including movement of the contact in a horizontal direction) at a location on the touch-sensitive surface corresponding to a portion of the first user interface area displaying a representation of a field of view of one or more cameras). For example, as described with reference to fig. 15AG, when the clockwise rotation control 15170 is currently selected, a swipe input is detected to switch to the counterclockwise rotation control 15180 (for rotating the virtual object 15160 counterclockwise). In response to detecting a request to switch to another object manipulation type applicable to the virtual object, the device generates (20026) an audio output (e.g., the audio output includes a notification saying "rotate object around x-axis", "resize object" or "move object on plane", etc.) naming a second object manipulation type among the plurality of object manipulation types applicable to the virtual object, wherein the second object manipulation type is different from the first object manipulation type. For example, in FIG. 15AH, in response to detecting the request described with reference to 15AG, an audio alert 15182 is generated, including notification 15184 ("selected: counterclockwise rotation"). In some embodiments, the device traverses a predefined list of applicable object manipulation types in response to a continuous swipe input in the same direction. In some embodiments, in response to detecting a swipe input in the reverse direction from an immediately preceding swipe input, the device generates an audio output that includes a notification naming the object manipulation type applicable to the virtual object of the previous notification (e.g., a notification preceding the most recently notified object manipulation type). In some embodiments, the device does not display a corresponding control for each object manipulation type applicable to the virtual object (e.g., no buttons or controls are displayed for gesture-initiated operations (e.g., rotation, resizing, translation, etc.)). Generating an audio output in response to a request to switch object manipulation types provides feedback to the user indicating that a switching operation has been performed. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information confirming that the switch input was successfully performed without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after generating (20028) an audio output naming a second object manipulation type among a plurality of object manipulation types applicable to the virtual object (e.g., the audio output includes a notification saying "rotate object around x-axis", "resize object", or "move object on plane", etc.), the device detects a request to perform an object manipulation behavior corresponding to the currently selected object manipulation type (e.g., detects a double-tap input by a contact on the touch-sensitive surface at a location corresponding to a portion of the first user interface area displaying a representation of the field of view of the one or more cameras). For example, as described with reference to fig. 15AH, a double-tap input is detected to rotate the virtual object 11002 counterclockwise. In response to detecting the request to perform an object manipulation behavior corresponding to the currently selected object manipulation type, the device performs (20030) an object manipulation behavior corresponding to the second object manipulation type (e.g., rotating the virtual object 5 degrees around the y-axis, increasing the size of the object by 5%, or moving the object on a plane by 20 pixels) (e.g., adjusting the display of the representation of the virtual object in the first user interface area according to the second object manipulation type). For example, in fig. 15AI, in response to detecting the request described with reference to fig. 15AH, the virtual object 11002 is rotated counterclockwise. In some embodiments, in addition to performing the object manipulation behavior corresponding to the second object manipulation type, the device outputs an audio output that includes a notification indicating the object manipulation behavior performed with respect to the virtual object and a resulting state of the virtual object after the object manipulation behavior is performed. For example, in fig. 15AI, an audio output 15190 is generated that includes a notification 15192 ("chair rotated five degrees counterclockwise; the chair is now rotated zero degrees relative to the screen"). Performing an object manipulation operation in response to an input detected while the operation is selected provides an additional control option for performing the operation (e.g., allowing the user to perform the operation by providing a tap-based input rather than requiring a multi-contact gesture input). Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device (e.g., by providing users who have a limited ability to provide multi-touch gestures with options to manipulate objects) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling users to use the device more quickly and efficiently.
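Operations 20024-20030 resemble a rotor-style interaction: horizontal swipes cycle through the applicable manipulation types, each selection is announced, and a double tap performs the selected manipulation by a fixed increment. The type names, the five-degree increment, and the announcement strings below are assumptions made for illustration.

```swift
// Hypothetical sketch of cycling through manipulation types with swipes and
// performing the selected one with a double tap.
enum ManipulationType: String, CaseIterable {
    case rotateClockwise = "Rotate clockwise"
    case rotateCounterclockwise = "Rotate counterclockwise"
    case scale = "Scale"
    case moveOnPlane = "Move on plane"
}

final class ObjectManipulationMenu {
    private(set) var selectedIndex = 0
    var selected: ManipulationType { ManipulationType.allCases[selectedIndex] }

    /// A horizontal swipe advances (or, for a swipe in the reverse direction,
    /// rewinds) the selection and returns the announcement to speak.
    func swipe(forward: Bool) -> String {
        let count = ManipulationType.allCases.count
        selectedIndex = (selectedIndex + (forward ? 1 : count - 1)) % count
        return "Selected: \(selected.rawValue)"
    }

    /// A double tap performs the selected manipulation by a fixed increment and
    /// returns an announcement describing the resulting state.
    func doubleTap(currentRotationDegrees: inout Double) -> String {
        switch selected {
        case .rotateClockwise: currentRotationDegrees += 5
        case .rotateCounterclockwise: currentRotationDegrees -= 5
        default: break   // scaling and movement omitted for brevity
        }
        return "Rotated to \(Int(currentRotationDegrees)) degrees relative to the screen"
    }
}

let menu = ObjectManipulationMenu()
var rotation = 0.0
print(menu.swipe(forward: true))                          // "Selected: Rotate counterclockwise"
print(menu.doubleTap(currentRotationDegrees: &rotation))  // "Rotated to -5 degrees relative to the screen"
```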
In some embodiments, in response to detecting a request to switch to another object manipulation type applicable to the virtual object (20032): in accordance with a determination that the second object manipulation type is a continuously adjustable manipulation type, the device generates an audio alert and an audio output naming the second object manipulation type to indicate that the second object manipulation type is a continuously adjustable manipulation type (e.g., outputs an audio output saying "adjustable" (e.g., "rotate object clockwise about y-axis") after naming the audio notification of the second object manipulation type); the device detects a request to perform an object manipulation behavior corresponding to the second object manipulation type, including detecting a swipe input at a location on the touch-sensitive surface corresponding to a portion of the first user interface area displaying a representation of the field of view of the one or more cameras (e.g., following detection of a double-tap input by contact at a location on the touch-sensitive surface corresponding to a portion of the first user interface area displaying a representation of the field of view of the one or more cameras); and in response to detecting a request to perform an object manipulation behavior corresponding to a second object manipulation type, the device performs the object manipulation behavior corresponding to the second object manipulation type by an amount corresponding to a size of the slide input (e.g., rotating the virtual object 5 or 10 degrees about the y-axis, or increasing the size of the object by 5% or 10%, or moving the object on the plane by 20 or 40 pixels, depending on whether the magnitude of the swipe input is a first amount or a second amount greater than the first amount). For example, as described with reference to fig. 15J through 15K, when the clockwise rotation control 15038 is currently selected, a swipe input is detected to switch to the zoom control 15064. An audio alert 15066 is generated that includes the notification 15068 ("proportional: tunable"). As described with reference to fig. 15K-15L, a swipe input is detected for zooming in on the virtual object 11002, and in response to the input, a zoom operation is performed on the virtual object 11002 (in the illustrative example of fig. 15K-15L, an input for a continuously adjustable manipulation is detected while displaying the landing view interface 6010, although it should be appreciated that similar inputs may be detected at a location on the touch-sensitive surface that corresponds to a portion of the first user interface area displaying a representation of the field of view of one or more cameras). In some embodiments, in addition to performing the second object manipulation behavior, the device outputs an audio notification indicating an amount of the object manipulation behavior performed with respect to the virtual object and a resulting state of the virtual object after performing the object manipulation behavior. Performing the object manipulation operation in response to the swipe input provides additional control options for performing the operation (e.g., allowing the user to perform the operation by providing the swipe input rather than requiring the two-touch input). 
Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device (e.g., by providing users who have a limited ability to provide multi-contact gestures with options to manipulate objects) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling users to use the device more quickly and efficiently.
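For a continuously adjustable manipulation type (operation 20032), the amount of the manipulation scales with the magnitude of the swipe. The mapping from swipe distance to scale change below is an assumed example, not a value given in this disclosure.

```swift
// Hypothetical sketch: a continuously adjustable manipulation whose amount
// depends on the magnitude of the swipe.
func adjustScale(currentScalePercent: Double, swipeMagnitudePoints: Double) -> (newScale: Double, announcement: String) {
    // Larger swipes produce proportionally larger adjustments (assumed mapping).
    let deltaPercent = (swipeMagnitudePoints / 10).rounded()   // e.g. a 50-point swipe -> 5%
    let newScale = currentScalePercent + deltaPercent
    return (newScale, "Scale: adjustable. Now \(Int(newScale))% of original size")
}

let (scale, announcement) = adjustScale(currentScalePercent: 100, swipeMagnitudePoints: 50)
print(scale, announcement)   // 105.0 "Scale: adjustable. Now 105% of original size"
```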
In some embodiments, prior to displaying the representation of the virtual object in the first user interface region, the device displays (20034) a representation of the virtual object in a second user interface region (e.g., a staging user interface), wherein the second user interface region does not include a representation of the field of view of the one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane detected in the physical environment captured in the field of view of the cameras). While a representation of the virtual object is displayed in the second user interface region and a first operation of a plurality of operations applicable to the virtual object is currently selected for the virtual object, the device detects (20036) a request to switch to another operation applicable to the virtual object (e.g., including a request to switch to a type of object manipulation applicable to the virtual object in the second user interface region (e.g., resizing, rotating, tilting, etc.) or to a user interface operation applicable to the virtual object in the second user interface region (e.g., returning to a 2D user interface, dropping the object into an augmented reality view of the physical environment)) (e.g., detecting the request includes detecting a swipe input by a contact (e.g., including movement of the contact in a horizontal direction) at a location on the touch-sensitive surface corresponding to the second user interface region). For example, as described with reference to figs. 15F-15G, when the staging user interface 6010 is displayed and the down tilt control 15022 is currently selected, a swipe input is detected to switch to the clockwise rotation control 15038. In response to detecting the request to switch to another operation applicable to the virtual object in the second user interface region, the device generates (20038) an audio output that names a second operation among the plurality of operations applicable to the virtual object (e.g., the audio output includes a notification saying "rotate object around x-axis", "resize object", "tilt object toward display", or "display object in augmented reality view", etc.), wherein the second operation is different from the first operation. In some implementations, the device traverses a predefined list of applicable operations in response to continuous swipe inputs in the same direction. For example, in fig. 15G, in response to detecting the request described with reference to fig. 15F, an audio alert 15040 is generated, including a notification 15042 ("selected: clockwise rotation button"). Generating an audio output naming the selected operation type in response to a request to switch operation types provides feedback to the user indicating that the switching input has been successfully received.
Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive when a change in a selected control has occurred without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, prior to displaying the representation of the virtual object in the first user interface area (20040): while displaying the representation of the virtual object in a second user interface region (e.g., a staging user interface) that does not include the representation of the field of view of the one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane in the physical environment), the device detects a request to display the representation of the virtual object in the first user interface region that includes the representation of the field of view of the one or more cameras (e.g., the request is a double-tap input detected while the currently selected operation is "display the object in an augmented reality view", just after the device outputs an audio notification naming the currently selected operation in response to a swipe input received just prior to the double-tap input). For example, as described with reference to figs. 15P-15V, when the staging user interface 6010 is displayed and the toggle control 6018 is selected, a double-tap input is detected to display a representation of the virtual object 11002 in a user interface area that includes a representation of the field of view 6036 of one or more cameras. In response to detecting the request to display the representation of the virtual object in the first user interface area that includes the representation of the field of view of the one or more cameras: the device displays the representation of the virtual object in the first user interface region according to a first spatial relationship between the representation of the virtual object and a plane detected within the physical environment captured in the field of view of the one or more cameras (e.g., when the virtual object is dropped into the physical environment represented in the augmented reality view, the rotation angle and the size that the virtual object had in the staging view are maintained in the augmented reality view, and the tilt angle is reset in the augmented reality view according to the orientation of the plane detected in the physical environment captured in the field of view); and the device generates a third audio alert indicating placement of the virtual object in the augmented reality view relative to the physical environment captured in the field of view of the one or more cameras. For example, as described with reference to fig. 15V, in response to an input to display a representation of the virtual object 11002 in the user interface area including a representation of the field of view 6036 of one or more cameras, a representation of the virtual object 11002 is displayed in the user interface area including the field of view 6036 of one or more cameras and an audio alert 15114 is generated that includes a notification 15116 ("chair now projected in the world, 100% visible, occupies 10% of the screen"). Generating an audio output in response to the request to place the object in the augmented reality view provides feedback to the user indicating that the operation of placing the virtual object was successfully performed.
Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive the display of an object in an augmented reality view without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the third audio alert indicates (20042) information about the appearance of the virtual object relative to the portion of the field of view of the one or more cameras (e.g., the third audio alert includes an audio output that includes a notification saying "object x is placed in the world, object x is 30% visible, occupying 90% of the screen"). For example, as described with reference to fig. 15V, an audio alert 15114 is generated that includes a notification 15116 ("chair now projected in the world, 100% visible, occupying 10% of the screen"). Generating an audio output indicative of the appearance of a virtual object that is visible relative to the displayed augmented reality view provides feedback to the user (e.g., indicating that the degree of placement of the object in the augmented reality view affects the appearance of the virtual object). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive how the object is displayed in the augmented reality view without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, a device generates (20044) haptic output in conjunction with placement of a virtual object in an augmented reality view relative to a physical environment captured in a field of view of one or more cameras. For example, when an object is placed on a plane detected in the field of view of the camera, the device generates a haptic output indicating that the object falls on the plane. In some embodiments, the device generates a haptic output when the object reaches a predefined default size during resizing of the object. In some embodiments, the device generates a tactile output for each operation performed with respect to the virtual object (e.g., for each rotation by a preset angular amount, for dragging the virtual object onto a different plane, for resetting the object to an original orientation and/or size, etc.). In some embodiments, these haptic outputs precede corresponding audio alerts describing the performed operation and the resulting state of the virtual object. For example, as described with reference to fig. 15V, the haptic output 15118 is generated in connection with placement of the virtual object 11002 in the field of view 6036 of one or more cameras. Generating haptic output in conjunction with placing a virtual object relative to a physical environment captured by one or more cameras provides feedback to a user (e.g., indicating that an operation to place the virtual object was successfully performed). Providing improved feedback to the user enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that placement of the virtual object has occurred, rather than cluttering the user interface with the displayed information) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
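Placement of a staged object into the camera view (operations 20040 and 20044) can be pictured as carrying the rotation angle and size over from the staging view, resetting the tilt to the detected plane, and emitting an announcement together with a haptic output. All names and values in the sketch below are illustrative assumptions.

```swift
// Hypothetical sketch of dropping a staged object onto a detected plane.
struct StagedObject {
    var name: String
    var yawDegrees: Double      // rotation about the vertical axis, set in the staging view
    var scale: Double           // size set in the staging view
    var tiltDegrees: Double     // free tilt allowed only in the staging view
}

struct PlacedObject {
    var name: String
    var yawDegrees: Double
    var scale: Double
    // Tilt is reset so the object sits flat on the plane detected in the camera view.
    let tiltDegrees = 0.0
}

func place(_ staged: StagedObject, announce: (String) -> Void, playHaptic: () -> Void) -> PlacedObject {
    let placed = PlacedObject(name: staged.name, yawDegrees: staged.yawDegrees, scale: staged.scale)
    playHaptic()                                   // haptic generated in conjunction with placement
    announce("\(staged.name) now placed in the world")
    return placed
}

// Usage: placing a staged chair keeps its yaw and scale but drops its tilt.
let chairModel = StagedObject(name: "Chair", yawDegrees: 30, scale: 1.2, tiltDegrees: 15)
_ = place(chairModel, announce: { print($0) }, playHaptic: { print("(haptic)") })
```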
In some embodiments, the device displays (20046) the first control at a first location in the first user interface area (e.g., among a plurality of controls displayed at different locations in the first user interface area) while displaying a representation of the field of view of one or more cameras. In accordance with a determination that control fade criteria are satisfied (e.g., the control fade criteria are satisfied when the first user interface region is displayed for at least a threshold amount of time without a touch input being detected on the touch-sensitive surface), the device stops (20048) displaying the first control (e.g., and all other controls in the first user interface region) in the first user interface region while maintaining display of the representation of the field of view of the one or more cameras in the first user interface region (e.g., without redisplaying the controls when the user moves the device relative to the physical environment). While displaying the first user interface area and not displaying the first control in the first user interface area, the device detects (20050) a touch input on the touch-sensitive surface at a respective location that corresponds to the first location in the first user interface area. In response to detecting the touch input, the device generates (20052) a fifth audio alert that includes an audio output specifying an operation corresponding to the first control (e.g., "return to staging view" or "rotate object about y-axis"). In some embodiments, in response to detecting the touch input, the device also redisplays the first control at the first location. In some embodiments, once the user knows the location of the control on the display, redisplaying the control and making it the currently selected control when a touch input is made at the usual location of the control on the display provides a faster way to access the control than browsing available controls using a series of swipe inputs. Automatically ceasing to display the first control in accordance with a determination that the control fade criteria are satisfied reduces the number of inputs required to cease displaying the control. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
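The control fade criteria and the audio alert for a touch at a hidden control's former location (operations 20046-20052) can be sketched as a timer plus a simple hit test; the three-second delay, the control names, and the announcement strings below are assumptions.

```swift
// Hypothetical sketch of the control fade criteria and the audio alert for a
// touch at a hidden control's former location.
struct FadingControl {
    var name: String                         // e.g. "Return to staging view"
    var bounds: (x: Double, y: Double, width: Double, height: Double)
}

final class ControlOverlay {
    private let fadeDelay: Double = 3.0      // assumed: seconds without touch input before controls fade
    private(set) var controlsVisible = true
    private var lastTouchTime: Double = 0
    let controls: [FadingControl]

    init(controls: [FadingControl]) { self.controls = controls }

    /// Call periodically with the current time to apply the fade criteria.
    func tick(now: Double) {
        if controlsVisible && now - lastTouchTime >= fadeDelay { controlsVisible = false }
    }

    /// A touch while the controls are hidden redisplays them and announces the
    /// control whose former location was touched, instead of activating it.
    func touch(at point: (x: Double, y: Double), now: Double) -> String? {
        lastTouchTime = now
        guard !controlsVisible else { return nil }
        controlsVisible = true
        let hit = controls.first { c in
            point.x >= c.bounds.x && point.x <= c.bounds.x + c.bounds.width &&
            point.y >= c.bounds.y && point.y <= c.bounds.y + c.bounds.height
        }
        return hit.map { "Selected: \($0.name)" }
    }
}
```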
It should be understood that the particular order in which the operations in fig. 20A-20F are described is merely an example and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000) also apply in a similar manner to method 20000 described above with respect to fig. 20A-20F. For example, the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described above with reference to method 20000 optionally has one or more of the features of the contact, input, virtual object, user interface region, field of view, tactile output, movement, and/or animation described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000). For the sake of brevity, these details are not repeated here.
The operations described above with reference to fig. 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F are optionally implemented by the components depicted in fig. 1A-1B. For example, display operations 802, 806, 902, 906, 910, 1004, 1008, 16004, 17004, 18002, 19002, and 20002; detect operations 804, 904, 908, 17006, 18004, 19004, and 20004; change operation 910, receive operations 1002, 1006, 16002, and 17002; stop operation 17008; a rotation operation 18006; an update operation 19006; an adjustment operation 20006; and generating operation 20006 is optionally implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112 and event dispatcher module 174 communicates the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface (or whether rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, the event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update the content displayed by the application. Similarly, those skilled in the art will clearly know how other processes may be implemented based on the components depicted in fig. 1A-1B.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments described, with various modifications as are suited to the particular use contemplated.

Claims (57)

1. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
detecting a first movement of the one or more cameras while displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a first portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the first movement of the one or more cameras, displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment.
2. The method of claim 1, comprising:
while displaying the representation of the virtual object in the first set of visual attributes and the first orientation, detecting that the object placement criteria are satisfied.
3. The method of claim 2, comprising:
in response to detecting that the object placement criteria are satisfied, displaying, via the display generation component, an animated transition that shows the representation of the virtual object moving from the first orientation to the second orientation and changing from having the first set of visual attributes to having the second set of visual attributes.
4. The method of claim 2, wherein detecting that the object placement criteria are satisfied comprises one or more of:
detecting that a plane has been identified in the field of view of the one or more cameras;
detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time; and
detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface area was received.
5. The method of claim 1, comprising:
detecting a second movement of the one or more cameras while displaying the representation of the virtual object in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the second movement of the device, when the physical environment captured in the field of view of the one or more cameras moves in accordance with the second movement of the device and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras, maintaining the representation of the virtual object displayed in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras.
6. The method of claim 1, comprising:
in accordance with a determination that the object placement criteria are satisfied, generating a haptic output in conjunction with displaying the representation of the virtual object having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
7. The method of claim 1, comprising:
while displaying the representation of the virtual object in the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, receiving an update regarding at least a position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to receiving an update regarding at least the position or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjusting at least a position and/or an orientation of the representation of the virtual object in accordance with the update.
8. The method of claim 1, wherein:
the first set of visual attributes comprises a first size and a first level of translucency; and
the second set of visual attributes includes a second size different from the first size, and a second level of translucency lower than the first level of translucency.
9. The method of claim 1, wherein:
the request to display the virtual object in the first user interface area that includes at least a portion of the field of view of the one or more cameras is received while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras, and
the first orientation corresponds to an orientation of the virtual object when the virtual object is displayed in the respective user interface when the request is received.
10. The method of claim 1, wherein the first orientation corresponds to a predefined orientation.
11. The method of claim 1, comprising:
detecting a request to change a simulated physical dimension of the virtual object from a first simulated physical dimension to a second simulated physical dimension relative to the physical environment captured in the field of view of the one or more cameras while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to detecting the request to change the simulated physical dimension of the virtual object:
gradually changing a display size of the representation of the virtual object in the first user interface area in accordance with a gradual change in the simulated physical dimension of the virtual object from the first simulated physical dimension to the second simulated physical dimension; and
in accordance with a determination that the simulated physical dimension of the virtual object has reached a predefined simulated physical dimension during the gradual change in the display size of the representation of the virtual object in the first user interface area, generating a haptic output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension.
12. The method of claim 11, comprising:
while displaying the virtual object in the first user interface area at the second simulated physical dimension of the virtual object that is different from the predefined simulated physical dimension, detecting a request to return the virtual object to the predefined simulated physical dimension; and
in response to detecting the request to return the virtual object to the predefined simulated physical dimension, changing the display size of the representation of the virtual object in the first user interface area in accordance with a change in a simulated physical dimension of the virtual object to the predefined simulated physical dimension.
13. The method of claim 1, comprising:
selecting a plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes according to respective positions and orientations of the one or more cameras relative to the physical environment, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a third portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a fourth portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the third portion of the physical environment is different from the fourth portion of the physical environment and the first plane is different from the second plane.
14. The method of claim 1, comprising:
displaying a snapshot affordance while the virtual object is displayed in the first user interface area in the second set of visual attributes and the second orientation; and
in response to activation of the snapshot affordance, capturing a snapshot image including a current view of the representation of the virtual object, the representation of the virtual object being located at a placement position in the physical environment in the field of view of the one or more cameras and having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
15. The method of claim 1, comprising:
displaying one or more control affordances in the first user interface area with the representation of the virtual object having the second set of visual attributes; and
detecting that control fade criteria are satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
16. The method of claim 1, comprising:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
17. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
detecting a first movement of the one or more cameras while displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a first portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the first movement of the one or more cameras, displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment.
18. The computer system of claim 17, wherein the one or more programs include instructions for:
while displaying the representation of the virtual object in the first set of visual attributes and the first orientation, detecting that the object placement criteria are satisfied.
19. The computer system of claim 18, wherein the one or more programs include instructions for:
in response to detecting that the object placement criteria are satisfied, displaying, via the display generation component, an animated transition that shows the representation of the virtual object moving from the first orientation to the second orientation and changing from having the first set of visual attributes to having the second set of visual attributes.
20. The computer system of claim 18, wherein detecting that the object placement criteria are satisfied comprises one or more of:
detecting that a plane has been identified in the field of view of the one or more cameras;
detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time; and
detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface area was received.
21. The computer system of claim 17, wherein the one or more programs include instructions for:
detecting a second movement of the one or more cameras while displaying the representation of the virtual object in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the second movement of the device, when the physical environment captured in the field of view of the one or more cameras moves in accordance with the second movement of the device and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras, maintaining the representation of the virtual object displayed in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras.
22. The computer system of claim 17, wherein the one or more programs include instructions for:
in accordance with a determination that the object placement criteria are satisfied, generating a haptic output in conjunction with displaying the representation of the virtual object having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
23. The computer system of claim 17, wherein the one or more programs include instructions for:
while displaying the representation of the virtual object in the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, receiving an update regarding at least a position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to receiving an update regarding at least the position or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjusting at least a position and/or an orientation of the representation of the virtual object in accordance with the update.
24. The computer system of claim 17, wherein:
the first set of visual attributes comprises a first size and a first level of translucency; and
the second set of visual attributes includes a second size different from the first size, and a second level of translucency lower than the first level of translucency.
25. The computer system of claim 17, wherein:
the request to display the virtual object in the first user interface area that includes at least a portion of the field of view of the one or more cameras is received while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras, and
the first orientation corresponds to an orientation of the virtual object when the virtual object is displayed in the respective user interface when the request is received.
26. The computer system of claim 17, wherein the first orientation corresponds to a predefined orientation.
27. The computer system of claim 17, wherein the one or more programs include instructions for:
detecting a request to change a simulated physical dimension of the virtual object from a first simulated physical dimension to a second simulated physical dimension relative to the physical environment captured in the field of view of the one or more cameras while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to detecting the request to change the simulated physical dimension of the virtual object:
gradually changing a display size of the representation of the virtual object in the first user interface area in accordance with a gradual change in the simulated physical dimension of the virtual object from the first simulated physical dimension to the second simulated physical dimension; and
in accordance with a determination that the simulated physical dimension of the virtual object has reached a predefined simulated physical dimension during the gradual change in the display size of the representation of the virtual object in the first user interface area, generating a haptic output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension.
28. The computer system of claim 17, wherein the one or more programs include instructions for:
while displaying the virtual object in the first user interface area at the second simulated physical dimension of the virtual object that is different from the predefined simulated physical dimension, detecting a request to return the virtual object to the predefined simulated physical dimension; and
in response to detecting the request to return the virtual object to the predefined simulated physical dimension, changing the display size of the representation of the virtual object in the first user interface area in accordance with a change in a simulated physical dimension of the virtual object to the predefined simulated physical dimension.
29. The computer system of claim 17, wherein the one or more programs include instructions for:
selecting a plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes according to respective positions and orientations of the one or more cameras relative to the physical environment, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a third portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a fourth portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the third portion of the physical environment is different from the fourth portion of the physical environment and the first plane is different from the second plane.
30. The computer system of claim 17, wherein the one or more programs include instructions for:
displaying a snapshot affordance while the virtual object is displayed in the first user interface area in the second set of visual attributes and the second orientation; and
in response to activation of the snapshot affordance, capturing a snapshot image including a current view of the representation of the virtual object, the representation of the virtual object being located at a placement position in the physical environment in the field of view of the one or more cameras and having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
31. The computer system of claim 17, wherein the one or more programs include instructions for:
displaying one or more control affordances in the first user interface area with the representation of the virtual object having the second set of visual attributes; and
detecting that control fade criteria are satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
32. The computer system of claim 17, wherein the one or more programs include instructions for:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
33. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
detecting a first movement of the one or more cameras while displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a first portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the first movement of the one or more cameras, displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment.
34. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
while displaying the representation of the virtual object in the first set of visual attributes and the first orientation, detecting that the object placement criteria are satisfied.
35. The non-transitory computer readable storage medium of claim 34, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
in response to detecting that the object placement criteria are satisfied, displaying, via the display generation component, an animated transition that shows the representation of the virtual object moving from the first orientation to the second orientation and changing from having the first set of visual attributes to having the second set of visual attributes.
36. The non-transitory computer-readable storage medium of claim 34, wherein detecting that the object placement criteria are satisfied comprises one or more of:
detecting that a plane has been identified in the field of view of the one or more cameras;
detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time; and
detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface area was received.
37. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
detecting a second movement of the one or more cameras while displaying the representation of the virtual object in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the second movement of the device, when the physical environment captured in the field of view of the one or more cameras moves in accordance with the second movement of the device and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras, maintaining the representation of the virtual object displayed in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras.
38. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
in accordance with a determination that the object placement criteria are satisfied, generate a haptic output in conjunction with displaying the representation of the virtual object having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
39. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
while displaying the representation of the virtual object in the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, receiving an update regarding at least a position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to receiving an update regarding at least the position or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjusting at least a position and/or an orientation of the representation of the virtual object in accordance with the update.
40. The non-transitory computer-readable storage medium of claim 33, wherein:
the first set of visual attributes comprises a first size and a first level of translucency; and
the second set of visual attributes includes a second size different from the first size, and a second level of translucency lower than the first level of translucency.
41. The non-transitory computer-readable storage medium of claim 33, wherein:
the request to display the virtual object in the first user interface area that includes at least a portion of the field of view of the one or more cameras is received while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras, and
the first orientation corresponds to an orientation of the virtual object when the virtual object is displayed in the respective user interface when the request is received.
42. The non-transitory computer-readable storage medium of claim 33, wherein the first orientation corresponds to a predefined orientation.
43. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
detecting a request to change a simulated physical dimension of the virtual object from a first simulated physical dimension to a second simulated physical dimension relative to the physical environment captured in the field of view of the one or more cameras while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to detecting the request to change the simulated physical dimension of the virtual object:
gradually changing a display size of the representation of the virtual object in the first user interface area in accordance with a gradual change in the simulated physical dimension of the virtual object from the first simulated physical dimension to the second simulated physical dimension; and
in accordance with a determination that the simulated physical dimension of the virtual object has reached a predefined simulated physical dimension during the gradual change in the display size of the representation of the virtual object in the first user interface area, generating a haptic output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension.
44. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
while displaying the virtual object in the first user interface area at the second simulated physical dimension of the virtual object that is different from the predefined simulated physical dimension, detecting a request to return the virtual object to the predefined simulated physical dimension; and
in response to detecting the request to return the virtual object to the predefined simulated physical dimension, changing the display size of the representation of the virtual object in the first user interface area in accordance with a change in a simulated physical dimension of the virtual object to the predefined simulated physical dimension.
45. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
selecting a plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes according to respective positions and orientations of the one or more cameras relative to the physical environment, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a third portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a fourth portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the third portion of the physical environment is different from the fourth portion of the physical environment and the first plane is different from the second plane.
46. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
displaying a snapshot affordance while the virtual object is displayed in the first user interface area in the second set of visual attributes and the second orientation; and
in response to activation of the snapshot affordance, capturing a snapshot image including a current view of the representation of the virtual object, the representation of the virtual object being located at a placement position in the physical environment in the field of view of the one or more cameras and having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
47. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
displaying one or more control affordances in the first user interface area with the representation of the virtual object having the second set of visual attributes; and
detecting that control fade criteria are satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
48. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
49. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the first portion of the physical environment is different from the second portion of the physical environment and the first plane is different from the second plane.
50. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the first portion of the physical environment is different from the second portion of the physical environment and the first plane is different from the second plane.
51. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the first portion of the physical environment is different from the second portion of the physical environment and the first plane is different from the second plane.
52. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
while displaying the representation of the virtual object having the second set of visual attributes and the second orientation, displaying one or more control affordances with the representation of the virtual object;
detecting that control fade criteria are satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
53. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
while displaying the representation of the virtual object having the second set of visual attributes and the second orientation, displaying one or more control affordances with the representation of the virtual object;
detecting that control fade criteria are satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
54. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual attributes and a second orientation, the second set of visual attributes being different from the first set of visual attributes, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
while displaying the representation of the virtual object having the second set of visual attributes and the second orientation, displaying one or more control affordances with the representation of the virtual object;
detecting that control fade criteria are satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
55. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual properties and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein the method further comprises:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
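Again purely for illustration (the calibration criteria and prompt text here are placeholders, not values from the patent), the calibration check recited in the method above can be sketched as a simple gate that either shows a prompt to move the device or proceeds to display the object over the camera view:

```swift
// Illustrative sketch only: the feature-point threshold and the prompt text
// are assumed placeholders.
struct CalibrationStatus {
    var deviceHasMoved: Bool        // some device motion observed since the request
    var detectedFeaturePoints: Int  // rough proxy for how well the environment is mapped
}

enum InitialPresentation {
    case moveDevicePrompt(message: String)
    case objectOverCameraView
}

func initialPresentation(for status: CalibrationStatus) -> InitialPresentation {
    let calibrated = status.deviceHasMoved && status.detectedFeaturePoints >= 50
    return calibrated
        ? .objectOverCameraView
        : .moveDevicePrompt(message: "Move your device to scan the environment")
}
```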
56. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual properties and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties different from the first set of visual properties and a second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein the one or more programs further include instructions for:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
57. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual properties and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties different from the first set of visual properties and a second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
CN201911078900.7A 2018-01-24 2018-09-29 Apparatus, method and graphical user interface for system level behavior of 3D models Pending CN110851053A (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US201862621529P 2018-01-24 2018-01-24
US62/621,529 2018-01-24
US201862679951P 2018-06-03 2018-06-03
US62/679,951 2018-06-03
DKPA201870346A DK201870346A1 (en) 2018-01-24 2018-06-11 Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models
DKPA201870348 2018-06-11
DKPA201870347 2018-06-11
DKPA201870348A DK180842B1 (en) 2018-01-24 2018-06-11 Devices, procedures, and graphical user interfaces for System-Wide behavior for 3D models
DKPA201870346 2018-06-11
DKPA201870347A DK201870347A1 (en) 2018-01-24 2018-06-11 Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models
CN201811165504.3A CN110069190A (en) 2018-01-24 2018-09-29 Equipment, method and the graphic user interface of system-level behavior for 3D model

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811165504.3A Division CN110069190A (en) 2018-01-24 2018-09-29 Equipment, method and the graphic user interface of system-level behavior for 3D model

Publications (1)

Publication Number Publication Date
CN110851053A (en) 2020-02-28

Family

ID=67365888

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911078900.7A Pending CN110851053A (en) 2018-01-24 2018-09-29 Apparatus, method and graphical user interface for system level behavior of 3D models
CN201811165504.3A Pending CN110069190A (en) 2018-01-24 2018-09-29 Equipment, method and the graphic user interface of system-level behavior for 3D model

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811165504.3A Pending CN110069190A (en) 2018-01-24 2018-09-29 Equipment, method and the graphic user interface of system-level behavior for 3D model

Country Status (2)

Country Link
JP (1) JP6745852B2 (en)
CN (2) CN110851053A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111672121A (en) * 2020-06-11 2020-09-18 腾讯科技(深圳)有限公司 Virtual object display method and device, computer equipment and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10939047B2 (en) 2019-07-22 2021-03-02 Himax Technologies Limited Method and apparatus for auto-exposure control in a depth sensing system
TWI722542B (en) * 2019-08-22 2021-03-21 奇景光電股份有限公司 Method and apparatus for performing auto-exposure control in depth sensing system including projector
CN110865704B (en) * 2019-10-21 2021-04-27 浙江大学 Gesture interaction device and method for 360-degree suspended light field three-dimensional display system
WO2021161719A1 (en) * 2020-02-12 2021-08-19 パナソニックIpマネジメント株式会社 Nursing care equipment provision assistance system, nursing care equipment provision assistance method, and program
CN111340962B (en) * 2020-02-24 2023-08-15 维沃移动通信有限公司 Control method, electronic device and storage medium
EP4042674A1 (en) * 2020-06-03 2022-08-17 Apple Inc. Camera and visitor user interfaces
JP6919050B1 (en) * 2020-12-16 2021-08-11 株式会社あかつき Game system, program and information processing method
CN112419511B (en) * 2020-12-26 2024-02-13 董丽萍 Three-dimensional model file processing method and device, storage medium and server
US11941750B2 (en) * 2022-02-11 2024-03-26 Shopify Inc. Augmented reality enabled dynamic product presentation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140285522A1 (en) * 2013-03-25 2014-09-25 Qualcomm Incorporated System and method for presenting true product dimensions within an augmented real-world setting
CN104081317A (en) * 2012-02-10 2014-10-01 索尼公司 Image processing device, and computer program product

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080071559A1 (en) * 2006-09-19 2008-03-20 Juha Arrasvuori Augmented reality assisted shopping
JP5573238B2 (en) * 2010-03-04 2014-08-20 ソニー株式会社 Information processing apparatus, information processing method and program
JP5799521B2 (en) * 2011-02-15 2015-10-28 ソニー株式会社 Information processing apparatus, authoring method, and program
US10078384B2 (en) * 2012-11-20 2018-09-18 Immersion Corporation Method and apparatus for providing haptic cues for guidance and alignment with electrostatic friction
EP3178222B1 (en) * 2014-09-02 2018-07-04 Apple Inc. Remote camera user interface
CN104486430A (en) * 2014-12-18 2015-04-01 北京奇虎科技有限公司 Method, device and client for realizing data sharing in mobile browser client
TWI567691B (en) * 2016-03-07 2017-01-21 粉迷科技股份有限公司 Method and system for editing scene in three-dimensional space
CN105824412A (en) * 2016-03-09 2016-08-03 北京奇虎科技有限公司 Method and device for presenting customized virtual special effects on mobile terminal
US10176641B2 (en) * 2016-03-21 2019-01-08 Microsoft Technology Licensing, Llc Displaying three-dimensional virtual objects based on field of view
WO2017208637A1 (en) * 2016-05-31 2017-12-07 ソニー株式会社 Information processing device, information processing method, and program
CN107071392A (en) * 2016-12-23 2017-08-18 网易(杭州)网络有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN110069190A (en) 2019-07-30
JP2019128941A (en) 2019-08-01
JP6745852B2 (en) 2020-08-26

Similar Documents

Publication Publication Date Title
US20210333979A1 (en) Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models
US11922584B2 (en) Devices, methods, and graphical user interfaces for displaying objects in 3D contexts
KR102543095B1 (en) Devices and methods for measuring using augmented reality
JP6745852B2 (en) Devices, methods, and graphical user interfaces for system-wide behavior of 3D models
AU2019101597B4 (en) Devices, methods, and graphical user interfaces for system-wide behavior for 3D models
AU2022201389B2 (en) Devices, methods, and graphical user interfaces for system-wide behavior for 3D models
EP3901741B1 (en) Devices and methods for measuring using augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination