CN111897614B - Integration of avatars with multiple applications - Google Patents

Integration of avatars with multiple applications

Info

Publication number
CN111897614B
Authority
CN
China
Prior art keywords
user
contactable
electronic device
avatar
contact information
Prior art date
Legal status
Active
Application number
CN202010776600.2A
Other languages
Chinese (zh)
Other versions
CN111897614A
Inventor
张宰祐
M·万欧斯
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from DKPA201970531A (DK201970531A1)
Application filed by Apple Inc
Priority claimed from CN202080001137.2A (CN112204519A)
Publication of CN111897614A
Application granted
Publication of CN111897614B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour

Abstract

The present disclosure relates generally to the integration of an avatar with multiple applications. In some embodiments, the avatar is used to generate a sticker for sending in the content creation user interface. In some embodiments, the avatar is used to generate a representation of the contactable user in the contactable user editing user interface. In some embodiments, a user interface may be used to create and edit avatars. In some embodiments, the user interface may be operative to display an avatar responsive to a detected change in facial pose of the user. In some embodiments, contact information is transmitted or received.

Description

Integration of avatars with multiple applications
This application is a divisional of the patent application entitled "Integration of avatars with multiple applications", filed on March 31, 2020, with application No. 202080001137.2.
Cross Reference to Related Applications
This application claims priority from the following patent applications: U.S. patent application No. 62/843,967, entitled "AVATAR INTEGRATION WITH MULTIPLE APPLICATIONS", filed on May 6, 2019; U.S. patent application No. 62/855,891, entitled "AVATAR INTEGRATION WITH MULTIPLE APPLICATIONS", filed on May 31, 2019; Danish patent application No. PA201970530, entitled "AVATAR INTEGRATION WITH MULTIPLE APPLICATIONS", filed on August 27, 2019; Danish patent application No. PA201970531, entitled "AVATAR INTEGRATION WITH MULTIPLE APPLICATIONS", filed on August 27, 2019; U.S. patent application No. 16/582,500, entitled "AVATAR INTEGRATION WITH MULTIPLE APPLICATIONS", filed on September 25, 2019; U.S. patent application No. 16/582,570, entitled "AVATAR INTEGRATION WITH MULTIPLE APPLICATIONS", filed on September 25, 2019; and U.S. patent application No. 16/583,706, entitled "AVATAR INTEGRATION WITH MULTIPLE APPLICATIONS", filed on September 26, 2019; the contents of these applications are hereby incorporated by reference in their entirety.
Technical Field
The present disclosure relates generally to computer user interfaces and, more particularly, to techniques for displaying avatars in various application user interfaces.
Background
Multimedia content such as emoticons, stickers, and virtual avatars is sometimes used in various application user interfaces. Emoticons, stickers, and virtual avatars can represent a wide variety of people, objects, actions, and/or other things. Contact information for a person, such as a name and a photo representation, is also used in messaging applications.
Disclosure of Invention
However, some techniques for displaying and using avatars in various application user interfaces on an electronic device are cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for displaying avatars in various application user interfaces. Such methods and interfaces optionally complement or replace other methods for displaying avatars in various application user interfaces. Such methods and interfaces reduce the cognitive burden placed on the user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges.
Example methods are described herein. An example method includes, at an electronic device having a display device and an input device: receiving, via one or more input devices, a request to display a sticker user interface; and in response to receiving a request to display a sticker user interface, displaying, via the display device, the sticker user interface including representations of a plurality of sets of stickers based on a user-created avatar, including: in accordance with a determination that the user has created a first set of two or more user-created avatars that includes a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and in accordance with a determination that the user has created a second set of two or more user-created avatars that includes a third avatar that is not included in the first set of two or more user-created avatars, displaying representations of a second plurality of sets of stickers that are different from the representations of the first plurality of sets of stickers, wherein the representations of the second plurality of sets of stickers include representations of a set of stickers based on the third avatar that are not included in the representations of the first plurality of sets of stickers.
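For illustration only, the conditional display logic described above can be summarized in a minimal sketch. The type and function names used here (UserAvatar, StickerSetRepresentation, stickerSetRepresentations) are hypothetical and are not part of the disclosure.

```swift
// Illustrative sketch only; names are hypothetical and not part of the disclosure.
struct UserAvatar: Hashable {
    let name: String
}

struct StickerSetRepresentation {
    let basedOn: UserAvatar
}

/// Builds one sticker-set representation per user-created avatar, so the
/// representations shown in the sticker user interface track whichever set
/// of avatars the user has created.
func stickerSetRepresentations(for userCreatedAvatars: [UserAvatar]) -> [StickerSetRepresentation] {
    // If the user created avatars A and B, sets based on A and B are shown;
    // if a different set containing avatar C exists, a set based on C is shown instead.
    return userCreatedAvatars.map { StickerSetRepresentation(basedOn: $0) }
}
```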
Example non-transitory computer-readable storage media are described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for: receiving, via one or more input devices, a request to display a sticker user interface; and in response to receiving a request to display a sticker user interface, displaying, via the display device, the sticker user interface including representations of a plurality of sets of stickers based on a user-created avatar, including: in accordance with a determination that the user has created a first set of two or more user-created avatars that includes a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and in accordance with a determination that the user has created a second set of two or more user-created avatars that includes a third avatar that is not included in the first set of two or more user-created avatars, displaying representations of a second plurality of sets of stickers that are different from the representations of the first plurality of sets of stickers, wherein the representations of the second plurality of sets of stickers include representations of a set of stickers based on the third avatar that are not included in the representations of the first plurality of sets of stickers.
Example transitory computer-readable storage media are described herein. An example transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for: receiving, via one or more input devices, a request to display a sticker user interface; and in response to receiving a request to display a sticker user interface, displaying, via the display device, the sticker user interface including representations of a plurality of sets of stickers based on a user-created avatar, including: in accordance with a determination that the user has created a first set of two or more user-created avatars that includes a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and in accordance with a determination that the user has created a second set of two or more user-created avatars that includes a third avatar that is not included in the first set of two or more user-created avatars, displaying representations of a second plurality of sets of stickers that are different from the representations of the first plurality of sets of stickers, wherein the representations of the second plurality of sets of stickers include representations of a set of stickers based on the third avatar that are not included in the representations of the first plurality of sets of stickers.
An example electronic device is described herein. An example electronic device includes a display device; an input device; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: receiving, via one or more input devices, a request to display a sticker user interface; and in response to receiving a request to display a sticker user interface, displaying, via the display device, the sticker user interface including representations of a plurality of sets of stickers based on a user-created avatar, including: in accordance with a determination that the user has created a first set of two or more user-created avatars that includes a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and in accordance with a determination that the user has created a second set of two or more user-created avatars that includes a third avatar that is not included in the first set of two or more user-created avatars, displaying representations of a second plurality of sets of stickers that are different from the representations of the first plurality of sets of stickers, wherein the representations of the second plurality of sets of stickers include representations of a set of stickers based on the third avatar that are not included in the representations of the first plurality of sets of stickers.
An example electronic device is described herein. An example electronic device includes a display device; an input device; means for receiving, via one or more input devices, a request to display a sticker user interface; and means for displaying, via the display device, a sticker user interface in response to receiving a request to display the sticker user interface, the sticker user interface including representations of a plurality of sets of stickers based on a user-created avatar, comprising: in accordance with a determination that the user has created a first set of two or more user-created avatars that includes a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and in accordance with a determination that the user has created a second set of two or more user-created avatars that includes a third avatar that is not included in the first set of two or more user-created avatars, displaying representations of a second plurality of sets of stickers that are different from the representations of the first plurality of sets of stickers, wherein the representations of the second plurality of sets of stickers include representations of a set of stickers based on the third avatar that are not included in the representations of the first plurality of sets of stickers.
An example method is described herein. An example method includes, at an electronic device with a display device and one or more input devices: displaying, via the display device, a contactable user editing user interface comprising one or more representation options for a contactable user, including an avatar representation option; detecting, via the one or more input devices, a selection of the avatar representation option; in response to detecting selection of the avatar representation option, initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface; receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of the process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and in response to the selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
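The flow described above, selecting the avatar representation option and then choosing a pose for a simulated three-dimensional avatar, can be sketched as follows. The names (RepresentationOption, Pose, ContactEditor) are hypothetical and serve only to illustrate the sequence of steps.

```swift
// Illustrative sketch only; names are hypothetical and not part of the disclosure.
enum RepresentationOption {
    case photo
    case monogram
    case avatar
}

enum Pose: CaseIterable {
    case neutral, smiling, winking, thumbsUp
}

struct ContactEditor {
    var selectedPose: Pose?

    /// Called when a representation option is selected in the editing interface.
    mutating func didSelect(option: RepresentationOption) {
        guard option == .avatar else { return }
        beginAvatarSelection()
    }

    /// Starts the process of selecting a simulated three-dimensional avatar.
    mutating func beginAvatarSelection() {
        // ... the user selects a simulated 3D avatar via one or more inputs ...
        showPosingInterface(poses: Pose.allCases)
    }

    /// Displays controls for choosing one of several poses for the selected avatar.
    mutating func showPosingInterface(poses: [Pose]) {
        selectedPose = poses.first  // placeholder: the UI would let the user choose
    }
}
```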
An example non-transitory computer-readable storage medium is described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for: displaying, via the display device, a contactable user editing user interface comprising one or more presentation options for a contactable user comprising an avatar presentation option; detecting, via the one or more input devices, a selection of the avatar representation option; in response to detecting selection of the avatar representation option, initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface; receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of a process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and in response to the selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
An example transitory computer-readable storage medium is described herein. An example transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for: displaying, via the display device, a contactable user editing user interface comprising one or more presentation options for a contactable user comprising an avatar presentation option; detecting, via the one or more input devices, a selection of the avatar representation option; in response to detecting selection of the avatar representation option, initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface; receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of a process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and in response to the selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
An example electronic device is described herein. An example electronic device includes a display device; one or more input devices; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a contactable user editing user interface comprising one or more presentation options for a contactable user comprising an avatar presentation option; detecting, via the one or more input devices, a selection of the avatar representation option; in response to detecting selection of the avatar representation option, initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface; receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of a process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and in response to the selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
An example electronic device is described herein. An example electronic device includes a display device; one or more input devices; means for displaying, via the display device, a contactable user-editing user interface comprising one or more presentation options for a contactable user comprising an avatar presentation option; means for detecting selection of the avatar representation option via the one or more input devices; means for initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface in response to detecting selection of the avatar representation option; apparatus for: receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface; and means for: in response to selection of the simulated three-dimensional avatar, a posing user interface is displayed via the display device that includes one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
An example method is described herein. An example method includes, at an electronic device having a display device and an input device, displaying via the display device an avatar-editing user interface, the avatar-editing user interface comprising: an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern; a set of color options for the first feature; and a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern; while the first feature is displayed as having the first color pattern generated with a first set of colors including a first color in a first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color; in response to detecting the selection: changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing a portion of a second color pattern option from a respective color to the second color; and maintaining a display of an avatar including a first feature, the first feature having the first color pattern; detecting selection of a respective one of the color pattern options having the changed appearance; and in response to detecting selection of the respective color pattern option and upon selection of a second color for the set of color options, changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, wherein the second color applies to a portion of the respective color pattern option.
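A minimal sketch of the color and pattern interaction described above is given below: selecting a color updates the displayed pattern options but not the avatar, and the avatar changes only once a pattern option is subsequently selected. The types (Color, ColorPatternOption, AvatarColorEditor) are hypothetical.

```swift
// Illustrative sketch only; names are hypothetical and not part of the disclosure.
struct Color: Equatable { let name: String }

struct ColorPatternOption {
    var regionColors: [Color]   // colors used by the pattern's regions
    var colorizedRegion: Int    // index of the region driven by the color picker
}

struct AvatarFeature {
    var appliedPattern: ColorPatternOption?
}

struct AvatarColorEditor {
    var feature: AvatarFeature
    var patternOptions: [ColorPatternOption]

    /// Selecting a color changes the appearance of the pattern options,
    /// while the avatar's currently applied pattern is left unchanged.
    mutating func selectColor(_ color: Color) {
        for index in patternOptions.indices {
            let region = patternOptions[index].colorizedRegion
            guard patternOptions[index].regionColors.indices.contains(region) else { continue }
            patternOptions[index].regionColors[region] = color
        }
        // Note: feature.appliedPattern is intentionally not modified here.
    }

    /// Selecting a pattern option (already showing the chosen color) applies it
    /// to the avatar feature.
    mutating func selectPattern(at index: Int) {
        feature.appliedPattern = patternOptions[index]
    }
}
```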
An example non-transitory computer-readable storage medium is described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for: displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising: an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern; a set of color options for the first feature; and a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern; while the first feature is displayed as having the first color pattern generated with a first set of colors including a first color in a first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color; in response to detecting the selection: changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing a portion of a second color pattern option from a respective color to the second color; and maintaining a display of an avatar including a first feature, the first feature having the first color pattern; detecting selection of a respective one of the color pattern options having the changed appearance; and in response to detecting selection of the respective color pattern option and upon selection of a second color for the set of color options, changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, wherein the second color applies to a portion of the respective color pattern option.
An example transitory computer-readable storage medium is described herein. An example transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for: displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising: an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern; a set of color options for the first feature; and a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern; while the first feature is displayed as having the first color pattern generated with a first set of colors including a first color in a first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color; in response to detecting the selection: changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing a portion of a second color pattern option from a respective color to the second color; and maintaining a display of an avatar including a first feature, the first feature having the first color pattern; detecting selection of a respective one of the color pattern options having the changed appearance; and in response to detecting selection of the respective color pattern option and upon selection of a second color for the set of color options, changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, wherein the second color applies to a portion of the respective color pattern option.
An example electronic device is described herein. An example electronic device includes a display device; an input device; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising: an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern; a set of color options for the first feature; and a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern; while the first feature is displayed as having the first color pattern generated with a first set of colors including a first color in a first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color; in response to detecting the selection: changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing a portion of a second color pattern option from a respective color to the second color; and maintaining a display of an avatar including a first feature, the first feature having the first color pattern; detecting selection of a respective one of the color pattern options having the changed appearance; and in response to detecting selection of the respective color pattern option and upon selection of a second color for the set of color options, changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, wherein the second color applies to a portion of the respective color pattern option.
An example electronic device is described herein. An example electronic device includes a display device; an input device; means for displaying, via the display device, an avatar-editing user interface, the avatar-editing user interface comprising: an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern; a set of color options for the first feature; and a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern; apparatus for: while the first feature is displayed as having the first color pattern generated with a first set of colors including a first color in a first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color; apparatus for: in response to detecting the selection, changing an appearance of one or more of the plurality of color pattern options having a color pattern option corresponding to a first portion of the set of color options, wherein changing the appearance includes changing a portion of a second color pattern option from a respective color to the second color; and maintaining a display of an avatar including a first feature, the first feature having the first color pattern; means for detecting selection of a respective one of the color pattern options having a changed appearance; and means for: in response to detecting selection of the respective color pattern option and upon selection of a second color for the set of color options, changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, wherein the second color applies to a portion of the respective color pattern option.
An example method is described herein. An example method includes, at an electronic device having a display device and an input device: displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising: an avatar having a respective avatar feature, the respective avatar feature having a first pose; and an avatar option selection area including a plurality of avatar feature options that correspond to a set of candidate values for a characteristic of the respective avatar feature and that have an appearance based on the avatar; detecting, via the input device, a request to display options for editing the respective avatar feature; and in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for the characteristic of the respective avatar feature, including concurrently displaying: a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and a representation of a second option for the respective avatar feature, wherein the respective avatar feature has a third pose different from the second pose.
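As an illustration of the option display described above, each candidate value for the edited feature can be paired with a different display pose so the options are easier to distinguish. The names (FeaturePose, AvatarFeatureOption, featureOptions) are hypothetical.

```swift
// Illustrative sketch only; names are hypothetical and not part of the disclosure.
enum FeaturePose { case neutral, widened, closed }

struct AvatarFeatureOption {
    let candidateValue: String   // e.g. an eye color or eye shape
    let pose: FeaturePose        // pose used only when displaying this option
}

/// Pairs each candidate value with a distinct display pose, so concurrently
/// displayed options show the respective feature in different poses.
func featureOptions(for candidateValues: [String]) -> [AvatarFeatureOption] {
    let poses: [FeaturePose] = [.widened, .closed, .neutral]
    return candidateValues.enumerated().map { index, value in
        AvatarFeatureOption(candidateValue: value, pose: poses[index % poses.count])
    }
}
```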
An example non-transitory computer-readable storage medium is described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for: displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising: a avatar, the respective avatar feature having a first pose; and an avatar option selection area including a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of the avatar feature and including a plurality of avatar feature options having an avatar-based appearance; detecting, via the input device, a request to display an option for editing the corresponding avatar characteristic; and in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for the characteristic of the respective avatar feature, including concurrently displaying: a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and a representation of a second option for the respective avatar characteristic, wherein the respective avatar characteristic has a third pose different from the second pose.
An example transitory computer-readable storage medium is described herein. An example transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for: displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising: a avatar, the respective avatar feature having a first pose; and an avatar option selection area including a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of the avatar feature and including a plurality of avatar feature options having an avatar-based appearance; detecting, via the input device, a request to display an option for editing the corresponding avatar characteristic; and in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for the characteristic of the respective avatar feature, including concurrently displaying: a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and a representation of a second option for the respective avatar characteristic, wherein the respective avatar characteristic has a third pose different from the second pose.
An example electronic device is described herein. An example electronic device includes a display device; an input device; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising: a avatar, the respective avatar feature having a first pose; and an avatar option selection area including a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of the avatar feature and including a plurality of avatar feature options having an avatar-based appearance; detecting, via the input device, a request to display an option for editing the corresponding avatar characteristic; and in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for the characteristic of the respective avatar feature, including concurrently displaying: a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and a representation of a second option for the respective avatar characteristic, wherein the respective avatar characteristic has a third pose different from the second pose.
An example electronic device is described herein. An example electronic device includes a display device; an input device; means for displaying, via the display device, an avatar-editing user interface, the avatar-editing user interface comprising: a avatar, the respective avatar feature having a first pose; and an avatar option selection area including a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of the avatar feature and including a plurality of avatar feature options having an avatar-based appearance; means for detecting, via the input device, a request to display an option for editing a respective avatar feature; and means for: in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for the characteristic of the respective avatar feature, including concurrently displaying: a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and a representation of a second option for the respective avatar characteristic, wherein the respective avatar characteristic has a third pose different from the second pose.
An example method is described herein. An example method includes, at an electronic device with a display device and one or more cameras: displaying, via the display device, a virtual avatar having one or more avatar characteristics, the one or more avatar characteristics changing appearance in response to a detected change in facial pose in the field of view of the one or more cameras, including a first avatar feature having a first appearance, the first appearance being modified in response to the detected change in facial pose in the field of view of the one or more cameras; detecting movement of one or more facial features of a face when the face is detected in the field of view of the one or more cameras that includes the one or more detected facial features; in response to detecting movement of the one or more facial features: in accordance with a determination that the detected movement of the one or more facial features is such that the first pose criteria are met, modifying the virtual avatar to display a first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in facial pose in the field of view of the one or more cameras; in accordance with a determination that the detected movement of the one or more facial features satisfies a second pose criterion different from the first pose criterion, modifying the virtual avatar to display a first avatar feature having a third appearance different from the first appearance and the second appearance, the third appearance modified in response to a detected change in facial pose in the field of view of the one or more cameras; and in accordance with a determination that the detected movement of the one or more facial features meets criteria for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to the detected change in facial pose in the field of view of the one or more cameras.
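The pose-criteria behavior described above can be sketched as a simple decision: detected facial movement either switches the tracked avatar feature to a different appearance or keeps the current appearance while continuing to track the face. The thresholds and names (FacialMovement, updatedAppearance) are hypothetical placeholders, not the claimed criteria.

```swift
// Illustrative sketch only; names and thresholds are hypothetical.
enum FeatureAppearance { case first, second, third }

struct FacialMovement {
    let jawOpenAmount: Double    // 0.0 ... 1.0, illustrative measure
    let browRaiseAmount: Double  // 0.0 ... 1.0, illustrative measure
}

/// Chooses the avatar feature's appearance based on detected facial movement.
func updatedAppearance(current: FeatureAppearance,
                       movement: FacialMovement) -> FeatureAppearance {
    if movement.jawOpenAmount > 0.7 {
        return .second       // first pose criteria met
    } else if movement.browRaiseAmount > 0.7 {
        return .third        // second pose criteria met
    } else {
        return current       // criteria for maintaining the current appearance
    }
}
```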
An example non-transitory computer-readable storage medium is described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a virtual avatar having one or more avatar characteristics, the one or more avatar characteristics changing appearance in response to a detected change in facial pose in the field of view of the one or more cameras, including a first avatar feature having a first appearance, the first appearance being modified in response to the detected change in facial pose in the field of view of the one or more cameras; detecting movement of one or more facial features of a face when the face is detected in the field of view of the one or more cameras that includes the one or more detected facial features; in response to detecting movement of the one or more facial features: in accordance with a determination that the detected movement of the one or more facial features is such that the first pose criteria are met, modifying the virtual avatar to display a first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in facial pose in the field of view of the one or more cameras; in accordance with a determination that the detected movement of the one or more facial features satisfies a second pose criterion different from the first pose criterion, modifying the virtual avatar to display a first avatar feature having a third appearance different from the first appearance and the second appearance, the third appearance modified in response to a detected change in facial pose in the field of view of the one or more cameras; and in accordance with a determination that the detected movement of the one or more facial features meets criteria for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to the detected change in facial pose in the field of view of the one or more cameras.
An example transitory computer-readable storage medium is described herein. An example transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a virtual avatar having one or more avatar characteristics, the one or more avatar characteristics changing appearance in response to a detected change in facial pose in the field of view of the one or more cameras, including a first avatar feature having a first appearance, the first appearance being modified in response to the detected change in facial pose in the field of view of the one or more cameras; detecting movement of one or more facial features of a face when the face is detected in the field of view of the one or more cameras that includes the one or more detected facial features; in response to detecting movement of the one or more facial features: in accordance with a determination that the detected movement of the one or more facial features is such that the first pose criteria are met, modifying the virtual avatar to display a first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in facial pose in the field of view of the one or more cameras; in accordance with a determination that the detected movement of the one or more facial features satisfies a second pose criterion different from the first pose criterion, modifying the virtual avatar to display a first avatar feature having a third appearance different from the first appearance and the second appearance, the third appearance modified in response to a detected change in facial pose in the field of view of the one or more cameras; and in accordance with a determination that the detected movement of the one or more facial features meets criteria for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to the detected change in facial pose in the field of view of the one or more cameras.
An example electronic device is described herein. An example electronic device includes a display device; one or more cameras; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a virtual avatar having one or more avatar characteristics, the one or more avatar characteristics changing appearance in response to a detected change in facial pose in the field of view of the one or more cameras, including a first avatar feature having a first appearance, the first appearance being modified in response to the detected change in facial pose in the field of view of the one or more cameras; detecting movement of one or more facial features of a face when the face is detected in the field of view of the one or more cameras that includes the one or more detected facial features; in response to detecting movement of the one or more facial features: in accordance with a determination that the detected movement of the one or more facial features is such that the first pose criteria are met, modifying the virtual avatar to display a first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in facial pose in the field of view of the one or more cameras; in accordance with a determination that the detected movement of the one or more facial features satisfies a second pose criterion different from the first pose criterion, modifying the virtual avatar to display a first avatar feature having a third appearance different from the first appearance and the second appearance, the third appearance modified in response to a detected change in facial pose in the field of view of the one or more cameras; and in accordance with a determination that the detected movement of the one or more facial features meets criteria for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to the detected change in facial pose in the field of view of the one or more cameras.
An example electronic device is described herein. An example electronic device includes a display device; one or more cameras; means for displaying, via the display device, a virtual avatar having one or more avatar features that change appearance in response to a detected change in facial pose in the field of view of the one or more cameras, including a first avatar feature having a first appearance that is modified in response to a detected change in facial pose in the field of view of the one or more cameras; apparatus for: detecting movement of one or more facial features of a face when the face is detected in the field of view of the one or more cameras that includes the one or more detected facial features; apparatus for: in response to detecting movement of the one or more facial features: in accordance with a determination that the detected movement of the one or more facial features is such that the first pose criteria are met, modifying the virtual avatar to display a first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in facial pose in the field of view of the one or more cameras; in accordance with a determination that the detected movement of the one or more facial features satisfies a second pose criterion different from the first pose criterion, modifying the virtual avatar to display a first avatar feature having a third appearance different from the first appearance and the second appearance, the third appearance modified in response to a detected change in facial pose in the field of view of the one or more cameras; and in accordance with a determination that the detected movement of the one or more facial features meets criteria for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to the detected change in facial pose in the field of view of the one or more cameras.
An example method is described herein. An example method includes, at an electronic device with a display device and one or more input devices: displaying a content creation user interface via the display device; while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises: in response to receiving the request, displaying, via the display device, a first display area including a first subset of graphical objects having an appearance based on a set of avatars available at the electronic device, including: in accordance with a determination that the set of avatars includes the first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar; and in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects in the first subset that have an appearance based on avatars of a second type that are different from the first type without displaying one of the graphical objects in the first subset that have an appearance based on avatars of the first type.
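For illustration, the conditional population of the first display area described above can be sketched as follows: if any avatar of a first type is available, a graphical object based on it is shown; otherwise only objects based on avatars of a second type are shown. The names (AvatarType, GraphicalObject, firstSubsetOfGraphicalObjects) are hypothetical, and the mapping of "first type" to user-created avatars is an assumption made for the example.

```swift
// Illustrative sketch only; names and the type mapping are hypothetical.
enum AvatarType { case userCreated, predefined }

struct Avatar { let type: AvatarType }

struct GraphicalObject { let basedOn: Avatar }

/// Builds the first subset of graphical objects from the avatars available on the device.
func firstSubsetOfGraphicalObjects(availableAvatars: [Avatar]) -> [GraphicalObject] {
    if let firstTypeAvatar = availableAvatars.first(where: { $0.type == .userCreated }) {
        // An avatar of the first type exists: base one of the objects on it.
        return [GraphicalObject(basedOn: firstTypeAvatar)]
    }
    // No avatar of the first type: show only objects based on second-type avatars.
    return availableAvatars
        .filter { $0.type == .predefined }
        .map { GraphicalObject(basedOn: $0) }
}
```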
An example non-transitory computer-readable storage medium is described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for: displaying a content creation user interface via the display device; while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises: in response to receiving the request, displaying, via the display device, a first display area including a first subset of graphical objects having an appearance based on a set of avatars available at the electronic device, including: in accordance with a determination that the set of avatars includes the first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar; and in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects in the first subset that have an appearance based on avatars of a second type that are different from the first type without displaying one of the graphical objects in the first subset that have an appearance based on avatars of the first type.
An example transitory computer-readable storage medium is described herein. An example transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for: displaying a content creation user interface via the display device; while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises: in response to receiving the request, displaying, via the display device, a first display area including a first subset of graphical objects having an appearance based on a set of avatars available at the electronic device, including: in accordance with a determination that the set of avatars includes the first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar; and in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects in the first subset that have an appearance based on avatars of a second type that are different from the first type without displaying one of the graphical objects in the first subset that have an appearance based on avatars of the first type.
An example electronic device is described herein. An example electronic device includes a display device; one or more input devices; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying a content creation user interface via the display device; while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises: in response to receiving the request, displaying, via the display device, a first display area including a first subset of graphical objects having an appearance based on a set of avatars available at the electronic device, including: in accordance with a determination that the set of avatars includes the first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar; and in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects in the first subset that have an appearance based on avatars of a second type that are different from the first type without displaying one of the graphical objects in the first subset that have an appearance based on avatars of the first type.
An example electronic device is described herein. An example electronic device includes a display device; one or more input devices; means for displaying a content creation user interface via the display device; apparatus for: while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises: in response to receiving the request, displaying, via the display device, a first display area including a first subset of graphical objects having an appearance based on a set of avatars available at the electronic device, including: in accordance with a determination that the set of avatars includes the first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar; and means for: in accordance with a determination that the set of avatars does not include any avatars of the first type, graphical objects in the first subset having an appearance based on avatars of a second type different from the first type are displayed without displaying one of the graphical objects in the first subset having an appearance based on avatars of the first type.
Exemplary methods are disclosed herein. An example method includes, at an electronic device having one or more communication devices with which a user is associated, receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and in response to receiving the request to transmit the first message: in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient: transmitting, via the one or more communication devices, a first message and contact information of a user associated with the electronic device to the first contactable user; and in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria: the first message is transmitted to the first contactable user via the one or more communication devices without transmitting contact information for a user associated with the electronic device.
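The sharing decision described above can be illustrated with a short sketch: the message is transmitted to every recipient, but the sender's contact information accompanies it only for recipients who satisfy the sharing criteria. The names (ContactableUser, OutgoingTransmission, transmissions) are hypothetical.

```swift
// Illustrative sketch only; names are hypothetical and not part of the disclosure.
struct ContactableUser {
    let identifier: String
    let isApprovedRecipient: Bool   // stands in for the first sharing criterion
}

struct OutgoingTransmission {
    let message: String
    let includesSenderContactInfo: Bool
}

/// Builds one transmission per recipient; contact information is included only
/// for recipients satisfying the sharing criteria.
func transmissions(message: String,
                   recipients: [ContactableUser]) -> [OutgoingTransmission] {
    return recipients.map { recipient in
        OutgoingTransmission(
            message: message,
            includesSenderContactInfo: recipient.isApprovedRecipient
        )
    }
}
```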
Example non-transitory computer-readable storage media are described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with one or more communication devices, wherein a user is associated with the electronic device and the one or more programs include instructions for: receiving a request to transmit a first message to a group of contactable users, the group of contactable users including a first contactable user; and in response to receiving the request to transmit the first message: in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient: transmitting, via the one or more communication devices, a first message and contact information of a user associated with the electronic device to the first contactable user; and in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria: the first message is transmitted to the first contactable user via the one or more communication devices without transmitting contact information for a user associated with the electronic device.
Example transitory computer-readable storage media are described herein. An example transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with one or more communication devices, wherein a user is associated with the electronic device and the one or more programs include instructions for: receiving a request to transmit a first message to a group of contactable users, the group of contactable users including a first contactable user; and in response to receiving the request to transmit the first message: in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient: transmitting, via the one or more communication devices, a first message and contact information of a user associated with the electronic device to the first contactable user; and in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria: the first message is transmitted to the first contactable user via the one or more communication devices without transmitting contact information for a user associated with the electronic device.
An example electronic device is described herein. An example electronic device includes: one or more communication devices; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, wherein a user is associated with the electronic device and the one or more programs include instructions for: receiving a request to transmit a first message to a group of contactable users, the group of contactable users including a first contactable user; and in response to receiving the request to transmit the first message: in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient: transmitting, via the one or more communication devices, a first message and contact information of a user associated with the electronic device to the first contactable user; and in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria: the first message is transmitted to the first contactable user via the one or more communication devices without transmitting contact information for a user associated with the electronic device.
An example electronic device is described herein. An example electronic device includes one or more communication devices, wherein a user is associated with the electronic device; means for receiving a request to transmit a first message to a group of contactable users, the group of contactable users including a first contactable user; and means for: in response to receiving the request to transmit the first message: in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient: transmitting, via the one or more communication devices, a first message and contact information of a user associated with the electronic device to the first contactable user; and in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria: the first message is transmitted to the first contactable user via the one or more communication devices without transmitting contact information for a user associated with the electronic device.
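The sharing-criteria behavior recited above can be summarized in a short sketch. The following Swift code is a minimal illustration, assuming hypothetical types (ContactableUser, Message, ContactInfo) and reducing the set of sharing criteria to the single "approved recipient" criterion named in the embodiments.

```swift
import Foundation

// Hypothetical model of the sharing-criteria check; names are illustrative.
struct ContactableUser {
    let identifier: String
    let isApprovedRecipient: Bool
}

struct Message {
    let body: String
}

struct ContactInfo {
    let displayName: String
    let avatarData: Data?
}

/// Sends the message to each recipient; contact information of the device's
/// user accompanies the message only for recipients that satisfy the sharing
/// criteria (here reduced to the "approved recipient" criterion).
func send(_ message: Message,
          to recipients: [ContactableUser],
          userContactInfo: ContactInfo,
          transmit: (Message, ContactInfo?, ContactableUser) -> Void) {
    for recipient in recipients {
        if recipient.isApprovedRecipient {
            transmit(message, userContactInfo, recipient)
        } else {
            transmit(message, nil, recipient)
        }
    }
}
```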
Exemplary methods are disclosed herein. An example method includes, at an electronic device having a display device and having one or more communication devices: receiving a first message via the one or more communication devices; receiving a request to display the first message after receiving the first message; and in response to receiving the request to display the first message: in accordance with a determination that the first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criterion that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device: a first message, and a visual indication that updated contact information is available to the first contactable user; and in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying the first message on the display device without displaying a visual indication that updated contact information is available for the first contactable user.
Example non-transitory computer-readable storage media are described herein. An example non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more communication devices, the one or more programs including instructions for: receiving a first message via the one or more communication devices; receiving a request to display the first message after receiving the first message; and in response to receiving the request to display the first message: in accordance with a determination that the first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criterion that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device: a first message, and a visual indication that updated contact information is available to the first contactable user; and in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying the first message on the display device without displaying a visual indication that updated contact information is available for the first contactable user.
Example transitory computer-readable storage media are described herein. An example transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more communication devices, the one or more programs including instructions for: receiving a first message via the one or more communication devices; receiving a request to display the first message after receiving the first message; and in response to receiving the request to display the first message: in accordance with a determination that the first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criterion that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device: a first message, and a visual indication that updated contact information is available to the first contactable user; and in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying the first message on the display device without displaying a visual indication that updated contact information is available for the first contactable user.
An example electronic device is described herein. An example device includes a display device; one or more communication devices; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: receiving a first message via the one or more communication devices; receiving a request to display the first message after receiving the first message; and in response to receiving the request to display the first message: in accordance with a determination that the first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criterion that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device: a first message, and a visual indication that updated contact information is available to the first contactable user; and in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying the first message on the display device without displaying a visual indication that updated contact information is available for the first contactable user.
An example electronic device is described herein. An example device includes a display device; one or more communication devices; means for receiving a first message via the one or more communication devices; means for receiving a request to display the first message after receiving the first message; and means for: in response to receiving the request to display the first message: in accordance with a determination that the first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criterion that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device: a first message, and a visual indication that updated contact information is available to the first contactable user; and in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying the first message on the display device without displaying a visual indication that updated contact information is available for the first contactable user.
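Similarly, the prompting-criteria behavior can be sketched as follows. The names (ReceivedMessage, displayElements, pendingContactUpdates) are hypothetical, and the set of prompting criteria is reduced to the single criterion of having received updated contact information for the sender.

```swift
// Hypothetical sketch of the prompting-criteria check; names are illustrative.
struct ReceivedMessage {
    let sender: String   // identifier of the first contactable user
    let text: String
}

/// Returns the elements to present when the user requests display of the
/// message: the message itself, plus an indication that updated contact
/// information is available when the prompting criteria are met.
func displayElements(for message: ReceivedMessage,
                     pendingContactUpdates: Set<String>) -> [String] {
    if pendingContactUpdates.contains(message.sender) {
        return [message.text, "Updated contact information is available"]
    } else {
        return [message.text]
    }
}
```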
Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Thus, a faster and more efficient method and interface for displaying avatars in various application user interfaces is provided for devices, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace other methods for displaying an avatar in various application user interfaces.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, wherein like reference numerals designate corresponding parts throughout the figures.
Fig. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device according to some embodiments.
Fig. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with some embodiments.
Fig. 5A illustrates a personal electronic device, according to some embodiments.
Fig. 5B is a block diagram illustrating a personal electronic device, according to some embodiments.
Fig. 6A-6V illustrate exemplary user interfaces for displaying an avatar in a sticker application user interface and an avatar keyboard application user interface, according to some embodiments.
Fig. 7 is a flow diagram illustrating a method for displaying an avatar in a sticker application user interface, in accordance with some embodiments.
Fig. 8 is a flow diagram illustrating a method for displaying an avatar in an avatar keyboard application user interface, in accordance with some embodiments.
Fig. 9A-9AG illustrate exemplary user interfaces for displaying an avatar in a contacts application user interface, according to some embodiments.
Fig. 10 is a flow diagram illustrating a method for displaying an avatar in a contacts application user interface, in accordance with some embodiments.
Fig. 11A-11AD illustrate exemplary user interfaces for displaying an avatar in an avatar-editing application user interface, according to some embodiments.
Fig. 12 is a flow diagram illustrating a method for displaying an avatar in an avatar editing application user interface, in accordance with some embodiments.
Fig. 13 is a flow diagram illustrating a method for displaying an avatar in an avatar-editing application user interface, in accordance with some embodiments.
Fig. 14A-14E illustrate an exemplary user interface for displaying a virtual avatar, according to some embodiments.
Fig. 15 is a flow diagram illustrating a method for displaying a virtual avatar, according to some embodiments.
Fig. 16A-16X illustrate exemplary devices and user interfaces for sharing contact information, according to some embodiments.
Fig. 17 is a flow diagram illustrating a method for providing contact information using an electronic device, according to some embodiments.
Fig. 18 is a flow diagram illustrating a method for receiving contact information using an electronic device, according to some embodiments.
Detailed Description
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure, but is instead provided as a description of exemplary embodiments.
Electronic devices need to provide efficient methods and interfaces for displaying avatars in various application user interfaces. For example, existing applications display avatars, but the process for displaying avatars is often cumbersome and inefficient. Moreover, such processes do not provide seamless integration of the avatar with other user interfaces. Techniques for displaying avatars in various application user interfaces are disclosed herein. Such techniques may reduce the cognitive burden placed on users using avatars in various application user interfaces, thereby increasing productivity. Moreover, such techniques may reduce processor power and battery power that would otherwise be wasted on redundant user inputs.
Figs. 1A-1B, 2, 3, 4A-4B, and 5A-5B provide descriptions of exemplary devices for displaying avatars in various application user interfaces. Figs. 6A-6V illustrate exemplary user interfaces for displaying an avatar in a sticker application user interface and an avatar keyboard application user interface, according to some embodiments. Fig. 7 is a flow diagram illustrating a method for displaying an avatar in a sticker application user interface, in accordance with some embodiments. Fig. 8 is a flow diagram illustrating a method for displaying an avatar in an avatar keyboard application user interface, in accordance with some embodiments. The user interfaces in Figs. 6A-6V are used to illustrate the processes described below, including the processes in Figs. 7 and 8. Figs. 9A-9AG illustrate exemplary user interfaces for displaying an avatar in a contacts application user interface, according to some embodiments. Fig. 10 is a flow diagram illustrating a method for displaying an avatar in a contacts application user interface, in accordance with some embodiments. The user interfaces in Figs. 9A-9AG are used to illustrate the processes described below, including the process in Fig. 10. Figs. 11A-11AD illustrate exemplary user interfaces for displaying an avatar in an avatar-editing application user interface, according to some embodiments. Fig. 12 is a flow diagram illustrating a method for displaying an avatar in an avatar-editing application user interface, in accordance with some embodiments. Fig. 13 is a flow diagram illustrating a method for displaying an avatar in an avatar-editing application user interface, in accordance with some embodiments. The user interfaces in Figs. 11A-11AD are used to illustrate the processes described below, including the processes in Figs. 12 and 13. Figs. 14A-14E illustrate an exemplary user interface for displaying a virtual avatar, according to some embodiments. Fig. 15 is a flow diagram illustrating a method for displaying a virtual avatar, according to some embodiments. The user interfaces in Figs. 14A-14E are used to illustrate the processes described below, including the process in Fig. 15. Figs. 16A-16X illustrate exemplary devices and user interfaces for sharing contact information, according to some embodiments. Fig. 17 is a flow diagram illustrating a method for providing contact information using an electronic device, according to some embodiments. Fig. 18 is a flow diagram illustrating a method for receiving contact information using an electronic device, according to some embodiments. The exemplary devices and user interfaces in Figs. 16A-16X are used to illustrate the processes described below, including the processes in Figs. 17 and 18.
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch may be named a second touch and similarly a second touch may be named a first touch without departing from the scope of various described embodiments. The first touch and the second touch are both touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" is optionally interpreted to mean "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining," "in response to determining," "upon detecting [the stated condition or event]," or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and related processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. Other portable electronic devices are optionally used, such as laptops or tablets with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: a mapping application, a rendering application, a word processing application, a website creation application, a disc editing application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or varied for different applications and/or within respective applications. In this way, a common physical architecture of the device (such as a touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and clear to the user.
Attention is now directed to embodiments of portable devices having touch sensitive displays. FIG. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes referred to as a "touch screen" for convenience, and is sometimes referred to or called a "touch-sensitive display system". Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, an input/output (I/O) subsystem 106, other input control devices 116, and an external port 124. The device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on device 100 (e.g., a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or trackpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine the estimated contact force. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereof, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). In some implementations, the surrogate measurement of contact force or pressure is converted into an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as a property of the user input, allowing the user to access additional device functionality that is otherwise inaccessible to the user on smaller-sized devices with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls, such as knobs or buttons).
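As one way to picture the surrogate measurement described above, the following Swift sketch combines readings from several force sensors into a weighted average and compares the estimate to an intensity threshold. The types, weights, and threshold handling are assumptions for illustration, not the device's actual implementation.

```swift
// Illustrative sketch of combining force-sensor readings into an estimated
// contact intensity; the weighting scheme is an assumption.
struct ForceSample {
    let reading: Double   // raw reading from one force sensor
    let weight: Double    // weight reflecting proximity to the contact point
}

/// Weighted average of the individual sensor readings, used as the estimated
/// force of the contact.
func estimatedIntensity(from samples: [ForceSample]) -> Double {
    let totalWeight = samples.reduce(0.0) { $0 + $1.weight }
    guard totalWeight > 0 else { return 0 }
    let weightedSum = samples.reduce(0.0) { $0 + $1.reading * $1.weight }
    return weightedSum / totalWeight
}

/// Compares the estimate to an intensity threshold expressed in the same
/// units as the surrogate measurement.
func exceedsIntensityThreshold(_ samples: [ForceSample],
                               threshold: Double) -> Bool {
    estimatedIntensity(from: samples) > threshold
}
```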
As used in this specification and claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that is to be detected by a user with the user's sense of touch. For example, where a device or component of a device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other portion of a user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of the touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuation button. In some cases, the user will feel a tactile sensation, such as a "press click" or "release click," even when the physical actuation button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement is not moving. As another example, even when there is no change in the smoothness of the touch sensitive surface, the movement of the touch sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch sensitive surface. While such interpretation of touch by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touch are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click down," "click up," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the sensory perception of a typical (or ordinary) user.
It should be understood that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
The memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripheral interface 118 may be used to couple the input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and to process data. In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also known as the World Wide Web (WWW), intranets, and/or wireless networks, such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), and other devices via wireless communication. RF circuitry 108 optionally includes well-known circuitry for detecting Near Field Communication (NFC) fields, such as by short-range communication radios. The wireless communication optionally uses any of a number of communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), Voice over Internet Protocol (VoIP), Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to the speaker 111. The speaker 111 converts the electrical signals into sound waves audible to a human. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuit 110 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as a touch screen 112 and other input control devices 116, to a peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, a depth camera controller 169, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. One or more input controllers 160 receive/transmit electrical signals from/to other input control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, input controller 160 is optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of the speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
A quick press of the push button optionally disengages the lock of touch screen 112 or optionally begins the process of unlocking the device using gestures on the touch screen, as described in U.S. patent application 11/322,549 (i.e., U.S. Patent No. 7,657,849), entitled "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005, which is hereby incorporated by reference in its entirety. A long press of a button (e.g., 206) optionally turns the device 100 on or off. The functionality of one or more buttons is optionally customizable by the user. The touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
Touch-sensitive display 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives and/or transmits electrical signals to and/or from touch screen 112. Touch screen 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output optionally corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on tactile sensation and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc.
The touch sensitive display in some embodiments of touch screen 112 is optionally similar to the multi-touch sensitive trackpads described in the following U.S. patents: 6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman et al.), and/or U.S. patent publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, while touch sensitive trackpads do not provide visual output.
In some embodiments, the touch sensitive display of touch screen 112 is as described in the following patent applications: (1) U.S. patent application No.11/381,313 entitled "Multipoint Touch Surface Controller" filed May 2, 2006; (2) U.S. patent application No.10/840,862 entitled "Multipoint Touchscreen" filed May 6, 2004; (3) U.S. patent application No.10/903,964 entitled "Gestures For Touch Sensitive Input Devices" filed July 30, 2004; (4) U.S. patent application No.11/048,264 entitled "Gestures For Touch Sensitive Input Devices" filed January 31, 2005; (5) U.S. patent application No.11/038,590 entitled "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices" filed January 18, 2005; (6) U.S. patent application No.11/228,758 entitled "Virtual Input Device Placement On A Touch Screen User Interface" filed September 16, 2005; (7) U.S. patent application No.11/228,700 entitled "Operation Of A Computer With A Touch Screen Interface" filed September 16, 2005; (8) U.S. patent application No.11/228,737 entitled "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard" filed September 16, 2005; and (9) U.S. patent application No.11/367,749 entitled "Multi-Functional Hand-Held Device" filed March 3, 2006. All of these applications are incorporated herein by reference in their entirety.
The touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of about 160 dpi. The user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, finger, or the like. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the larger contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, in addition to a touch screen, device 100 optionally includes a trackpad for activating or deactivating particular functions. In some embodiments, the trackpad is a touch-sensitive area of the device that, unlike a touchscreen, does not display visual output. The trackpad is optionally a touch-sensitive surface separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in a portable device.
The device 100 optionally further includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 optionally includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 164 receives light from the environment projected through one or more lenses and converts the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that images of the user are optionally acquired for the video conference while the user views other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 164 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that a single optical sensor 164 is used with a touch screen display for both video conferencing and still image and/or video image capture.
The device 100 optionally also includes one or more depth camera sensors 175. FIG. 1A shows a depth camera sensor coupled to a depth camera controller 169 in I/O subsystem 106. The depth camera sensor 175 receives data from the environment to create a three-dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor). In some implementations, along with the imaging module 143 (also referred to as a camera module), the depth camera sensor 175 is optionally used to determine depth maps of different portions of the image captured by the imaging module 143. In some embodiments, the depth camera sensor is located in the front of the device 100, such that user images with depth information are optionally acquired for the video conference while the user views other video conference participants on the touch screen display, and a self-portrait with depth map data is captured. In some embodiments, the depth camera sensor 175 is located at the rear of the device, or at the rear and front of the device 100. In some implementations, the position of the depth camera sensor 175 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that the depth camera sensor 175 is used with a touch screen display for both video conferencing and still image and/or video image capture.
In some implementations, a depth map (e.g., a depth map image) includes information (e.g., values) related to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor). In one embodiment of the depth map, each depth pixel defines the location in the Z-axis of the viewpoint at which its corresponding two-dimensional pixel is located. In some implementations, the depth map is composed of pixels, where each pixel is defined by a value (e.g., 0 to 255). For example, a "0" value represents a pixel located farthest from a viewpoint (e.g., camera, optical sensor, depth camera sensor) in a "three-dimensional" scene, and a "255" value represents a pixel located closest to the viewpoint in the "three-dimensional" scene. In other embodiments, the depth map represents the distance between an object in the scene and the plane of the viewpoint. In some implementations, the depth map includes information about the relative depths of various features of the object of interest in the field of view of the depth camera (e.g., the relative depths of the eyes, nose, mouth, ears of the user's face). In some embodiments, the depth map comprises information enabling the apparatus to determine a contour of the object of interest in the z-direction.
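A minimal sketch of such a depth map, assuming the 0-to-255 value convention described above (0 farthest from the viewpoint, 255 closest), might look like the following; the type and method names are illustrative only.

```swift
// Minimal sketch of the depth-map representation described above; the value
// convention (0 = farthest, 255 = closest) follows the text, everything else
// is an assumption for illustration.
struct DepthMap {
    let width: Int
    let height: Int
    let pixels: [UInt8]   // row-major; one depth value per 2-D pixel

    /// Depth value of the pixel at (x, y); 0 is farthest from the viewpoint,
    /// 255 is closest.
    func depth(x: Int, y: Int) -> UInt8 {
        pixels[y * width + x]
    }

    /// Pixel closest to the viewpoint, e.g. the tip of the nose in a face scan.
    func closestPixel() -> (x: Int, y: Int)? {
        guard let index = pixels.indices.max(by: { pixels[$0] < pixels[$1] })
        else { return nil }
        return (index % width, index / width)
    }
}
```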
Device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some implementations, at least one contact intensity sensor is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is optionally coupled to the input controller 160 in the I/O subsystem 106. The proximity sensor 166 optionally performs as described in the following U.S. patent applications: No.11/241,839, entitled "Proximity Detector In Handheld Device"; No.11/240,788, entitled "Proximity Detector In Handheld Device"; No.11/620,702, entitled "Using Ambient Light Sensor To Augment Proximity Sensor Output"; No.11/586,862, entitled "Automated Response To And Sensing Of User Activity In Portable Devices"; and No.11/638,251, entitled "Methods And Systems For Automatic Configuration Of Peripherals", which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables the touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a haptic output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. Tactile output generator 167 optionally includes one or more electro-acoustic devices, such as speakers or other audio components; and/or an electromechanical device that converts energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts an electrical signal into a tactile output on the device). Tactile output generator 167 receives haptic feedback generation instructions from haptic feedback module 133 and generates haptic output on device 100 that can be felt by a user of device 100. In some embodiments, at least one tactile output generator is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in the following U.S. patent publications: U.S. patent publication No.20050190059, entitled "Acceleration-Based Detection System For Portable Electronic Devices", and U.S. patent publication No.20060017692, entitled "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated herein by reference in their entirety. In some embodiments, information is displayed in a portrait view or a landscape view on the touch screen display based on analysis of data received from one or more accelerometers. Device 100 optionally includes a magnetometer and a GPS (or GLONASS or other global navigation system) receiver in addition to accelerometer 168 for obtaining information about the position and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, and an application program (or set of instructions) 136. Further, in some embodiments, memory 102 (fig. 1A) or 370 (fig. 3) stores device/global internal state 157, as shown in fig. 1A and 3. Device/global internal state 157 includes one or more of: an active application state indicating which applications (if any) are currently active; display state indicating what applications, views, or other information occupy various areas of the touch screen display 112; sensor status, including information obtained from the various sensors of the device and the input control device 116; and location information regarding the location and/or pose of the device.
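The kinds of information listed for device/global internal state 157 could be grouped roughly as in the following Swift sketch; the type and field names are assumptions for illustration and are not drawn from the embodiments.

```swift
// Hypothetical grouping of the device/global internal state described above;
// field names are illustrative only.
struct DeviceGlobalInternalState {
    var activeApplications: Set<String>    // which applications, if any, are currently active
    var displayState: [String: String]     // which application, view, or other information
                                           // occupies each region of the touch screen display
    var sensorState: [String: Double]      // latest readings from the device's sensors and
                                           // other input control devices
    var locationInfo: (latitude: Double, longitude: Double)?
    var orientation: Orientation           // pose of the device

    enum Orientation { case portrait, landscape }
}
```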
Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
Communications module 128 facilitates communications with other devices through one or more external ports 124 and also includes various software components for processing data received by RF circuitry 108 and/or external ports 124. External port 124 (e.g., Universal Serial Bus (USB), FireWire, etc.) is adapted to couple directly to other devices or indirectly through a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch sensitive devices (e.g., a trackpad or physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to contact detection, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining contact intensity (e.g., force or pressure of contact, or a substitute for force or pressure of contact), determining whether there is movement of contact and tracking movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single point contacts (e.g., single finger contacts) or multiple point simultaneous contacts (e.g., "multi-touch"/multiple finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touch pad.
In some embodiments, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 100). For example, the mouse "click" threshold of the trackpad or touchscreen can be set to any one of a wide range of predefined thresholds without changing the trackpad or touchscreen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds of a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
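The software-adjustable thresholds described above can be pictured with a small sketch in which threshold values are plain data that can be changed, individually or all at once, without touching hardware. The names and default values below are assumptions.

```swift
// Illustrative sketch of software-adjustable intensity thresholds; names and
// default values are assumptions, not values from the embodiments.
struct IntensityThresholds {
    var lightPress: Double = 0.25
    var deepPress: Double = 0.75

    /// Scales all thresholds at once, analogous to adjusting a single
    /// system-level "intensity" setting.
    mutating func adjustAll(by factor: Double) {
        lightPress *= factor
        deepPress *= factor
    }
}

/// Determines whether the user has "clicked" based on the current estimated
/// intensity and the configured threshold.
func didClick(intensity: Double, thresholds: IntensityThresholds) -> Bool {
    intensity >= thresholds.lightPress
}
```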
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at the location of the icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event.
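The gesture patterns described above (finger-down followed by finger-up at substantially the same location for a tap, with intervening movement for a swipe) can be illustrated with the following sketch; the event model and the distance cutoff are assumptions, not the module's actual logic.

```swift
// Illustrative classification of a contact pattern as a tap or a swipe from a
// sequence of contact events; event names and tapRadius are assumptions.
enum ContactEvent {
    case fingerDown(x: Double, y: Double)
    case fingerDrag(x: Double, y: Double)
    case fingerUp(x: Double, y: Double)
}

enum Gesture {
    case tap
    case swipe
    case unknown
}

func classify(_ events: [ContactEvent], tapRadius: Double = 10.0) -> Gesture {
    guard let first = events.first, let last = events.last,
          case let .fingerDown(startX, startY) = first,
          case let .fingerUp(endX, endY) = last
    else { return .unknown }
    // Finger-down followed by finger-up at (substantially) the same location
    // is a tap; a lift-off far from the touch-down point indicates a swipe.
    let distance = ((endX - startX) * (endX - startX)
                  + (endY - startY) * (endY - startY)).squareRoot()
    return distance <= tapRadius ? .tap : .swipe
}
```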
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual characteristics) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for specifying a graphic to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions for use by haptic output generator 167 in generating haptic outputs at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications such as contacts 137, email 140, IM 141, browser 147, and any other application that requires text input.
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing; to the camera 143 as picture/video metadata; and to applications that provide location-based services, such as weather desktop widgets, local yellow pages desktop widgets, and map/navigation desktop widgets).
Application 136 optionally includes the following modules (or sets of instructions), or a subset or superset thereof:
a contacts module 137 (sometimes referred to as an address book or contact list);
a phone module 138;
a video conferencing module 139;
an email client module 140;
an Instant Messaging (IM) module 141;
fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a video player module;
a music player module;
a browser module 147;
a calendar module 148;
a desktop applet module 149, optionally including one or more of: a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm clock desktop applet 149-4, a dictionary desktop applet 149-5, and other desktop applets acquired by the user, as well as a user-created desktop applet 149-6;
a desktop applet creator module 150 for forming a user-created desktop applet 149-6;
a search module 151;
a video and music player module 152 that incorporates a video player module and a music player module;
a notepad module 153;
a map module 154; and/or
an online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, rendering applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is optionally used to manage an address book or contact list (e.g., stored in memory 102 or in application internal state 192 of contacts module 137 in memory 370), including: adding one or more names to the address book; deleting names from the address book; associating a phone number, email address, physical address, or other information with a name; associating an image with a name; sorting and ordering names; providing a telephone number or email address to initiate and/or facilitate communications over telephone 138, video conferencing module 139, email 140, or IM 141; and so on.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is optionally used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify an entered telephone number, dial a corresponding telephone number, conduct a conversation, and disconnect or hang up when the conversation is complete. As noted above, the wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephony module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate video conferences between the user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions to create, send, receive, and manage emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send an email with a still image or a video image captured by the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for: inputting a sequence of characters corresponding to an instant message, modifying previously input characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Messaging Service (MMS) protocol for a phone-based instant message or using XMPP, SIMPLE, or IMPS for an internet-based instant message), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments as are supported in MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions for creating a workout (e.g., having time, distance, and/or calorie burning goals); communicating with fitness sensors (sports equipment); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for fitness; and displaying, storing and transmitting fitness data.
In conjunction with touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or video (including video streams) and storing them in the memory 102, modifying features of the still images or video, or deleting the still images or video from the memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide show or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet according to user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet module 149 is a mini-application that is optionally downloaded and used by a user (e.g., weather desktop applet 149-1, stock market desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) or created by the user (e.g., user-created desktop applet 149-6). In some embodiments, the desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, the desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applet).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet creator module 150 is optionally used by a user to create a desktop applet (e.g., convert a user-specified portion of a web page into a desktop applet).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speakers 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions to create and manage notepads, backlogs, and the like according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to stores and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions for: allowing a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the link to the particular online video is sent using instant messaging module 141 instead of email client module 140. Additional description of online video applications can be found in U.S. provisional patent application No. 60/936,562, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. patent application No. 11/968,067, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, which are both hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. For example, the video player module is optionally combined with the music player module into a single module (e.g., the video and music player module 152 in fig. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a trackpad. By using a touch screen and/or trackpad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
The predefined set of functions performed exclusively through the touchscreen and/or trackpad optionally includes navigation between user interfaces. In some embodiments, the trackpad, when touched by a user, navigates device 100 from any user interface displayed on device 100 to a main, home, or root menu. In such embodiments, a "menu button" is implemented using a touch pad. In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touchpad.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments. In some embodiments, memory 102 (FIG. 1A) or memory 370 (FIG. 3) includes event classifier 170 (e.g., in operating system 126) and corresponding application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
Event sorter 170 receives the event information and determines application 136-1 and application view 191 of application 136-1 to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, application 136-1 includes an application internal state 192 that indicates one or more current application views that are displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, the device/global internal state 157 is used by the event classifier 170 to determine which application(s) are currently active, and the application internal state 192 is used by the event classifier 170 to determine the application view 191 to which to deliver event information.
In some embodiments, the application internal state 192 includes additional information, such as one or more of: resume information to be used when the application 136-1 resumes execution, user interface state information indicating that information is being displayed or is ready for display by the application 136-1, a state queue for enabling a user to return to a previous state or view of the application 136-1, and a repeat/undo queue of previous actions taken by the user.
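Purely as an illustration, the kinds of state listed above could be grouped into a structure such as the following; the field names and types are assumptions made for the sketch, not the data structures of this disclosure.

```swift
import Foundation

/// An illustrative grouping of per-application internal state.
struct ApplicationInternalState {
    var resumeInfo: [String: String] = [:]      // information used when the application resumes execution
    var displayedViews: [String] = []           // user interface state: views displayed or ready for display
    var stateQueue: [String] = []               // previous states/views the user can return to
    var redoUndoQueue: [String] = []            // previous actions taken by the user, for repeat/undo
}
```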
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112 as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or sensors such as proximity sensor 166, one or more accelerometers 168, and/or microphone 113 (via audio circuitry 110). Information received by peripheral interface 118 from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, peripheral interface 118 transmits the event information. In other embodiments, peripheral interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or receiving an input for more than a predetermined duration).
In some embodiments, event classifier 170 further includes hit view determination module 172 and/or active event recognizer determination module 173.
When touch-sensitive display 112 displays more than one view, hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a programmatic level within a programmatic or view hierarchy of applications. For example, the lowest level view in which a touch is detected is optionally referred to as a hit view, and the set of events identified as correct inputs is optionally determined based at least in part on the hit view of the initial touch that initiated the touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When the application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
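A minimal sketch of hit view determination follows, assuming each view knows its frame and subviews and that coordinates are expressed in a shared coordinate space; the types and names are illustrative.

```swift
import Foundation

/// Minimal view node used only to sketch hit-view determination.
final class ViewNode {
    let frame: (x: Double, y: Double, width: Double, height: Double)
    let subviews: [ViewNode]

    init(frame: (x: Double, y: Double, width: Double, height: Double), subviews: [ViewNode] = []) {
        self.frame = frame
        self.subviews = subviews
    }

    func contains(x: Double, y: Double) -> Bool {
        x >= frame.x && x < frame.x + frame.width && y >= frame.y && y < frame.y + frame.height
    }
}

/// Returns the lowest view in the hierarchy that contains the location of the initiating
/// sub-event -- the hit view that should then receive all sub-events of the same touch.
func hitView(in root: ViewNode, x: Double, y: Double) -> ViewNode? {
    guard root.contains(x: x, y: y) else { return nil }
    for sub in root.subviews {
        if let deeper = hitView(in: sub, x: x, y: y) {
            return deeper
        }
    }
    return root
}
```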
The active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of the sub-event are actively participating views, and thus determines that all actively participating views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely confined to the area associated with a particular view, the higher views in the hierarchy will remain actively participating views.
The event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments that include active event recognizer determination module 173, event dispatcher module 174 delivers event information to event recognizers determined by active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue, which is retrieved by the respective event receiver 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, application 136-1 includes event classifier 170. In yet another embodiment, the event classifier 170 is a stand-alone module or is part of another module stored in the memory 102 (such as the contact/motion module 130).
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for processing touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit or a higher-level object from which application 136-1 inherits methods and other properties. In some embodiments, the respective event handlers 190 comprise one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Additionally, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
The corresponding event recognizer 180 receives event information (e.g., event data 179) from the event classifier 170 and recognizes events from the event information. The event recognizer 180 includes an event receiver 182 and an event comparator 184. In some embodiments, event recognizer 180 also includes metadata 183 and at least a subset of event delivery instructions 188 (which optionally include sub-event delivery instructions).
The event receiver 182 receives event information from the event sorter 170. The event information includes information about a sub-event such as a touch or touch movement. According to the sub-event, the event information further includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information optionally also includes the velocity and direction of the sub-event. In some embodiments, the event comprises rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), such as event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event (187) include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double tap on a displayed object. For example, the double tap includes a first touch on the displayed object for a predetermined length of time (touch start), a first lift-off for a predetermined length of time (touch end), a second touch on the displayed object for a predetermined length of time (touch start), and a second lift-off for a predetermined length of time (touch end). In another example, the definition of event 2 (187-2) is a drag on a displayed object. For example, the drag includes a touch (or contact) on the displayed object for a predetermined length of time, movement of the touch across touch-sensitive display 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
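A minimal sketch of event definitions expressed as predefined sub-event sequences, with the comparator's matching step reduced to an equality check, follows; the timing constraints described above are omitted, and the enum cases and names are illustrative.

```swift
import Foundation

/// Sub-event kinds compared against predefined event definitions (illustrative names).
enum SubEvent: Equatable { case touchStart, touchEnd, touchMove, touchCancel }

/// An event definition is a predefined sequence of sub-events, e.g. a double tap or a drag;
/// the predetermined timing requirements are omitted from this sketch.
struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

let doubleTap = EventDefinition(name: "event 1: double tap",
                                sequence: [.touchStart, .touchEnd, .touchStart, .touchEnd])
let drag = EventDefinition(name: "event 2: drag",
                           sequence: [.touchStart, .touchMove, .touchEnd])

/// The comparator's job in miniature: does the observed sub-event sequence match a definition?
func matches(_ observed: [SubEvent], _ definition: EventDefinition) -> Bool {
    observed == definition.sequence
}
```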
In some embodiments, event definition 187 includes definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event (187) further includes a delay action that delays delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to the event type of the event identifier.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any event in the event definition 186, the respective event recognizer 180 enters an event not possible, event failed, or event ended state, after which subsequent sub-events of the touch-based gesture are ignored. In this case, other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively participating event recognizers. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to different levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are identified, the respective event recognizer 180 activates the event handler 190 associated with the event. In some embodiments, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending of) sub-events to a respective hit view. In some embodiments, the event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the sequence of sub-events or to actively participating views. Event handlers associated with the sequence of sub-events or with actively participating views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, the data updater 176 updates a phone number used in the contacts module 137 or stores a video file used in the video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user interface object or updates the location of a user interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends the display information to graphics module 132 for display on the touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the above discussion of event processing with respect to user touches on a touch sensitive display also applies to other forms of user input utilizing an input device to operate multifunction device 100, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally in conjunction with single or multiple keyboard presses or holds; contact movements on the touchpad, such as tapping, dragging, scrolling, etc.; inputting by a stylus; movement of the device; verbal instructions; detected eye movement; inputting biological characteristics; and/or any combination thereof, is optionally used as input corresponding to sub-events defining the event to be identified.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within the User Interface (UI) 200. In this embodiment, as well as other embodiments described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 100. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that swipes over the application icon optionally does not select the corresponding application.
Device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In some embodiments, device 100 includes touch screen 112, menu buttons 204, push buttons 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and docking/charging external port 124. Pressing the button 206 optionally serves to turn the device on/off by pressing the button and holding the button in a pressed state for a predefined time interval; locking the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or unlocking the device or initiating an unlocking process. In an alternative embodiment, device 100 also accepts voice input through microphone 113 for activating or deactivating certain functions. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on touch screen 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop, desktop, tablet, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 optionally includes circuitry (sometimes referred to as a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output (I/O) interface 330 with a display 340, typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 (e.g., similar to one or more tactile output generators 167 described above with reference to fig. 1A) for generating tactile outputs on device 300, sensors 359 (e.g., optical sensors, acceleration sensors, proximity sensors, touch-sensitive sensors, and/or contact intensity sensors (similar to one or more contact intensity sensors 165 described above with reference to fig. 1A)). Memory 370 includes high speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to or a subset of the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (fig. 1A). Further, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above modules corresponds to a set of instructions for performing a function described above. The modules or programs (e.g., sets of instructions) described above need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces optionally implemented on, for example, portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on portable multifunction device 100 according to some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
one or more signal strength indicators 402 of one or more wireless communications, such as cellular signals and Wi-Fi signals;
time 404;
a Bluetooth indicator 405;
a battery status indicator 406;
a tray 408 with icons for commonly used applications, such as:
an icon 416 of the phone module 138 labeled "phone", the icon 416 optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of the email client module 140 labeled "mail", the icon 418 optionally including an indicator 410 of the number of unread emails;
icon 420 for the browser module 147 labeled "browser"; and
icon 422 labeled "iPod" for video and music player module 152 (also known as iPod (trademark of Apple inc.) module 152); and
icons for other applications, such as:
icon 424 of IM module 141 labeled "message";
icon 426 of calendar module 148 labeled "calendar";
icon 428 of image management module 144 labeled "photo";
icon 430 of camera module 143 labeled "camera";
icon 432 for online video module 155 labeled "online video";
an icon 434 of the stock market desktop applet 149-2 labeled "stock market";
Icon 436 of map module 154 labeled "map";
icon 438 labeled "weather" for weather desktop applet 149-1;
icon 440 of alarm clock desktop applet 149-4 labeled "clock";
icon 442 labeled "fitness support" for fitness support module 142;
an icon 444 of the notepad module 153 labeled "notepad"; and
an icon 446 of a settings application or module, labeled "settings", which provides access to the settings of device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, icon 422 of video and music player module 152 is labeled "music" or "music player". Other labels are optionally used for the various application icons. In some embodiments, the label of a respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet or trackpad 355 of fig. 3) separate from a display 450 (e.g., touch screen display 112). Device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of sensors 359) to detect the intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 357 to generate tactile outputs for a user of device 300.
Although some of the examples below will be given with reference to input on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to a primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). As such, when the touch-sensitive surface (e.g., 451 in fig. 4B) is separated from the display (450 in fig. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462 and their movements) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
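As a sketch only, mapping a contact on a separate touch-sensitive surface to the corresponding location on the display can be done by scaling along each primary axis; the types, names, and example dimensions below are assumptions.

```swift
import Foundation

/// Illustrative dimensions of a touch-sensitive surface (e.g., a trackpad) or a display.
struct Surface {
    let width: Double
    let height: Double
}

/// Maps a contact detected at (x, y) on the touch-sensitive surface to the corresponding
/// location on the display by proportional scaling along each primary axis, so that input
/// on the separate surface manipulates the user interface shown on the display.
func displayLocation(forContactAt x: Double, _ y: Double,
                     on surface: Surface, display: Surface) -> (x: Double, y: Double) {
    (x / surface.width * display.width, y / surface.height * display.height)
}

// Example: a contact at the center of a 600x450 surface maps to the center of a 2560x1600 display.
let mapped = displayLocation(forContactAt: 300, 225,
                             on: Surface(width: 600, height: 450),
                             display: Surface(width: 2560, height: 1600))
// mapped == (x: 1280.0, y: 800.0)
```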
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contact, single-finger tap gesture, finger swipe gesture), it should be understood that in some embodiments one or more of these finger inputs are replaced by inputs from another input device (e.g., mouse-based inputs or stylus inputs). For example, the swipe gesture is optionally replaced by a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a flick gesture is optionally replaced by a mouse click (e.g., instead of detecting a contact, followed by ceasing to detect a contact) while the cursor is over the location of the flick gesture. Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or mouse and finger contacts are optionally used simultaneously.
Fig. 5A illustrates an exemplary personal electronic device 500. The device 500 includes a body 502. In some embodiments, device 500 may include some or all of the features described with respect to devices 100 and 300 (e.g., fig. 1A-4B). In some embodiments, the device 500 has a touch-sensitive display screen 504, hereinafter referred to as a touch screen 504. Instead of or in addition to the touch screen 504, the device 500 has a display and a touch-sensitive surface. As with devices 100 and 300, in some embodiments, touch screen 504 (or touch-sensitive surface) optionally includes one or more intensity sensors for detecting the intensity of an applied contact (e.g., touch). One or more intensity sensors of the touch screen 504 (or touch-sensitive surface) may provide output data representing the intensity of a touch. The user interface of device 500 may respond to the touch based on the intensity of the touch, meaning that different intensities of the touch may invoke different user interface operations on device 500.
Exemplary techniques for detecting and processing touch intensity are found, for example, in the following related patent applications: international patent Application No. PCT/US2013/040061, issued to WIPO patent publication No. WO/2013/169849, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects reforming to an Application", filed on 8.5.2013; and International patent application Ser. No. PCT/US2013/069483, entitled "Device, Method, and Graphical User Interface for transiting Between Input to Display Output Relationships", filed 2013, 11/11, published as WIPO patent publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 500 has one or more input mechanisms 506 and 508. The input mechanisms 506 and 508 (if included) may be in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow for attachment of the device 500 with, for example, a hat, glasses, earrings, necklace, shirt, jacket, bracelet, watchband, pants, belt, shoe, purse, backpack, and the like. These attachment mechanisms allow the user to wear the device 500.
Fig. 5B illustrates an exemplary personal electronic device 500. In some embodiments, the device 500 may include some or all of the components described with reference to fig. 1A, 1B, and 3. The device 500 has a bus 512 that operatively couples an I/O portion 514 with one or more computer processors 516 and a memory 518. The I/O portion 514 may be connected to the display 504, which may have a touch-sensitive member 522 and optionally an intensity sensor 524 (e.g., a contact intensity sensor). Further, I/O portion 514 may interface with communication unit 530 for receiving application programs and operating system data using Wi-Fi, Bluetooth, Near Field Communication (NFC), cellular, and/or other wireless communication techniques. Device 500 may include input mechanisms 506 and/or 508. For example, the input mechanism 506 is optionally a rotatable input device or a depressible and rotatable input device. In some examples, the input mechanism 508 is optionally a button.
In some examples, the input mechanism 508 is optionally a microphone. The personal electronic device 500 optionally includes various sensors, such as a GPS sensor 532, an accelerometer 534, an orientation sensor 540 (e.g., a compass), a gyroscope 536, a motion sensor 538, and/or combinations thereof, all of which are operatively connected to the I/O section 514.
The memory 518 of the personal electronic device 500 may include one or more non-transitory computer-readable storage media for storing computer-executable instructions that, when executed by the one or more computer processors 516, may cause the computer processors to perform techniques including processes 700, 800, 1000, 1200, 1300, 1500, 1700, and 1800 (fig. 7, 8, 10, 12, 13, 15, 17, and 18), for example. A computer readable storage medium may be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer readable storage medium may include, but is not limited to, magnetic storage devices, optical storage devices, and/or semiconductor storage devices. Examples of such storage devices include magnetic disks, optical disks based on CD, DVD, or blu-ray technology, and persistent solid state memory such as flash memory, solid state drives, and the like. The personal electronic device 500 is not limited to the components and configuration of fig. 5B, but may include other components or additional components in a variety of configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object that is optionally displayed on a display screen of device 100, 300, and/or 500 (fig. 1A, 3, and 5A-5B). For example, images (e.g., icons), buttons, and text (e.g., hyperlinks) optionally each constitute an affordance.
As used herein, the term "focus selector" refers to an input element that is used to indicate the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a contact detected on the touch screen serves as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by moving the focus from one button to another using tab or arrow keys); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by the user to deliver the user's intended interaction with the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a trackpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (as opposed to other user interface elements shown on the device display).
As used in the specification and in the claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting contact, before detecting contact liftoff, before or after detecting contact start movement, before or after detecting contact end, before or after detecting an increase in intensity of contact, and/or before or after detecting a decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensity of the contact, a mean value of the intensity of the contact, an average value of the intensity of the contact, a value at the top 10% of the intensity of the contact, a half-maximum value of the intensity of the contact, a 90% maximum value of the intensity of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact whose characteristic intensity does not exceed the first threshold results in a first operation, a contact whose characteristic intensity exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact whose characteristic intensity exceeds the second threshold results in a third operation. In some embodiments, a comparison between the feature strengths and one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform the respective operation or to forgo performing the respective operation) rather than to determine whether to perform the first operation or the second operation.
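A minimal sketch of deriving a characteristic intensity from a set of intensity samples (here, their mean) and mapping it onto one of three operations follows; the threshold values are illustrative and are not taken from this disclosure.

```swift
import Foundation

/// One of the options mentioned above: the characteristic intensity is the mean of the samples.
func characteristicIntensity(of samples: [Double]) -> Double {
    guard !samples.isEmpty else { return 0 }
    return samples.reduce(0, +) / Double(samples.count)
}

enum Operation { case first, second, third }

/// Compare the characteristic intensity against a first and a second intensity threshold.
func operation(forSamples samples: [Double],
               firstThreshold: Double = 0.3,
               secondThreshold: Double = 0.6) -> Operation {
    let intensity = characteristicIntensity(of: samples)
    if intensity <= firstThreshold { return .first }    // does not exceed the first threshold
    if intensity <= secondThreshold { return .second }  // exceeds the first but not the second
    return .third                                       // exceeds the second threshold
}
```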
In some implementations, a portion of the gesture is recognized for determining the feature intensity. For example, the touch-sensitive surface optionally receives a continuous swipe contact that transitions from a starting location and reaches an ending location where the contact intensity increases. In this example, the characteristic intensity of the contact at the end location is optionally based on only a portion of the continuous swipe contact, rather than the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is optionally applied to the intensity of the swipe contact before determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: a non-weighted moving average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some cases, these smoothing algorithms eliminate narrow spikes or dips in the intensity of the swipe contact for the purpose of determining the feature intensity.
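An unweighted sliding-average smoother of the kind mentioned above might look like the following; the window size is an assumed parameter.

```swift
import Foundation

/// Smooths sampled swipe intensities with an unweighted sliding average, removing narrow
/// spikes or dips before the characteristic intensity is determined.
func smoothed(_ intensities: [Double], window: Int = 3) -> [Double] {
    guard window > 1, intensities.count >= window else { return intensities }
    return intensities.indices.map { i in
        let lo = max(0, i - window / 2)
        let hi = min(intensities.count - 1, i + window / 2)
        let slice = intensities[lo...hi]
        return slice.reduce(0, +) / Double(slice.count)
    }
}
```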
Contact intensity on the touch-sensitive surface is optionally characterized relative to one or more intensity thresholds, such as a contact detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform the operations typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations different from those typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, when a contact is detected whose characteristic intensity is below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold below which the contact is no longer detected), the device will move the focus selector in accordance with movement of the contact on the touch-sensitive surface without performing the operations associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface drawings.
Increasing the contact characteristic intensity from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a "light press" input. Increasing the contact characteristic intensity from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a "deep press" input. Increasing the contact characteristic intensity from an intensity below the contact detection intensity threshold to an intensity between the contact detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting a contact on the touch surface. The decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold to an intensity below the contact detection intensity threshold is sometimes referred to as detecting lift-off of the contact from the touch surface. In some embodiments, the contact detection intensity threshold is zero. In some embodiments, the contact detection intensity threshold is greater than zero.
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting an increase in intensity of the respective contact above a press input intensity threshold (e.g., a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the press input threshold (e.g., an "up stroke" of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in contact intensity from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in contact intensity to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., depending on the circumstances, the increase in contact intensity or the decrease in contact intensity).
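A sketch of press-input detection with intensity hysteresis to avoid jitter follows; the 75% hysteresis proportion is one of the values mentioned above, and the state machine, names, and example intensities are illustrative assumptions.

```swift
import Foundation

/// Detects "down stroke" and "up stroke" transitions of a press input using hysteresis:
/// the press begins when intensity rises above the press threshold and ends only when it
/// falls below the (lower) hysteresis threshold.
struct PressDetector {
    let pressThreshold: Double
    private(set) var isPressed = false
    var hysteresisThreshold: Double { pressThreshold * 0.75 }

    init(pressThreshold: Double) {
        self.pressThreshold = pressThreshold
    }

    /// Feed successive contact intensities; returns a description of any threshold crossing.
    mutating func update(intensity: Double) -> String? {
        if !isPressed, intensity >= pressThreshold {
            isPressed = true
            return "down stroke of the press input"
        }
        if isPressed, intensity <= hysteresisThreshold {
            isPressed = false
            return "up stroke of the press input"
        }
        return nil   // intermediate fluctuations between the two thresholds are ignored
    }
}

// Example: intensities 0.2, 0.55, 0.48, 0.52, 0.3 with a 0.5 press threshold yield one
// down stroke (at 0.55) and one up stroke (at 0.3); the dip to 0.48 does not end the press.
var detector = PressDetector(pressThreshold: 0.5)
let crossings = [0.2, 0.55, 0.48, 0.52, 0.3].compactMap { detector.update(intensity: $0) }
```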
For ease of explanation, optionally, a description of an operation performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is triggered in response to detection of any of the following: the contact intensity increases above the press input intensity threshold, the contact intensity increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the contact intensity decreases below the press input intensity threshold, and/or the contact intensity decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples in which operations are described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operations are optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold that corresponds to and is less than the press input intensity threshold.
Attention is now directed to embodiments of a user interface ("UI") and associated processes implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Fig. 6A-6V illustrate exemplary user interfaces for displaying an avatar in a sticker application user interface and an avatar keyboard application user interface, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 7 and 8.
Fig. 6A-6V illustrate exemplary user inputs and corresponding changes to one or more user interfaces that may be displayed on an electronic device, such as electronic device 600 shown in fig. 6A. Device 600 includes a display 601 (in some cases, the display is a touch-sensitive display) and a camera 602 (in some embodiments, the camera includes an image sensor capable of capturing data representing a portion of a spectrum (e.g., visible, infrared, or ultraviolet light)). In some embodiments, camera 602 includes multiple image sensors and/or other types of sensors. In addition to capturing data representing sensed light, in some embodiments, camera 602 can capture other types of data such as depth data. For example, in some embodiments, the camera 602 also captures depth data using speckle, time-of-flight, parallax, or focus based techniques. Image data captured by device 600 using camera 602 includes data corresponding to a portion of the spectrum of a scene within the camera's field of view. Additionally, in some embodiments, the captured image data further includes depth data for the light data. In some other embodiments, the captured image data comprises data sufficient to determine or generate depth data for the data of the portion of the spectrum. In some embodiments, the device 600 includes one or more features of the device 100, 300, or 500.
In some examples, the electronic device 600 includes a depth camera, such as an infrared camera, a thermal imaging camera, or a combination thereof. In some examples, the device further includes a light emitting device (e.g., a light projector), such as an IR floodlight, a structured light projector, or a combination thereof. Optionally, the light emitting device is used to illuminate the object during capturing of images by the visible light camera and the depth camera (e.g., an IR camera), and information from the depth camera and the visible light camera is used to determine depth maps of different parts of the object captured by the visible light camera. In some implementations, the lighting effects described herein are displayed using parallax information from two cameras (e.g., two visible light cameras) for rearward-facing images, and depth information from a depth camera is used in conjunction with image data from the visible light cameras for forward-facing images (e.g., self-portrait images). In some implementations, the same user interface is used when determining depth information using two visible light cameras and when determining depth information using depth cameras, thereby providing a consistent experience for the user even when using distinct techniques to determine information used in generating a lighting effect. In some embodiments, while displaying the camera user interface with one of the lighting effects applied, the device detects selection of the camera switching affordance and switches from a forward-facing camera (e.g., a depth camera and a visible light camera) to a rearward-facing camera (e.g., two visible light cameras spaced apart from each other) (or vice versa) while maintaining display of user interface controls to apply the lighting effect and replacing the display of the field of view of the forward-facing camera with the field of view of the rearward-facing camera (or vice versa).
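Purely as an illustration of the camera-dependent depth strategy described above (the enum cases and function below are hypothetical, not an actual device API), a Swift sketch of the selection logic might look like this:

```swift
/// Hypothetical sketch: choose a depth-information source based on the active camera,
/// while the same lighting-effect user interface is presented in both cases.
enum ActiveCamera { case forwardFacing, rearwardFacing }
enum DepthSource { case depthCameraWithVisibleLight, parallaxFromTwoVisibleLightCameras }

func depthSource(for camera: ActiveCamera) -> DepthSource {
    switch camera {
    case .forwardFacing:
        // Forward-facing (e.g., self-portrait) images: depth camera combined with a visible light camera.
        return .depthCameraWithVisibleLight
    case .rearwardFacing:
        // Rearward-facing images: parallax between two spaced-apart visible light cameras.
        return .parallaxFromTwoVisibleLightCameras
    }
}
```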
In fig. 6A, device 600 displays messaging user interface 603, which is a user interface of a messaging application. Messaging user interface 603 includes a message area 603-1 for displaying messages communicated between the parties to the message conversation, a message composition area 603-2 for displaying content being composed for communication in the message conversation, and a keyboard area 603-3 for displaying various keyboard interfaces. In the embodiment shown in fig. 6A-6H, the message corresponds to a message conversation with the first recipient 607-1.
In FIG. 6A, the device 600 detects an input 604 (e.g., a tap gesture) on an affordance 606 and, in response, displays an avatar keyboard 605 in the keyboard region 603-3, as shown in FIG. 6B.
Avatar keyboard 605 includes various graphical objects that may be selected for transfer in messaging user interface 603. The avatar keyboard 605 includes an emoticon area 608 that displays a set of emoticons 609 that can be selected for transmission in the messaging user interface 603, and a sticker area 610 that displays a sticker 612 and a sticker application affordance 614 that, when selected, displays a sticker user interface 615 (shown in fig. 6C).
As shown in fig. 6B, the sticker area 610 includes stickers 612 that can be selected for transmission in the messaging user interface 603. The stickers displayed in the sticker area 610 each have an appearance based on the various avatars available at the device 600. These stickers also include gestures or expressions of the avatar upon which the appearance of the sticker is based. For example, the monkey sticker 612-1 is displayed having the appearance of a monkey avatar with a surprised expression, the poop sticker 612-2 is displayed having the appearance of a poop avatar with heart-shaped eyes, and the robot sticker 612-3 is displayed having the appearance of a robot avatar in a neutral pose.
In some embodiments, the device 600 selectively displays various stickers 612 in the sticker area 610 based on a number of factors. For example, in some embodiments, the device 600 selectively displays various stickers based on usage history, such as recently created stickers (e.g., recommended/suggested stickers for recently created avatars) and stickers that are frequently used or recently used by the user. In some embodiments, the device 600 selectively displays different stickers 612 such that various sticker poses are represented in the sticker region 610.
The device 600 also selectively displays various stickers 612 in the sticker area 610 based on the types of avatars available at the device 600. For example, different types of avatars can include avatars based on non-human characters, user-created avatars (e.g., avatars created and/or customized by a user), or predefined avatars (e.g., avatars not created or customized by a user). In the embodiment shown in fig. 6B, the monkey sticker 612-1, poop sticker 612-2, and robot sticker 612-3 each have an appearance based on a predefined avatar that is a non-human character.
The device 600 displays stickers 612 for these avatars that are available at the device 600. Thus, if a particular avatar is not available at device 600, the sticker area 610 does not include a sticker for that avatar. For example, in FIG. 6B, the device does not include any user-created avatar or human character-based avatar, and therefore does not display any stickers for such avatar. As discussed in more detail below, the sticker area 610 displays one or more stickers of an avatar when such avatar is available at the device 600.
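As a loose illustration of the selection behavior described in the preceding paragraphs (the model types, fields, and ordering rule below are assumptions, not the device's actual data model), stickers can be filtered to the avatars that exist at the device and ordered by recency of use:

```swift
import Foundation

/// Illustrative sketch with assumed types: show only stickers whose underlying avatar is
/// available at the device, preferring recently used (or recently created) stickers.
struct Avatar { let id: String; let isUserCreated: Bool }
struct Sticker { let avatarID: String; let pose: String; let lastUsed: Date? }

func stickersToDisplay(allStickers: [Sticker], availableAvatars: [Avatar], limit: Int) -> [Sticker] {
    let availableIDs = Set(availableAvatars.map { $0.id })
    return allStickers
        .filter { availableIDs.contains($0.avatarID) }   // no sticker for an unavailable avatar
        .sorted { ($0.lastUsed ?? .distantPast) > ($1.lastUsed ?? .distantPast) } // recent first
        .prefix(limit)
        .map { $0 }
}
```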
The sticker area 610 also includes a sticker application affordance 614. The sticker application affordance 614 has an appearance that includes representations of various stickers. For example, in FIG. 6B, the sticker application affordance 614 includes representations 614-1 of different stickers. The device 600 selectively displays various sticker representations in the sticker application affordance 614 based on a number of factors. For example, in some embodiments, when a type of avatar is not available at device 600, the device displays a representation of a sticker based on an example avatar of the type that is not available at device 600. For example, in FIG. 6B, a user-created or human character-based avatar is not available at device 600, and device 600 displays representations 614-1 of different user-created and human character-based avatar stickers. In some embodiments, when a new avatar is created, device 600 updates the sticker application affordance 614 to include a representation of a sticker based on the newly created avatar. In some embodiments, device 600 displays representations of stickers of different types of avatars. In some embodiments, the device 600 selectively displays different sticker representations such that various sticker poses are represented in the sticker application affordance 614. In some embodiments, the representations of the stickers are displayed in an animated sequence, with the different representations individually looping over the sticker application affordance 614.
In FIG. 6B, the device 600 detects the input 616 on the sticker application affordance 614 and, in response, replaces the display of the avatar keyboard 605 and composition area 603-2 with a sticker user interface 615 as shown in FIG. 6C.
As shown in fig. 6C, device 600 displays a sticker user interface 615 having a region 618 with representations 622 of multiple sets of stickers and a sticker region 620 having stickers corresponding to a selected one of the sticker representations in region 618. The representations 622 correspond to multiple sets of stickers available at device 600. The user can view different sets of stickers by selecting different representations 622 (e.g., by touching the corresponding representation 622 in the region 618, or by swiping horizontally across the sticker region 620). When a different representation 622 is selected, the device 600 updates the region 618 to indicate the selected representation 622 and updates the sticker region 620 to display stickers corresponding to the selected representation. In fig. 6C, the monkey representation 622-1 is selected in region 618 and monkey stickers 624 are displayed in the sticker region 620. In some embodiments, the stickers 624 are shown with a slight animation, such as smiling, blinking, waving a hand, and so forth. The monkey stickers 624 include various poses, such as the blast head pose shown on the blast head monkey sticker 624-1.
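The relationship between region 618 and sticker region 620 can be sketched as a simple view model; the following Swift fragment is only illustrative and uses assumed names:

```swift
/// Illustrative sketch (assumed types): selecting a representation in region 618 determines
/// which set of stickers the sticker region 620 displays.
struct StickerBrowserModel {
    var stickerSets: [String: [String]]   // representation name -> sticker pose names
    var selected: String

    mutating func select(_ representation: String) {
        guard stickerSets[representation] != nil else { return }
        selected = representation         // region 618 indicates the newly selected representation
    }

    var visibleStickers: [String] {
        stickerSets[selected] ?? []       // sticker region 620 shows the selected set
    }
}
```

For example, selecting the "monkey" key in this sketch would cause `visibleStickers` to return the monkey poses, mirroring the selection of representation 622-1 in fig. 6C.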
In some embodiments, device 600 displays region 618 the first time sticker user interface 615 is displayed, and then hides region 618 (e.g., initially does not display region 618 for subsequent instances of interface 615). The user may cause device 600 to redisplay area 618 by dragging sticker area 620, as shown in FIG. 6L.
Referring now to fig. 6C-6E, region 618 also includes a create affordance 626 that may be selected to create a new avatar. As shown, region 618 does not include any representation of user-created or human-based types of avatars, as no such avatars are currently available at device 600. Thus, the device 600 displays a paddle 628 that extends from the creation affordance 626 and has an animation of the representation 628-1 of the avatar looping over the paddle, as shown by the different representations shown for fig. 6C-6E. The animation provides an indication to the user that no user-created or human-based type of avatar is available at the device 600, and encourages the user to select the paddle 628 to create an avatar.
In FIG. 6E, device 600 detects input 630 on the create affordance 626 and, in response, displays avatar creation user interface 632, as shown in FIG. 6F. Device 600 detects input, generally represented by input 634, in avatar creation user interface 632 to select various avatar features to build/create a new avatar (dome avatar 636), as shown in fig. 6G. In response to the input 638 on the completion affordance 640, the device 600 exits the avatar creation user interface 632 and returns to the messaging user interface 603 in fig. 6H, showing a sticker user interface 615 that is updated to include a representation 622-2 of the selected dome avatar 636 in the area 618 and dome stickers 642 having the appearance of the dome avatar 636 but with a different pose for each of the respective dome stickers 642. The dome stickers 642 include many of the same sticker poses as the monkey stickers shown in fig. 6C. In some embodiments, after creating the dome avatar 636, the new avatar may then be available at the device 600, including in other applications such as a contacts application, a camera application, a media viewing application, and other applications on the device 600. In addition, the dome avatar 636 can subsequently be updated, and updates to the dome avatar 636 are applied wherever the avatar is used, including in other applications.
In FIG. 6H, the device 600 detects an input 644 on an upright thumb dome sticker 642-1, which is a sticker having the appearance of the dome avatar 636 and an upright thumb ("thumbs-up") pose. In some embodiments, selection of the upright thumb dome sticker 642-1 causes the device 600 to add the sticker to a message conversation (e.g., to send to the first recipient 607-1). In some embodiments, selection of the upright thumb dome sticker 642-1 causes the device 600 to display the upright thumb dome sticker 642-1 in the avatar keyboard 605, as shown in FIG. 6I.
Fig. 6I-6V illustrate a messaging user interface 603 for an embodiment in which the message corresponds to a message conversation with a second recipient 607-2. In FIG. 6I, the device 600 displays an avatar keyboard 605 having an upright thumb dome sticker 642-1 displayed in place of the monkey sticker 612-1. In addition, the device 600 updates the display of the decal application affordance 614 to include the representation 614-2 of the dome avatar 636. In some embodiments, representation 614-2 has the appearance of a recently used upright thumb dome sticker 642-1. In some embodiments, representation 614-2 has the appearance of other stickers that may be used for the newly created dome avatar 636.
In FIG. 6I, the device 600 detects an input 646 on the upright thumb dome sticker 642-1 and, in response, displays a sticker preview interface showing a dome sticker preview 650. In some embodiments, the user may perform a tap and hold gesture on the dome sticker preview 650 to generate a peeled-off appearance of the dome sticker preview 650, which may then be dragged to the message area 603-1 to add the dome sticker to the message conversation. In some embodiments, the user may select (e.g., via input 654) the send affordance 652 to add the upright thumb dome sticker 642-1 to the message conversation, as shown in fig. 6K.
In FIG. 6K, the device 600 detects the input 656 on the sticker application affordance 614 and, in response, displays the sticker user interface 615. In some embodiments, the device 600 stops the display of the emoticons 609 and displays stickers (e.g., dome stickers 642 corresponding to the dome avatar 636) in the emoticon area 608.
In fig. 6L, the device 600 displays a sticker user interface 615 having dome stickers 642. In the embodiment shown in fig. 6L, device 600 has previously displayed the sticker user interface (e.g., in fig. 6C) and therefore does not initially display region 618. Additionally, the device 600 has generated a second user-created avatar (e.g., due to receiving a series of inputs to access the avatar creation user interface and interact with the avatar creation user interface in a manner similar to that discussed above with respect to fig. 6E-6G), as will become apparent. In response to the drag input 658, the device scrolls the dome stickers 642 and, in fig. 6M, displays region 618 with representations 622, including a dome representation 622-2 with a selected state and a female representation 622-3 corresponding to a set of female stickers based on the user-created female avatar.
In FIG. 6M, device 600 detects input 660 on the create affordance 626 to initiate a process for creating a boy avatar. The process for creating an avatar for a boy is similar to the process for creating an avatar described above and, for the sake of brevity, is not repeated here. After the device 600 creates the boy avatar, the device displays a sticker user interface 615 as shown in FIG. 6N. FIG. 6N shows a region 618, which is updated to include a boy representation 622-4 with a selected state, and a sticker region 620, which is updated to include boy stickers 662 that include a set of gestures based on the avatar of the new boy.
In FIG. 6N, the device 600 detects a scroll input 664 and, in response, scrolls the sticker region 620 to display an additional boy sticker 662 and an editing affordance 665, as shown in FIG. 6O.
In fig. 6O, the device 600 detects the input 668 on the editing affordance 665 and, in response, displays an avatar editing user interface 670 (similar to the avatar creation user interface 632) that shows a boy avatar 672 and a set of selectable hair style options 674 that can be selected to modify the appearance of the boy avatar 672, as shown in fig. 6P.
In fig. 6P, the device 600 detects an input 676 on the round puff hairstyle option 674-1 and, in response, modifies the boy avatar 672 to have a round puff hairstyle, as shown in fig. 6Q. The device 600 detects the input 678 on the done affordance 680 in fig. 6Q and, in response, exits the avatar editing user interface 670 and displays the messaging user interface 603 with the updated sticker user interface 615, as shown in fig. 6R.
In fig. 6R, the device 600 displays a sticker user interface 615 showing a boy sticker 662 updated with a round puff style. Device 600 detects drag input 682 on handle 684 and, in response, expands sticker user interface 615, as shown in fig. 6S.
Fig. 6S shows additional boy stickers 662 updated with the round puff hairstyle. The device 600 detects the drag input 686 (e.g., a downward drag) and, in response, scrolls the stickers 662 to display additional boy stickers 662, including an upright thumb boy sticker 662-1 and a heart-shaped eye boy sticker 662-2, as shown in fig. 6T.
In FIG. 6T, the sticker user interface 615 shows a region 618 that includes a boy representation 622-4 that is updated to include a round puff style. The device 600 detects an input 688 on the robot representation 622-5 and, in response, selects the robot representation 622-5 and replaces the boy sticker 662 with the robot sticker 690, as shown in FIG. 6U. In some embodiments, in response to one or more horizontal swipe gestures on the sticker region 620, the robot representation 622-5 may be selected and the corresponding robot sticker 690 displayed.
In fig. 6U, the device 600 displays robot stickers 690 having an appearance based on a robot avatar, which (as discussed previously) is a predefined avatar based on a non-human character. In some embodiments, stickers based on such avatars (e.g., predefined avatars or non-human character based avatars) include some stickers that have poses that match the poses of stickers based on user-created avatars or human character based avatars. For example, the robot stickers 690 include various poses, some of which are the same as some of the poses of the boy stickers 662. For example, the heart-shaped eye robot sticker 690-1 has the same pose as the heart-shaped eye boy sticker 662-2 (e.g., both stickers include a smiling facial expression with heart shapes on the eyes). It should be noted that although the robot sticker 690-1 and the boy sticker 662-2 have different appearances (e.g., the robot sticker 690-1 has the appearance of a robot, and the boy sticker 662-2 has the appearance of a boy), both have the same pose. Further, in some embodiments, stickers based on predefined avatars or on avatars of non-human characters optionally exclude certain poses included in stickers based on user-created avatars or on avatars of human characters. For example, the robot stickers 690 do not include a thumbs-up pose. In some embodiments, the excluded sticker poses are those that include human features other than the head (e.g., hands).
In some embodiments, multiple sets of stickers based on a predefined avatar or on an avatar of a non-human character each have stickers with the same or similar poses. For example, all such avatars include heart-shaped eye stickers and exclude the upright thumb sticker. In some embodiments, some stickers have the same pose across different groups of stickers, but are customized for the particular avatar on which the appearance of the sticker is based. For example, the robot stickers 690 include a blast head robot sticker 690-2 that is similar to corresponding poses in other groups of stickers (e.g., fig. 6E shows a blast head monkey sticker 624-1 for a monkey, and fig. 6T shows a blast head sticker for a boy avatar), but has a custom appearance that corresponds to the characteristics of the avatar upon which the sticker appearance is based. For example, the blast head robot sticker 690-2 includes an appearance that shows mechanical parts 692 (such as cogs, bolts, and springs that pop out of the robot head). In some embodiments, other avatars of similar types (e.g., predefined, non-human character based) may include similar poses, but have different customized appearances (e.g., a unicorn with glitter) based specifically on the characteristics of the avatar.
In some embodiments, sets of stickers based on user-created avatars or on human character-based avatars each have stickers with the same poses. For example, the sticker poses shown for the dome stickers 642 are the same as the sticker poses shown for the boy stickers 662 (e.g., the poses are the same, but the appearances differ based on the different avatars).
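By way of illustration only (assumed types and pose names; not the actual sticker data), the pose-sharing and exclusion behavior described in the preceding paragraphs could be modeled as a shared pose list filtered per avatar type:

```swift
/// Sketch: build a sticker pose set for an avatar from a shared list of poses, where
/// predefined / non-human-character avatars omit poses that show body parts other than
/// the head (e.g., hands).
struct Pose { let name: String; let includesHands: Bool }
struct AvatarKind { let name: String; let isUserCreated: Bool }

let sharedPoses = [
    Pose(name: "heart-shaped eyes", includesHands: false),
    Pose(name: "blast head", includesHands: false),
    Pose(name: "upright thumb", includesHands: true),
]

func stickerPoses(for avatar: AvatarKind) -> [Pose] {
    avatar.isUserCreated
        ? sharedPoses                                // user-created avatars receive the full pose set
        : sharedPoses.filter { !$0.includesHands }   // predefined avatars exclude hand poses
}
```

Per-avatar customization of a shared pose (such as the cogs or glitter in the blast head pose) would then apply only to the sticker's appearance, not to the pose list itself.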
In FIG. 6U, the device 600 detects an input 694 on the dome representation 622-2 and, in response, displays the selected dome representation 622-2 in region 618 and dome stickers 642 replacing the robot stickers 690 in the sticker region 620, as shown in FIG. 6V. The dome stickers 642 include the same poses as the boy stickers 662, including an upright thumb dome sticker 642-1 corresponding to the upright thumb boy sticker 662-1 and a heart-shaped eye dome sticker 642-2 corresponding to the heart-shaped eye boy sticker 662-2.
Fig. 7 is a flow diagram illustrating a method for displaying an avatar in a sticker application user interface using an electronic device, in accordance with some embodiments. The method 700 is performed at a device (e.g., 100, 300, 500, 600) having a display and an input device. Some operations in method 700 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 700 provides an intuitive way for displaying an avatar in a sticker application user interface. The method reduces the cognitive burden of the user in displaying the avatar in the sticker application user interface, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling users to more quickly and efficiently display avatars in sticker application user interfaces saves power and increases the interval between battery charges.
An electronic device (e.g., 600) receives (702) a request (e.g., 616, 656) to display a sticker user interface (e.g., 615) (e.g., a single user interface displaying stickers that are selectable for use in an application, such as a messaging application) (e.g., a selection of a user interface object (e.g., an affordance) associated with displaying a sticker user interface) (e.g., a selection of a representation of a set of stickers) (e.g., a gesture on a set of stickers to display representations of a plurality of sets of stickers) via one or more input devices (e.g., 601).
In response to receiving a request to display a sticker user interface, an electronic device (e.g., 600) displays (704), via the display device (e.g., 601), a sticker user interface (e.g., 615) that includes representations (e.g., 622) of sets of stickers based on a user-created avatar (e.g., an avatar that can be created and optionally customized by a user). In some embodiments, the user-created avatar includes customizable (e.g., selectable or configurable) avatar characteristics. In some embodiments, the user-created avatar includes an avatar modeled to represent a human character, and the customizable avatar characteristics generally correspond to physical characteristics of a human. For example, such avatars may include representations of people having various physical, human features or characteristics (e.g., an elderly female with a dark skin color and having long, straight, brown hair). Such an avatar will also include a representation of a person having various non-human characteristics (e.g., cosmetic enhancements, hats, glasses, etc.) that are typically associated with the appearance of a human. User-created avatars do not include avatars that are generated without input from the user to select the avatar characteristics.
In some embodiments, representations of multiple sets of stickers based on user-created avatars are displayed in a first region (e.g., 618) (e.g., a sticker carousel) of a user interface. In some embodiments, the first region also includes one or more representations (e.g., 622-1) of sets of stickers based on an avatar that is not a user-created avatar (e.g., an avatar that cannot be created or customized by a user). In some embodiments, the sticker carousel may scroll (e.g., horizontally) (e.g., in response to a gesture such as a swipe gesture) to display additional representations of multiple sets of stickers and other options displayed in the sticker carousel. In some embodiments, the avatar that cannot be created or customized by the user includes an avatar that is modeled to represent a non-human character. In some embodiments, an avatar modeled to represent a non-human character includes, for example, a humanoid constructed non-human character (e.g., a stylized animal, a stylized robot, or a stylization of a generally inanimate object or a generally non-human object). In some embodiments, such an avatar includes an avatar having customizable (e.g., optional or configurable) avatar characteristics that generally correspond to non-human traits and characteristics. In some embodiments, such an avatar will not include representations of people with various physical, human features or characteristics (e.g., young children with rounded faces and short wave-shaped hair), even though some of the customizable features of the human avatar include non-human characteristics (e.g., cosmetic enhancements, hats, glasses, or other non-human objects that are typically associated with human appearance). Displaying the first area with one or more representations based on sets of stickers that are not user-created avatars reduces the number of inputs to perform the task of locating and selecting stickers displayed in an application. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first region further includes a create user interface object (e.g., 626) (e.g., create affordance) that, when selected, displays a user interface (e.g., 632) for creating a user-created avatar (e.g., a new user-created avatar). Displaying such a create user interface object reduces the amount of input required to access the user interface to perform the technical task of generating an avatar. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after detecting generation of a new user-created avatar, the electronic device displays a representation of a set of stickers (e.g., 622-4) based on the new user-created avatar in the first area.
In some embodiments, displaying the sticker user interface (e.g., 615) further includes displaying the sticker user interface having a first area (e.g., 618) in accordance with a determination that the request to display the sticker user interface is a first received request (e.g., 616) to display the sticker user interface (e.g., the first time the electronic device receives the request to display the sticker user interface). In some embodiments, displaying the sticker user interface further includes, in accordance with a determination that the request to display the sticker user interface is a subsequently received request to display the sticker user interface (e.g., 656) (e.g., the electronic device does not receive the request to display the sticker user interface for the first time), displaying the sticker user interface without the first area (e.g., see the sticker user interface 615 in fig. 6L).
In some embodiments, the electronic device receives a first input (e.g., 658) while displaying a sticker user interface (e.g., see sticker user interface 615 in fig. 6L) that does not have a first area (e.g., the sticker user interface is displayed to show stickers, but does not show representations of multiple sets of stickers). In some embodiments, in response to detecting the first input, in accordance with a determination that the first input satisfies a first set of criteria (e.g., the input includes movement in a downward direction and originates from the displayed sticker), the electronic device displays the first area (e.g., updates the sticker user interface to include the first area) (e.g., see fig. 6M).
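To illustrate the conditional display of the first area described in the two preceding paragraphs, the state handling and names in the following Swift sketch are assumptions made for readability, not the actual implementation:

```swift
/// Sketch: show the first area (representations of sticker sets) only for the first request
/// to display the sticker user interface, and re-show it for a downward drag that starts on
/// a displayed sticker.
struct StickerUIState {
    var hasShownStickerUIBefore = false
    var isFirstAreaVisible = false

    mutating func handleDisplayRequest() {
        isFirstAreaVisible = !hasShownStickerUIBefore   // first request: include the first area
        hasShownStickerUIBefore = true
    }

    mutating func handleDrag(verticalTranslation: Double, startedOnSticker: Bool) {
        // First set of criteria: movement in a downward direction originating from a sticker.
        if startedOnSticker && verticalTranslation > 0 {
            isFirstAreaVisible = true
        }
    }
}
```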
In some embodiments, a first avatar-based set of stickers (e.g., the dome avatar 636) has a first set of sticker poses (e.g., the pose shown in sticker 642) and a second avatar-based set of stickers has a first set of sticker poses (e.g., the pose shown in sticker 662) (e.g., all sets of stickers based on user-created avatars have the same pose and facial expression, but have different appearances based on the particular user-created avatar on which each set of stickers is based). In some embodiments, a respective one of the plurality of sets of stickers based on the user-created avatar is displayed in response to detecting a selection of the respective one of the representations of the plurality of sets of stickers based on the user-created avatar. Displaying a set of stickers in response to detecting selection of a representation of the set of stickers based on the user-created avatar reduces the number of inputs to perform the technical task of generating the stickers. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the sticker user interface (e.g., 615) further includes displaying a representation (e.g., 622-5) of a set of stickers based on a first predefined avatar (e.g., a robot avatar) (e.g., a predefined or pre-existing avatar, an avatar not created by the user). In some embodiments, the avatar that cannot be created or customized by the user includes an avatar that is modeled to represent a non-human character. In some embodiments, an avatar modeled to represent a non-human character includes, for example, a humanoid constructed non-human character (e.g., a stylized animal, a stylized robot, or a stylization of a generally inanimate object or a generally non-human object). In some embodiments, such an avatar includes an avatar having customizable (e.g., optional or configurable) avatar characteristics that generally correspond to non-human traits and characteristics. In some embodiments, such an avatar will not include representations of people with various physical, human features or characteristics (e.g., young children with rounded faces and short wave-shaped hair), even though some of the customizable features of the human avatar include non-human characteristics (e.g., cosmetic enhancements, hats, glasses, or other non-human objects that are typically associated with human appearance). In some embodiments, the set of stickers (e.g., 690) based on the first predefined avatar has a second set of sticker poses (e.g., see stickers 690 in fig. 6U) different from the first set of sticker poses (e.g., see stickers 662 in fig. 6T) (e.g., the set of poses and facial expressions for the sets of stickers for the predefined avatar is different from the set of poses and facial expressions for the sets of stickers for the user-created avatar). In some embodiments, a subset of the poses and facial expressions of the sets of stickers for the predefined avatar is the same as a subset of the poses and facial expressions of the sets of stickers for the user-created avatar (e.g., some of the sticker poses and facial expressions are common to both the sets of stickers for the predefined avatar and the sets of stickers for the user-created avatar). In some embodiments, a subset of the poses and facial expressions of the respective set of stickers for the predefined avatar is the same as a subset of the poses and facial expressions of the other sets of stickers for the predefined avatar (e.g., some of the sticker poses and facial expressions are common to different sets of stickers for the predefined avatar). In some embodiments, in response to detecting selection of a representation of a set of stickers based on the first non-user-created avatar, a set of stickers based on the first non-user-created avatar is displayed.
In some embodiments, a set of stickers (e.g., 690) based on a first predefined avatar (e.g., an avatar representing an animated character not created by a user, such as a unicorn avatar) includes a sticker (e.g., blast head robot sticker 690-2) having a first sticker pose (e.g., a sticker depicting a blast head pose/expression). In some embodiments, displaying the sticker user interface further includes displaying a representation of a set of stickers based on a second predefined avatar (e.g., see stickers 624 in fig. 6C) (e.g., an avatar representing an animated character not created by the user, such as a robot avatar; an avatar that cannot be created or customized by the user). In some embodiments, the avatar that cannot be created or customized by the user includes an avatar that is modeled to represent a non-human character. In some embodiments, an avatar modeled to represent a non-human character includes, for example, a humanoid constructed non-human character (e.g., a stylized animal, a stylized robot, or a stylization of a generally inanimate object or a generally non-human object). In some embodiments, such an avatar includes an avatar having customizable (e.g., optional or configurable) avatar characteristics that generally correspond to non-human traits and characteristics. In some embodiments, such an avatar will not include representations of people with various physical, human features or characteristics (e.g., young children with rounded faces and short wave-shaped hair), even though some of the customizable features of the human avatar include non-human characteristics (e.g., cosmetic enhancements, hats, glasses, or other non-human objects that are typically associated with human appearance). In some embodiments, the set of stickers based on the second predefined avatar includes a sticker having the first sticker pose (see, e.g., blast head monkey sticker 624-1 in fig. 6C). In some embodiments, a sticker having a first sticker pose for a first predefined avatar includes a graphical element (e.g., a mechanical component 692) (e.g., a highlight or cog) corresponding to the first predefined avatar that is not included in a sticker having the first sticker pose for a second predefined avatar. In some embodiments, the plurality of sets of stickers for the predefined avatar include stickers that are unique to a set of stickers and incorporate a set of characteristics that are unique to the predefined avatar. For example, in the case of a "blast head" unicorn sticker, the sticker has an appearance in which the top portion of the unicorn's head is removed and the exploding state of the unicorn's brain is shown, with glitter emanating from the unicorn's brain. The appearance of the glitter in the "blast head" sticker is unique to the unicorn sticker and corresponds to the magical nature of the unicorn avatar. As another example, in the case of a "blast head" robot sticker, the sticker has an appearance in which the top portion of the robot's head is removed and the exploding state of the robot's brain is shown, with cogs popping out of the robot's brain. The appearance of the cogs in the "blast head" sticker is unique to the robot sticker and corresponds to the mechanical properties of the robot avatar.
In some embodiments, the first set of sticker poses includes at least one sticker pose (e.g., the upright thumb sticker 662-1) (e.g., a sticker pose including a hand) that is not included in the second set of sticker poses. In some embodiments, the excluded sticker gestures include gestures that display body parts other than the head. Stickers having such gestures may include, for example, "thumbs up" stickers, "fist up" stickers, "hug" stickers, and the like.
In some embodiments, displaying the sticker user interface further includes displaying a keyboard display area (e.g., 605) that includes a plurality of emoticons (e.g., 609) and representations of a plurality of sets of stickers based on the avatar created by the user (e.g., sticker area 610). In some embodiments, the electronic device detects a selection (e.g., 656) of one of the plurality of sets of sticker representations (e.g., 614-2 in fig. 6K) based on the user-created avatar. In some embodiments, in response to detecting selection of one of the representations of the sets of stickers based on the user-created avatar, the electronic device displays a plurality of stickers (e.g., 642) of the set of stickers based on the user-created avatar in the keyboard display area. In some embodiments, displaying the plurality of stickers in the keyboard display area includes ceasing to display the emoticon. In some embodiments, when multiple stickers are displayed, representations of multiple sets of stickers are displayed in different locations in the keyboard display area. Displaying multiple ones of a set of stickers based on a user-created avatar in a keyboard display area in response to detecting selection of one of the representations of the multiple sets of stickers based on the user-created avatar reduces the number of inputs to perform the technical task of generating stickers for transmission in a messaging application (e.g., by reducing the number of menu options required to locate and select a desired sticker). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
Displaying (704) the sticker user interface (e.g., 615) includes, in accordance with a determination that the user has created a first set of two or more user-created avatars including a first avatar (e.g., a dome avatar 636) (e.g., an avatar created by the user to model males) and a second avatar (e.g., an avatar created by the user to model females), displaying (706) (e.g., simultaneously displaying) a representation (e.g., 622) of the first plurality of sets of stickers (e.g., a static representation of an avatar based on which the corresponding set of stickers is based, such as a representation of the user-created avatar). The representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar (e.g., 622-2) (e.g., a static representation of the first avatar having a static pose and an appearance based on the first avatar that includes features (e.g., hat, sunglasses, hair style/color, skin tone, etc.) used to create the first avatar) and a representation of a set of stickers based on the second avatar (e.g., 622-3) (e.g., a static representation of the second avatar having a static pose and an appearance based on the second avatar that includes features (e.g., hat, sunglasses, hair style/color, skin tone, etc.) used to create the second avatar). Displaying a representation of multiple sets of stickers when a set of avatars has been created reduces the number of inputs to perform the technical task of generating and selecting stickers for transfer in a messaging application (e.g., by reducing the number of menu options required to locate and select a desired sticker). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
Displaying (704) a sticker user interface (e.g., 615) includes, in accordance with a determination that the user has created a second set of two or more user-created avatars including a third avatar (e.g., boy avatar 672) (e.g., an avatar created by the user to model a child) that is not included in the first set of two or more user-created avatars, displaying (708) a representation of the second plurality of sets of stickers that is different from the representation of the first plurality of sets of stickers (e.g., simultaneously). In some embodiments, the representation of the second plurality of sets of stickers includes a representation (e.g., 622-4) of a set of stickers based on a third avatar that is not included in the representation of the first plurality of sets of stickers (e.g., a static representation of the third avatar having a static pose and an appearance based on the third avatar that includes features (e.g., hat, sunglasses, hairstyle/color, skin tone, etc.) for creating the third avatar). In some embodiments, in response to detecting generation of the respective avatars (e.g., the first avatar, the second avatar, and the third avatar), a plurality of sets of stickers based on the respective avatars and corresponding representations of the plurality of sets of stickers are generated (e.g., automatically (e.g., without subsequent user input after creation of the avatars)). In some embodiments, the representations of the first and/or second plurality of sets of stickers include representations that are not user-created avatars. Displaying different representations of multiple sets of stickers when a new avatar has been created reduces the number of inputs to perform the technical task of generating and selecting stickers for transfer in a messaging application (e.g., by reducing the number of menu options required to locate and select a desired sticker). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
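As a rough sketch of the determination described in the two preceding paragraphs (the model type and labels below are hypothetical), the representations shown are derived from whichever avatars the user has created, so creating a third avatar yields a different list of representations than the first two alone:

```swift
/// Sketch: derive the displayed representations of sticker sets from the user-created avatars
/// that currently exist (predefined avatars such as the monkey or robot could be appended).
struct UserCreatedAvatar { let name: String }

func stickerSetRepresentations(for userCreatedAvatars: [UserCreatedAvatar]) -> [String] {
    // One representation per user-created avatar; the list changes automatically as avatars
    // are created, without any further input directed at the sticker user interface.
    userCreatedAvatars.map { "\($0.name) sticker set" }
}
```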
In some embodiments, the electronic device detects a selection (e.g., 694) of a representation of a set of stickers based on the first avatar (e.g., the dome representation 622-2 in fig. 6U) (e.g., a selection of one of the representations of the set of stickers based on the first avatar, the second avatar, or the third avatar). In some embodiments, the representation is selected by default (e.g., upon display of a sticker user interface). In some embodiments, the representation is selected in response to a user input. In some embodiments, in response to detecting selection of a representation of the first avatar-based set of stickers, the electronic device concurrently displays the selected representation (e.g., 622-2) and a plurality of stickers (e.g., 642) of the first avatar-based set of stickers (e.g., graphical objects having an appearance that is avatar-based (e.g., user-created avatar) and has different gestures and facial expressions), the plurality of stickers having an appearance that is based on the first avatar. In some implementations, the sticker corresponds to a selected one of the representations (e.g., the sticker has an appearance based on an avatar associated with the selected representation). When a different representation is selected, the stickers associated with the previously selected representation are replaced with a set of stickers associated with the newly selected representation. In some embodiments, the sticker includes additional features that are displayed to modify the appearance of the avatar in order to convey a particular expression, mood, emotion, or the like. For example, the sticker may include a heart shape above the eye of the avatar to convey love or a tear below the eye of the avatar to convey sadness. In some embodiments, the sticker includes a slight modification to the appearance of the avatar, such as changing a portion of the avatar, while still maintaining an overall recognizable representation of the avatar. An example of one such modification is an "explosive head" sticker, where the sticker is a representation of an avatar with the top portion of the avatar head removed and showing the explosive state of the avatar brain.
In some embodiments, the plurality of stickers in the set of stickers based on the first avatar includes a first sticker (e.g., 642-1) having a first pose (e.g., a thumbs-up pose) and an appearance based on the first avatar and a second sticker (e.g., 642-2) having a second pose (e.g., heart-shaped eyes) different from the first pose and an appearance based on the first avatar (e.g., the dome avatar 636) (e.g., the stickers in the set of stickers have a set of different poses and an appearance based on the first avatar). In some embodiments, upon displaying a plurality of stickers in a set of stickers based on a first avatar, the electronic device detects a selection of a representation (e.g., 622-4) of a set of stickers based on a second avatar. In some embodiments, in response to detecting selection of the representation of the set of stickers based on the second avatar, the electronic device stops displaying a plurality of stickers of the set of stickers based on the first avatar. In some implementations, in response to detecting selection of the representation of the set of stickers based on the second avatar, the electronic device displays a plurality of stickers (e.g., 662) of the set of stickers based on the second avatar. In some embodiments, the second avatar-based set of stickers includes a third sticker (e.g., 662-1) having a first pose (e.g., a thumbs-up pose) and an appearance based on the second avatar (e.g., boy avatar 672) and a fourth sticker (e.g., 662-2) having a second pose (e.g., heart-shaped eyes) and an appearance based on the second avatar (e.g., the first set of stickers having the appearance of the first avatar and the set of poses is replaced with a second set of stickers having the same set of poses but having an appearance based on the second avatar). Displaying stickers having the same set of gestures but different appearances based on corresponding avatars (e.g., user-created avatars) allows users to quickly and easily compose messages to convey known emotions based on gestures, while still respecting the user's personal and artistic preferences for avatar stickers having different appearances. This provides an improved control scheme for generating custom messages that may require fewer inputs to generate custom messages than if a different control scheme (e.g., a control scheme that requires the generation of separate, custom gestures) were used. Furthermore, this type of control may be done in real time during a conversation, such as a text conversation or a video conversation, for example, whereas manual control to build a sticker would have to be done before the conversation begins or after the conversation ends. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, one or more of the plurality of stickers is animated (e.g., the sticker is shown blinking, waving, making a facial expression, etc.).
In some embodiments, after displaying the sticker user interface, the electronic device receives a request to redisplay the sticker user interface (e.g., 638). In some embodiments, in response to receiving the request to redisplay the sticker user interface, the electronic device redisplays the sticker user interface via the display device (e.g., the sticker user interface 615 is redisplayed in fig. 6H). In some embodiments, redisplaying the sticker user interface includes, in accordance with a determination that the user has created a fourth avatar (e.g., 636) that is not included in the first or second set of two or more user-created avatars (e.g., the user has created a new avatar), displaying (e.g., simultaneously) a representation of a third plurality of sets of stickers (e.g., representation 622 shown in fig. 6H). In some embodiments, the representation of the third plurality of sets of stickers includes a representation of a set of stickers (e.g., 622-2) based on a fourth avatar that is not included in the representation of the first or second plurality of sets of stickers (e.g., again, as after a series of inputs are received to create the boy avatar 672, the sticker user interface 615 redisplays in fig. 6N and includes the newly displayed boy representation 622-4 and the boy sticker 662). In some embodiments, displaying the representation of the set of stickers includes simultaneously displaying at least a portion of the set of stickers based on (e.g., having an appearance based on) the fourth avatar. For example, when the sticker UI is redisplayed, a representation of a group of stickers based on the fourth avatar is selected, and at least a portion of the stickers in the group are displayed having a different appearance based on the appearance of the fourth avatar. Redisplaying the sticker user interface to display the sticker after the user has created the avatar reduces the number of inputs to perform the technical task of generating the sticker (e.g., for sending in a messaging conversation). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the representation of the set of stickers based on the first avatar (e.g., 622-2) has the appearance of one of the stickers in the set of stickers based on the first avatar. In some embodiments, the representation of the set of stickers based on the second avatar (e.g., 622-4) has the appearance of one of the stickers in the set of stickers based on the second avatar. In some embodiments, the representation of the set of stickers based on the third avatar has the appearance of one of the stickers in the set of stickers based on the third avatar.
In some embodiments, displaying the sticker user interface further includes displaying an editing user interface object (e.g., 665) (e.g., an editing affordance) that, when selected, displays an editing interface (e.g., 670) for editing the corresponding user-created avatar.
In some embodiments, displaying the sticker user interface further includes displaying a plurality of stickers (e.g., 662) of a set of stickers based on the respective user-created avatar (e.g., 672), wherein the plurality of stickers have an appearance based on a first appearance of the respective user-created avatar (e.g., boy sticker 662 in fig. 6O). In some embodiments, displaying the sticker user interface further includes detecting a series of inputs (e.g., 668, 676, 678) corresponding to a request to edit the corresponding user-created avatar from a first appearance to a second appearance (e.g., from a first hair style to a second hair style) (e.g., including a selection of an editing affordance and a series of inputs interacting with the editing interface to edit the corresponding user-created avatar). In some embodiments, displaying the sticker user interface further includes detecting a request (e.g., 678) to display a plurality of stickers in a set of stickers based on the respective user-created avatar (e.g., exiting the editing interface) (e.g., detecting a selection of a representation of a set of stickers based on the respective user-created avatar). In some embodiments, displaying the sticker user interface further includes, in response to detecting a request to display a plurality of stickers in a set of stickers based on the corresponding user-created avatar, displaying a plurality of stickers in the set of stickers based on the corresponding user-created avatar (e.g., sticker 662 in fig. 6R). In some embodiments, the stickers in a set of stickers have an updated appearance (e.g., the sticker 662 in fig. 6R having a second hairstyle) based on the second appearance of the corresponding user-created avatar (e.g., the stickers in the set of stickers based on the corresponding user-created avatar are changed/updated when the corresponding user-created avatar is changed/updated). In some embodiments, the appearance of the representation of the set of stickers based on the respective user-created avatar is updated when the respective user-created avatar is changed/updated. Automatically updating the appearance of the representation of the sticker after the user has created/updated the avatar reduces the amount of input to perform the technical task of generating the sticker. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, editing the respective user-created avatar using the editing interface (e.g., 670) changes the appearance of the sticker user interface and the respective user-created avatar in a user interface other than the sticker user interface (e.g., the editing interface changes the appearance of the respective user-created avatar throughout the operating system, including instances in which the respective user-created avatar is displayed in a different application, such as a camera application, a video communication application, a messaging application, a media display application, etc.). For example, in camera applications and video communication applications, representations of respective user-created avatars may be displayed in the field of view of the camera. As the appearance of the respective user-created avatar changes in the sticker user interface, the changes to the appearance of the respective user-created avatar are also applied to the representations of the respective user-created avatar in the camera application and the video communication application. In a messaging application, participants in a messaging conversation may be represented using corresponding user-created avatars. As the appearance of the respective user-created avatar changes in the sticker user interface, the change to the appearance of the respective user-created avatar is also applied to the respective user-created avatar in the messaging application's messaging dialog. A media display application, such as a photo viewing application or a video viewing application, may include representations of respective user-created avatars in media items. As the appearance of the respective user-created avatar changes in the sticker user interface, the change to the appearance of the respective user-created avatar is also applied to the representation of the respective user-created avatar in the media item viewed in the media display application. Updating the appearance of the avatar throughout the various user interfaces in response to detecting changes to the avatar in the sticker user interface reduces the amount of input to perform the technical task of generating or updating the avatar for different applications. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
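The passage above does not specify a propagation mechanism; purely as an illustration of one conventional approach, edits could be written to a shared store that posts a notification observed by the sticker user interface and by the user interfaces of other applications (all names below are assumptions made for the sketch):

```swift
import Foundation

/// Sketch of a shared avatar store that broadcasts changes so that every interface showing
/// the avatar (sticker UI, contacts, camera, messaging, media viewing) can refresh.
extension Notification.Name {
    static let avatarDidChange = Notification.Name("avatarDidChange")
}

final class AvatarStore {
    static let shared = AvatarStore()
    private(set) var avatars: [String: Data] = [:]   // avatar name -> serialized appearance

    func update(name: String, appearance: Data) {
        avatars[name] = appearance
        // Observers registered for this notification re-render their representation of the avatar.
        NotificationCenter.default.post(name: .avatarDidChange,
                                        object: self,
                                        userInfo: ["name": name])
    }
}
```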
It should be noted that the details of the process described above with respect to method 700 (e.g., fig. 7) also apply in a similar manner to the methods described below. For example, methods 800, 1000, 1200, 1300, 1500, 1700, and 1800 optionally include one or more characteristics of the various methods described above with reference to method 700. For example, a sticker may be displayed and used in a user interface in a manner similar to that described above. For the sake of brevity, these details are not repeated in the following.
FIG. 8 is a flow diagram illustrating a method for displaying an avatar in an avatar keyboard application user interface using an electronic device, in accordance with some embodiments. The method 800 is performed at a device (e.g., 100, 300, 500, 600) having a display and one or more input devices. Some operations in method 800 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 800 provides an intuitive way for displaying an avatar in an avatar keyboard application user interface. The method reduces the cognitive burden of the user in displaying the avatar in the avatar keyboard application user interface, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to more quickly and efficiently display an avatar in an avatar keyboard application user interface conserves power and increases the interval between battery charges.
An electronic device (e.g., 600) displays (802) a content creation user interface (e.g., 603) (e.g., a document creation user interface or a message composition user interface) (e.g., a single interface screen) via the display device (e.g., 601).
While displaying the content creation user interface (e.g., 603), the electronic device (e.g., 600) receives (804) a request (e.g., 604) to display a first display area (e.g., 605) (e.g., a keyboard display area) (e.g., an emoji keyboard display area) (e.g., a sticker keyboard display area) via one or more input devices. In some embodiments, the first display area includes a plurality of graphical objects (e.g., stickers) (e.g., emoticons) corresponding to predefined content for insertion into the content creation user interface (e.g., 612, 609).
Displaying (804) a first display region (e.g., 605) in response to receiving the request includes: in response to receiving the request, a first display region including a first subset of graphical objects (e.g., 612) (e.g., a sticker) (e.g., a subset of the plurality of graphical objects in the first display region) having an appearance (e.g., having a sticker that is based on an appearance of a respective avatar and that has a different pose and facial expression) based on a set of avatars (e.g., avatars such as avatars modeled to represent human characters, avatars modeled to represent non-human characters, avatars that may be created and/or customized by a user, and avatars that may not be created or customized by a user) available at the electronic device (e.g., 600) is displayed (806) via the display device (e.g., 601). In some embodiments, the sticker includes additional features that are displayed to modify the corresponding avatar in order to convey a particular expression, mood, emotion, or the like. For example, the sticker may include a heart shape above the avatar's eyes to convey love, or a tear below the avatar's eye to convey sadness. In some embodiments, the sticker includes a slight modification to the appearance of the avatar, such as changing a portion of the avatar, while still maintaining an overall recognizable representation of the avatar. An example of one such modification is an "exploding head" sticker, where the sticker is a representation of an avatar with the top portion of the avatar's head removed to show the avatar's brain exploding. In some embodiments, an avatar modeled to represent a human includes customizable (e.g., selectable or configurable) avatar characteristics that generally correspond to physical characteristics of the human. For example, such avatars may include representations of people having various physical, human features or characteristics (e.g., an elderly female with a dark skin color and having long, straight, brown hair). Such an avatar will also include a representation of a person having various non-human characteristics (e.g., cosmetic enhancements, hats, glasses, etc.) that are typically associated with the appearance of a human. In some embodiments, such an avatar will not include anthropomorphic constructs, such as a stylized animal, a stylized robot, or a stylization of a generally inanimate or generally non-human object. In some embodiments, an avatar modeled to represent a non-human character includes, for example, an anthropomorphized non-human character (e.g., a stylized animal, a stylized robot, or a stylization of a generally inanimate object or a generally non-human object). In some embodiments, such an avatar includes an avatar having customizable (e.g., optional or configurable) avatar characteristics that generally correspond to non-human traits and characteristics. In some embodiments, such an avatar will not include representations of people with various physical, human features or characteristics (e.g., young children with rounded faces and short, wavy hair), even though some of the customizable features of the human avatar include non-human characteristics (e.g., cosmetic enhancements, hats, glasses, or other non-human objects that are typically associated with human appearance).
Displaying (806) a first display region (e.g., 605) including a first subset (e.g., 610) of graphical objects (e.g., 612) having appearances based on a set of avatars available at the electronic device includes: in accordance with a determination that the set of avatars includes a first type of avatar (e.g., an avatar that can be created and/or customized by a user of the electronic device) (e.g., an avatar modeled to represent a human character), one (e.g., one or more) of the graphical objects (e.g., 642-1 in fig. 6I) in the first subset having an appearance based on the first type of avatar (e.g., the dome avatar 636) are displayed (808). In some embodiments, when the set of avatars includes one avatar that is customizable, creatable, and/or modeled to represent a human character, the displayed sticker includes one or more stickers that originate from the avatar (e.g., have an avatar-based appearance). In some embodiments, these stickers are referred to as a first type of sticker. In some embodiments, the first type of sticker includes a sticker suggested by the electronic device based on a history of use of the first type of sticker (e.g., a sticker suggested for recent and/or frequent use).
Displaying (806) a first display region (e.g., 605) including a first subset of graphical objects (e.g., 612) having appearances based on a set of avatars available at the electronic device includes: in accordance with a determination that the set of avatars does not include any avatars of the first type, graphical objects (e.g., 612-1, 612-2, 612-3) in the first subset having an appearance based on a second type of avatar (e.g., an avatar that cannot be created and/or customized by a user of the electronic device) (e.g., an avatar modeled to represent a non-human character) that is different from the first type are displayed (810) without displaying one (e.g., one or more) of the graphical objects in the first subset having an appearance based on the first type of avatar. In some embodiments, when the set of avatars does not include avatars that are customizable, creatable, and/or modeled to represent humans, the displayed sticker originates from a non-human character and/or is not user-created or customizable (e.g., has an avatar-based appearance). In some embodiments, these stickers are referred to as a second type of sticker. Displaying only the second type of avatar when the first type of avatar is not available provides feedback to the user that no first type of avatar is currently available at the device and encourages the user to create the first type of avatar. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
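The conditional display described above can be summarized in a short sketch: if the set of avatars contains at least one user-created (first-type) avatar, stickers based on it are included in the first subset; otherwise only stickers based on predefined (second-type) avatars are shown. The types, names, and choice of pose below are illustrative assumptions.

```swift
enum AvatarKind { case userCreated, predefined }

struct Avatar {
    let name: String
    let kind: AvatarKind
}

struct StickerGraphic {
    let basedOn: Avatar
    let pose: String
}

// Build the first subset of graphical objects for the keyboard display area.
func firstSubset(availableAvatars: [Avatar]) -> [StickerGraphic] {
    let userCreated = availableAvatars.filter { $0.kind == .userCreated }
    let predefined = availableAvatars.filter { $0.kind == .predefined }
    if !userCreated.isEmpty {
        // At least one first-type (user-created) avatar exists:
        // include stickers based on it, alongside predefined-avatar stickers.
        return (userCreated + predefined).map { StickerGraphic(basedOn: $0, pose: "smile") }
    } else {
        // No first-type avatars: show only stickers based on predefined avatars.
        return predefined.map { StickerGraphic(basedOn: $0, pose: "smile") }
    }
}

let withoutUserAvatar = firstSubset(availableAvatars: [
    Avatar(name: "monkey", kind: .predefined),
    Avatar(name: "robot", kind: .predefined),
])
print(withoutUserAvatar.map { $0.basedOn.name })   // ["monkey", "robot"]

let withUserAvatar = firstSubset(availableAvatars: [
    Avatar(name: "user avatar 636", kind: .userCreated),
    Avatar(name: "monkey", kind: .predefined),
])
print(withUserAvatar.map { $0.basedOn.name })      // ["user avatar 636", "monkey"]
```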
In some embodiments, the first type of avatar is a user-created avatar (e.g., the dome avatar 636, the boy avatar 672) (e.g., an avatar that can be created and optionally customized by the user), and the second type of avatar is a predefined avatar (e.g., a robot, a monkey, a poop avatar) (e.g., a type of avatar other than the user-created avatar) (e.g., an avatar that cannot be created and/or customized by the user of the electronic device). In some embodiments, the user-created avatar includes customizable (e.g., selectable or configurable) avatar characteristics.
In some embodiments, a first type of avatar is based on a human character (e.g., the dome avatar 636, the boy avatar 672) (e.g., an avatar modeled to represent a human), and a second type of avatar is based on a non-human character (e.g., a monkey, a poop avatar, a robot) (e.g., an avatar modeled to represent an animal such as a pig, cat, dog, or shark; a mythical character such as a unicorn, a dragon, or an alien; an anthropomorphized object such as a robot or poop; and/or a stylized expression such as an emoticon). In some embodiments, an avatar modeled to represent a non-human character includes, for example, an anthropomorphized non-human character (e.g., a stylized animal, a stylized robot, or a stylization of a generally inanimate object or a generally non-human object). In some embodiments, such an avatar includes an avatar having customizable (e.g., optional or configurable) avatar characteristics that generally correspond to non-human traits and characteristics. In some embodiments, such an avatar will not include representations of people with various physical, human features or characteristics (e.g., young children with rounded faces and short, wavy hair), even though some of the customizable features of the human avatar include non-human characteristics (e.g., cosmetic enhancements, hats, glasses, or other non-human objects that are typically associated with human appearance).
In some embodiments, displaying the first display region further includes displaying a sticker user interface object (e.g., 614) (e.g., a sticker affordance). In some embodiments, the electronic device receives input directed to the sticker user interface object (e.g., 616, 656). In some embodiments, in response to receiving an input directed to the sticker user interface object, the electronic device stops displaying the first display region (e.g., 605). In some embodiments, in response to receiving an input directed to the sticker user interface object, the electronic device displays a sticker user interface (e.g., 615) (e.g., a user interface from which a sticker can be selected for insertion into the content creation user interface). In some embodiments, the sticker user interface includes a second plurality of graphical objects (e.g., stickers 624, stickers 642) corresponding to the predefined content for insertion into the content creation user interface. In some embodiments, the sticker user interface is displayed simultaneously with a portion of the content creation user interface (e.g., a message display area). In some embodiments, the sticker user interface replaces the sticker keyboard display area. Displaying a set of graphical objects corresponding to predefined content for insertion into a content creation user interface allows a user to quickly and easily compose messages to express a known emotion based on the predefined content. This provides an improved control scheme for generating messages that may require less input to generate messages than if a different control scheme were used (e.g., a control scheme that requires generation of separate, custom content). Furthermore, this type of control may be done in real time during a conversation, such as a text conversation or a video conversation, whereas manual control to build a graphical object would have to be done before the conversation begins or after the conversation ends.
In some embodiments, displaying the sticker user interface object includes displaying a sticker user interface object (e.g., 614 in fig. 6B) having a first appearance, the sticker user interface object including a plurality of representations (e.g., 614-1, 614-2) of the first type of avatar (e.g., the sticker affordances represent representations of stickers including appearances based on the first type of avatar). In some embodiments, the affordance further includes one or more representations of a second type of avatar. Displaying the sticker user interface object with the plurality of representations of the first type of avatar provides feedback to the user that selection of the sticker user interface object will allow the user to access the sticker of the first type of avatar. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after displaying the sticker user interface object (e.g., 614 in fig. 6B) having the first appearance, the electronic device receives a series of inputs corresponding to a request to create a third avatar of the first type (e.g., 636). In some embodiments, the electronic device receives a request to redisplay the first display region (e.g., 644). In some embodiments, in response to receiving the request to redisplay the first display region, the electronic device displays a sticker user interface object having a second appearance (e.g., 614 in fig. 6I), the sticker user interface object including a representation (e.g., 614-2) of the third avatar of the first type (e.g., redisplaying the keyboard display region with the sticker user interface object, and the sticker user interface object including a representation of a sticker having an appearance based on the created first type of avatar). In some embodiments, the sticker user interface object also includes one or more representations of the first type of avatar. In some embodiments, the sticker user interface object also includes one or more representations of the second type of avatar. Redisplaying the sticker user interface object with a different appearance that is updated to reflect the newly created avatar provides feedback to the user that creating an additional avatar will update the sticker user interface object to display the additional avatar. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the sticker user interface object (e.g., 614) having the first appearance includes displaying an animated sequence of representations of the first type of avatar and the second type of avatar. In some embodiments, the animation includes showing a representation of a first sticker on the sticker user interface object with animation (e.g., smiling, moving, etc.), then replacing the representation of the first sticker with a representation of a second sticker with animation, and so forth. In some embodiments, the representations of the stickers in the animation include a representation of a first type of sticker and a representation of a second type of sticker. Displaying the sticker user interface object with an animation in which representations of avatars are displayed in a loop provides feedback to the user when no such avatar is available at the device and notifies the user that selecting the sticker user interface object will allow the user to create such an avatar. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the sticker user interface (e.g., 615) further includes, in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying a create user interface object (e.g., 626) (e.g., a create affordance) that, when selected, displays a create user interface (e.g., 632) for creating avatars of the first type. In some embodiments, when the set of avatars does not include any avatar of the first type, a plurality of representations (e.g., 622 in fig. 6C) (e.g., 622-1, 622-5) of a plurality of stickers for the avatar of the second type are displayed in the sticker user interface. In some embodiments, one of the representations of the sets of stickers for the second type of avatar is displayed with the selected state and a set of stickers (e.g., 624, 690) of the second type corresponding to the selected representation is displayed.
In some embodiments, displaying the first display area further includes displaying a plurality of emoticons (e.g., 609) (e.g., predefined emoticons in a predefined category or selected predefined emoticons based on previous user activity (e.g., recently used/frequently used)).
In some embodiments, after displaying the graphical objects in the first subset having an appearance based on a second type of avatar different from the first type without displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar, the electronic device receives a series of inputs (e.g., 630, 634, 638) corresponding to a request to create a first avatar of the first type (e.g., detects creation of the first type of avatar (e.g., a user-created avatar)). In some embodiments, in response to receiving the series of inputs, the electronic device creates a first avatar of a first type (e.g., 636) and adds the first avatar to the set of avatars. In some embodiments, after creating the first avatar of the first type, the electronic device receives a request (e.g., 644) to redisplay the first display region (e.g., 605). In some embodiments, in response to receiving a request to redisplay the first display area, the electronic device displays the first display area with a first subset of the graphical objects (e.g., 612). In some embodiments, the first subset of graphical objects includes a first graphical object (e.g., 642-1 in fig. 6I) having an appearance based on the first avatar of the first type (e.g., the keyboard is redisplayed, and the subset of graphical objects now includes a sticker having an appearance based on the newly created avatar). In some embodiments, stickers based on a first type of avatar (e.g., a newly created avatar) are given priority over stickers of a second type. In some embodiments, the keyboard display area includes a sticker suggested by the electronic device based on a priority of the sticker. Displaying the first display area updated with a sticker for the newly created avatar provides feedback informing the user that the sticker can be selected for sending in a message. In addition, this reduces the number of inputs needed to perform the technical task of generating stickers for the newly created avatar. Providing improved feedback and reducing the number of inputs required to perform tasks enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after displaying the graphical object (e.g., 612) in the first subset having an appearance based on the second type of avatar different from the first type, the electronic device receives a series of inputs (e.g., 644) corresponding to use (e.g., sending a sticker, creating a sticker, etc.) of the graphical object (e.g., 642-1) corresponding to the second avatar of the first type (e.g., a sticker having an appearance based on the second avatar of the first type) without displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar. In some embodiments, the electronic device receives a request to redisplay the first display area after receiving a series of inputs corresponding to use of the graphical object corresponding to the second avatar of the first type. In some embodiments, in response to receiving a request to redisplay the first display area, the electronic device displays the first display area with a first subset of the graphical objects including the graphical object (e.g., 642-1) corresponding to the second avatar of the first type (e.g., the keyboard is redisplayed and the subset of the graphical objects now includes the sticker used). In some embodiments, the keyboard display area includes a sticker suggested (e.g., a sticker suggested for recent use and/or frequent use) by the electronic device based on a usage history of the sticker of the first type (or the second type). Displaying the first display area updated with the previously used sticker reduces the number of inputs required to locate and send the sticker in subsequent communications. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
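A simple sketch of how such usage-based suggestions might be ranked is shown below; the blend of frequency and recency is an illustrative assumption rather than the disclosed ranking rule, and the type and identifier names are hypothetical.

```swift
import Foundation

// Hypothetical usage-history model for suggesting recently/frequently used stickers.
struct StickerUsage {
    let stickerID: String
    var useCount: Int
    var lastUsed: Date
}

func suggestedStickers(from history: [StickerUsage], limit: Int = 6) -> [String] {
    history
        .sorted { lhs, rhs in
            // More frequently used stickers rank first; ties go to the most recently used.
            if lhs.useCount != rhs.useCount { return lhs.useCount > rhs.useCount }
            return lhs.lastUsed > rhs.lastUsed
        }
        .prefix(limit)
        .map { $0.stickerID }
}

let history = [
    StickerUsage(stickerID: "avatar-smile", useCount: 5, lastUsed: Date()),
    StickerUsage(stickerID: "monkey-wave", useCount: 5, lastUsed: Date(timeIntervalSinceNow: -3600)),
    StickerUsage(stickerID: "robot-laugh", useCount: 1, lastUsed: Date(timeIntervalSinceNow: -86_400)),
]
print(suggestedStickers(from: history))  // ["avatar-smile", "monkey-wave", "robot-laugh"]
```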
In some implementations, while displaying the first instance of the respective one of the graphical objects having an appearance based on a set of avatars available at the electronic device (e.g., 642-1 in fig. 6I), the electronic device receives an input (e.g., 646) of a first type (e.g., a tap gesture) directed to the first instance of the respective one of the graphical objects. In some embodiments, in response to receiving the first type of input, the electronic device displays a second instance (e.g., 650) of the respective one of the graphical objects (e.g., displays a preview of the respective one of the graphical objects without transmitting a sticker corresponding to the respective one of the graphical objects). In some embodiments, other graphical objects (e.g., emoticons) in the keyboard display area are responsive to a first type of input (e.g., a tap gesture). For example, a flick gesture may be used to select an emoticon. Displaying a second instance of a respective one of the graphical objects in response to receiving the first type of input maintains consistency for interacting with the graphical object and the emoticon displayed on the keyboard. This provides an intuitive interface for interacting with different graphical objects presented in the keyboard display area, which increases familiarity, which in turn enhances operability of the device, and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the second instance (e.g., 650) of the respective one of the graphical objects, the electronic device receives a second input directed to the second instance of the respective one of the graphical objects, wherein the second input includes a first portion that is stationary followed by a second portion that includes movement. In some embodiments, in response to receiving the second input, in accordance with a determination that the second input satisfies the first criteria (e.g., the first portion of the second input is stationary at a location of the second instance of the respective one of the graphical objects for a threshold amount of time and the second portion of the second input includes movement to a location corresponding to a messaging area of the content creation user interface), the electronic device sends a sticker corresponding to the respective one of the graphical objects to the recipient user. For example, in fig. 6J and 6K, when the user touches and holds preview 650 (triggering device 600 to select the sticker for sending) and then drags the contact to message area 603-1, device 600 sends dome sticker 642-1 to second recipient 607-2 in the messaging conversation. In some embodiments, in response to receiving the second input, in accordance with a determination that the second input does not satisfy the first criteria, the electronic device forgoes sending a sticker corresponding to the respective one of the graphical objects to the recipient user. For example, referring to the previous example, if the user does not maintain contact on the preview 650 long enough to trigger the device 600 to select the sticker, the device 600 will not send the sticker 642-1 to the second recipient 607-2 in the messaging conversation even if the user drags the contact to the message area 603-1. Forgoing sending the sticker when the second input does not meet the first criteria reduces the likelihood of accidental transmission of the sticker, which enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
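The first criteria can be sketched as a small check that combines a hold-duration threshold with a drop-target test; the threshold, coordinates, frame values, and type names below are illustrative assumptions, not values taken from the embodiments.

```swift
import Foundation
import CoreGraphics

// Simplified sketch of the send criteria: the gesture must begin with a
// stationary press that lasts at least a threshold, and the subsequent
// movement must end over the message area.
struct StickerSendRecognizer {
    let holdThreshold: TimeInterval = 0.5
    let messageArea = CGRect(x: 0, y: 600, width: 400, height: 200)

    func shouldSend(pressDuration: TimeInterval, liftLocation: CGPoint) -> Bool {
        // First portion: a stationary press long enough to "pick up" the sticker.
        guard pressDuration >= holdThreshold else { return false }
        // Second portion: movement that ends inside the messaging area.
        return messageArea.contains(liftLocation)
    }
}

let recognizer = StickerSendRecognizer()
print(recognizer.shouldSend(pressDuration: 0.8, liftLocation: CGPoint(x: 200, y: 700)))  // true: sticker is sent
print(recognizer.shouldSend(pressDuration: 0.2, liftLocation: CGPoint(x: 200, y: 700)))  // false: hold too short, send is forgone
```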
In some embodiments, displaying the second instance (e.g., 650) of the respective one of the graphical objects further comprises displaying a send user interface object (e.g., 652) (e.g., a send affordance). In some embodiments, the electronic device receives an input directed to the sending user interface object (e.g., 654). In some embodiments, in response to receiving the input directed to the sending user interface object, the electronic device sends a sticker corresponding to a respective one of the graphical objects to the recipient user.
It is noted that the details of the process described above with respect to method 800 (e.g., fig. 8) also apply in a similar manner to the methods described above and below. For example, methods 700, 1000, 1200, 1300, 1500, 1700, and 1800 optionally include one or more characteristics of the various methods described above with reference to method 800. For example, a sticker may be displayed and used in a user interface in a manner similar to that described above. For the sake of brevity, these details are not repeated in the following.
Figs. 9A-9AG illustrate exemplary user interfaces for displaying an avatar in a contacts application user interface, according to some embodiments. The user interfaces in the figures are used to illustrate the processes described below, including the process in FIG. 10.
Fig. 9A illustrates the electronic device 600 displaying (e.g., on the display 601) a contacts user interface 902 that shows a list of contacts that are available in a contacts application on the electronic device. A contacts application is an application for managing contactable users.
In FIG. 9A, the device detects an input 904 (e.g., a tap gesture) and in response, a new contact user interface 906 for creating a new contact for the contact application is displayed in FIG. 9B.
As shown in fig. 9B, new contact user interface 906 includes data field 908 and area 910, which optionally displays a representation of the new contact. The representation may be displayed to represent contacts in various user interfaces, such as, for example, in a messaging application, a contact list, email, or other instance in which a contact may be represented.
Fig. 9C shows information for the new contact entered in the name field of the new contact user interface. As shown in FIG. 9C, "Jane Appleseed" is entered as the first and last name of the new contact. Upon receiving the name information, the device 600 updates the region 910 to display a letter combination (monogram) representation with the initials "JA" corresponding to the new contact name "Jane Appleseed".
In fig. 9C, the device 600 detects an input 912 (e.g., a tap gesture) on the phone number option and displays a keypad for entering the phone number of the new contact Jane Appleseed, as shown in fig. 9D.
In some embodiments, the new contact representation may be edited by selecting region 910 from new contact user interface 906. For example, FIG. 9D illustrates device 600 detecting an input 914 on area 910.
In response to detecting the input 914 on the area 910, the device 600 displays a representation editing user interface 915 for modifying the appearance of the representation of the new contact (e.g., Jane Appleseed), as shown in fig. 9E. It should be appreciated that in some embodiments, the user interface 915 may be used to edit the appearance of a representation of an existing contact (e.g., not a new contact).
The user interface 915 includes a current representation 916. The current representation 916 represents the current appearance of the representation of the contact. Thus, upon exiting the user interface 915 (e.g., in response to selecting the completion affordance 917), representations of contacts as shown in various user interfaces of the device 600 (e.g., such as representations that appear in the area 910 of the new contacts user interface 906) will have the appearance shown in the current representation 916. As discussed in detail below, the current representation 916 is updated in response to a series of user inputs in the user interface 915. However, in response to detecting selection of the cancel affordance 918, such updates may be discarded and the representation of the contact reverts to its previous appearance (e.g., the appearance shown in area 910 prior to display of the user interface 915).
In FIG. 9E, the current representation 916 has the appearance of the letter combination option previously displayed in the new contacts user interface 906. Figs. 9F-9AF illustrate a series of inputs for modifying the current representation 916 according to various embodiments of the present disclosure.
The user interface 915 also includes a plurality of sets of selectable options for modifying the current representation 916. The first set of options 921 includes an option that, when selected, initiates a process for selecting an avatar to set as the current representation 916. The second set of options 922 typically includes options that, when selected, initiate a process for selecting the non-avatar representation as the current representation 916. Examples of non-avatar representations include picture and letter combinations. In some embodiments, the second set of options 922 may include a subset of options determined based on usage history. For example, a subset including previously (e.g., most recently) used options and/or user frequently used options. In such implementations, the subset of options may include avatar options (e.g., most recently used avatar options). In some embodiments, the second set of options 922 may include options recommended to the user based on information available for the new contact. For example, the options may include a picture of the contact, a picture/sticker/avatar sent to or received from the contact, an avatar associated with the contact, or other representation previously used for the contact. In some embodiments, the options are recommended based on information available at the device 600, such as, for example, content from messaging metadata communicated with the contacts.
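A rough sketch of assembling such a second set of options from a monogram, recently used representations, and media items associated with the contact is shown below; the option names, ordering, and row capacity are assumptions made for illustration only.

```swift
// Illustrative assembly of the non-avatar option row described above.
enum RepresentationOption: Equatable {
    case monogram(initials: String)
    case camera
    case photo(assetID: String)
    case recentlyUsed(description: String)
}

func secondSetOfOptions(contactInitials: String,
                        recentlyUsed: [RepresentationOption],
                        photosOfContact: [String],
                        maxCount: Int = 8) -> [RepresentationOption] {
    var options: [RepresentationOption] = [.camera, .monogram(initials: contactInitials)]
    options.append(contentsOf: recentlyUsed)                                  // previously used representations
    options.append(contentsOf: photosOfContact.map { .photo(assetID: $0) })   // media identified as the contact
    return Array(options.prefix(maxCount))                                    // keep the row bounded
}

let options = secondSetOfOptions(
    contactInitials: "JA",
    recentlyUsed: [.recentlyUsed(description: "female avatar, smiling")],
    photosOfContact: ["IMG_0042", "IMG_0107"]
)
print(options.count)  // 5
```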
In fig. 9E, device 600 detects input 924 (e.g., a tap gesture) on monkey avatar option 921-1 and, in response, displays an interface for selecting a gesture option of the selected avatar option. In some embodiments, device 600 may display different types of gesture interfaces. For example, if camera 602 of device 600 is configured to capture depth data (e.g., data for capturing changes in facial gestures of a user), device 600 displays a real-time gesture interface that enables the user to control the displayed avatar to achieve a desired gesture. In embodiments where camera 602 is not configured to capture depth data, device 600 displays a pre-recorded gesture interface that includes a plurality of predefined gestures of the selected avatar option.
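The branch between the two gesture interfaces could be expressed as follows; on iOS, checking for a front TrueDepth camera is one plausible proxy for "configured to capture depth data", and the interface names are assumptions rather than the disclosed implementation.

```swift
import AVFoundation

// Sketch of the branch described above.
enum PoseInterface {
    case live          // avatar tracks the user's face in real time
    case preRecorded   // grid of predefined avatar poses
}

func poseInterfaceForDevice() -> PoseInterface {
    // Presence of a front-facing TrueDepth camera is used here as a stand-in
    // for "the camera is configured to capture depth data".
    let depthCamera = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                              for: .video,
                                              position: .front)
    return depthCamera != nil ? .live : .preRecorded
}

switch poseInterfaceForDevice() {
case .live:
    print("Show real-time gesture interface (avatar mirrors the user's face)")
case .preRecorded:
    print("Show pre-recorded gesture interface (predefined poses)")
}
```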
In fig. 9F, the device 600 displays a real-time gesture interface 926 that includes an avatar 928 having an appearance corresponding to the selected avatar option, a capture affordance 930, and a cancel affordance 932. In fig. 9F, avatar 928 corresponds to monkey avatar option 921-1 selected via input 924. As discussed in more detail below, the avatar 928 tracks movement of the user's face (e.g., captured via the camera 602) and is updated based on changes in the pose of the user's face. In fig. 9F, avatar 928 has a smile pose controlled by the user's face (e.g., the user's face has a similar smile pose).
In FIG. 9F, the device 600 detects the input 934 on the cancel affordance 932 and, in response, returns to the representation editing user interface 915 without updating the current representation 916, as shown in FIG. 9G.
In FIG. 9G, device 600 detects input 936 on female avatar option 921-2 and, in response, displays real-time gesture user interface 926 in FIG. 9H, with avatar 928 having an appearance corresponding to female avatar option 921-2 selected in FIG. 9G. In fig. 9H, the avatar 928 has a posture in which the avatar tongue is extended. In this embodiment, the avatar 928 is controlled by the user's face through changes in facial gestures (e.g., changing facial expressions and moving facial features) detected via the camera 602. Thus, the user may control the display of various gestural options for avatar 928 by moving their facial features in the field of view of camera 602, which causes device 600 to display corresponding changes in the pose of avatar 928.
In fig. 9I, device 600 detects (e.g., via camera 602) that the user's face has a pose that includes a smile and a head tilt, and modifies avatar 928 to assume the same pose. The device 600 detects the input 938 on the capture affordance 930, which causes the device 600 to select the current pose of the avatar 928 (e.g., the pose of the avatar 928 when the capture affordance 930 was selected).
Fig. 9J illustrates an alternative embodiment for selecting a pose of the selected female avatar option 921-2 using the pre-recorded gesture interface 940. If, for example, the camera 602 is not configured to capture depth data (e.g., data for tracking a user's face), the pre-recorded gesture interface 940 is displayed in place of the real-time gesture interface 926 (e.g., in response to the input 936 on the female avatar option 921-2). In the pre-recorded gestures interface 940, the device 600 displays various predefined avatar gestures 942-1 through 942-6. In FIG. 9J, device 600 detects an input 944 that selects the predefined avatar gesture 942-3 corresponding to a smiling, head-tilted pose.
After capturing the avatar gesture in fig. 9I or 9J, device 600 displays a zoom user interface 946 for changing the position and scale of the selected avatar gesture 948, as shown in fig. 9K. In some implementations, avatar gesture 948 moves (e.g., moves within a circular frame) in response to a swipe gesture detected while zoom interface 946 is displayed. In some embodiments, avatar gesture 948 zooms (e.g., zooms in or out) in response to a pinch gesture or a spread gesture detected while zoom interface 946 is displayed. After detecting the input 950 to confirm the position and scale of the selected avatar gesture 948, the device 600 displays the background options 952-1 through 952-6 in fig. 9L, detects selection of the background option of the avatar representation (e.g., via input 954 at background option 952-3), and returns to the representation editing user interface 915 in fig. 9M.
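The position-and-scale step can be sketched as applying clamped pan and pinch adjustments to the selected avatar pose; the clamping limits and type names below are illustrative assumptions.

```swift
import CoreGraphics

// Sketch of the scale-and-position step within the circular frame.
struct CropState {
    var offset = CGPoint.zero
    var scale: CGFloat = 1.0

    // A swipe/pan gesture translates the avatar pose within the frame.
    mutating func applyPan(translation: CGPoint, maxOffset: CGFloat = 80) {
        offset.x = min(max(offset.x + translation.x, -maxOffset), maxOffset)
        offset.y = min(max(offset.y + translation.y, -maxOffset), maxOffset)
    }

    // A pinch (or spread) gesture zooms the avatar pose in or out.
    mutating func applyPinch(magnification: CGFloat,
                             minScale: CGFloat = 0.5, maxScale: CGFloat = 3.0) {
        scale = min(max(scale * magnification, minScale), maxScale)
    }
}

var crop = CropState()
crop.applyPan(translation: CGPoint(x: 30, y: -10))
crop.applyPinch(magnification: 1.4)
print(crop.offset, crop.scale)   // (30.0, -10.0) 1.4
```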
As shown in FIG. 9M, the representation editing user interface 915 is updated based on the selection and customization of the female avatar option 921-2. For example, the appearance of the current representation 916 is updated based on various selections and/or inputs made in fig. 9G-9L. In particular, the current representation 916 changes from the letter combination appearance shown in fig. 9G to the appearance of the female avatar option shown in fig. 9M with a smiling and head tilting pose, positioned and scaled and having the selected background option 952-3. Further, the second set of options 922 is updated to include the letter combination option 922-1 representing the previous appearance of the current representation 916. That is, the letter combination option 922-1 is a selectable non-avatar option that, if selected, updates the current representation 916 to have the appearance of the previously displayed letter combination option (e.g., as shown in FIG. 9G). The remaining non-avatar options in the second set of options 922 are shifted to accommodate the display of the letter combination option 922-1 in the set of options, and the previously displayed non-avatar options are removed from the set (e.g., to avoid pushing the first set of options 921 off of display 601).
In some embodiments, the user may create an avatar to select as the current representation 916. For example, in FIG. 9M, device 600 detects input 956 selecting avatar creation option 921-3 and displays avatar creation user interface 958 in FIG. 9N (similar to avatar creation user interface 632 shown in FIG. 6F). The device 600 detects input, generally represented by input 959, in the avatar creation user interface 958 of fig. 9N to build/create a new avatar 960 as shown in fig. 9O. In response to input 962 on completion affordance 963, the device 600 exits the avatar creation user interface 958 and returns to the representation editing user interface 915 in FIG. 9P, which shows that the current representation 916 is updated to have the appearance of the new avatar 960. In some embodiments, after the new avatar 960 is created, the new avatar may then be available at the electronic device 600 for inclusion in other applications, such as a messaging application, a camera application, a media viewing application, and other applications on the device 600. In addition, the new avatar 960 may be updated, and the updates to the new avatar 960 are applied in these other applications as well.
Further, the first set of options 921 is updated in FIG. 9P to include a new avatar option 921-4, which is a representation of a new avatar 960, and the second set of options 922 is updated to include a female avatar option 922-2, which corresponds to the previous appearance of the current representation 916. As previously described, some of the selectable options in the second set of options 922 are shifted to accommodate the addition of the female avatar option 922-2. In some implementations, previously used presentation options are added to the second set of options 922 at locations in the top row 922a (e.g., adjacent camera option 922-3 or adjacent letter combination option 922-1).
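Maintaining the row of previously used options can be sketched as a small bounded most-recently-used list, where the previous appearance is inserted near the front and the oldest entry is dropped so that the first set of options is not pushed off the display; the capacity used here is an illustrative assumption.

```swift
// Sketch of the recently-used representation row described above.
struct RecentRepresentations {
    private(set) var items: [String] = []
    let capacity: Int

    init(capacity: Int = 6) { self.capacity = capacity }

    mutating func addPreviousAppearance(_ item: String) {
        items.removeAll { $0 == item }      // avoid duplicates
        items.insert(item, at: 0)           // most recent goes first
        if items.count > capacity {
            items.removeLast()              // shift the rest; drop the oldest
        }
    }
}

var recents = RecentRepresentations(capacity: 3)
recents.addPreviousAppearance("monogram JA")
recents.addPreviousAppearance("female avatar, smiling")
recents.addPreviousAppearance("new avatar 960")
recents.addPreviousAppearance("monogram JA")   // reused: moves back to the front
print(recents.items)  // ["monogram JA", "new avatar 960", "female avatar, smiling"]
```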
In FIG. 9P, device 600 detects input 964 on letter combination option 922-1 and displays background options 965 in FIG. 9Q. Upon detecting selection of a background option (e.g., via input 967), device 600 displays in fig. 9R different font options 966 for the letters displayed in the letter combination representation. Upon detecting selection of a font style (e.g., via input 969), in fig. 9S, device 600 again displays the representation editing user interface 915 showing the current representation 916 updated with the selected letter combination option, and a new avatar option 922-4 added to the second set of options 922.
In FIG. 9S, device 600 detects input 968 on photo option 922-5. In some embodiments, photo option 922-5 represents a thumbnail of a photo available at device 600. In some implementations, the photograph is identified (e.g., via automatic image recognition) as a photograph of the contact (e.g., a photograph of Jane Appleseed). In some embodiments, photo option 922-5 is a contact representation of the most recently used contact.
In response to detecting input 968, device 600 displays in fig. 9T a filter user interface 970 with different filter options 972 that can be selected and applied to the selected photo option. The device detects input 974 that selects one of filter options 972 and applies the selected filter option to the photograph, as shown in fig. 9U. The current representation 916 is displayed in fig. 9U with the selected photo option 922-5, but modified with the selected filter option 972. The second set of options 922 is updated with the most recent letter combination option 922-6 representing the previous appearance of the current representation 916.
In FIG. 9U, the device 600 detects an input 975 on the done affordance 917 and exits the presentation editing user interface 915.
In FIG. 9V, the device 600 displays a contact card 976 for the contact (e.g., Jane Appleseed) with a contact representation 978 that has the appearance of the current representation 916 in FIG. 9U.
In FIG. 9V, the device 600 detects an input 980 on the edit affordance 982 and, in response, displays the representation editing user interface 915 in FIG. 9W.
In fig. 9W, device 600 detects an input 984 on camera option 922-3 and, in response, displays a camera user interface 986 showing a representation of image data captured in a field of view of a camera (e.g., camera 602) of device 600. The device 600 detects the input 987 on the capture affordance 988 and captures the image 989 displayed in fig. 9Y with the zoom user interface 990.
In FIG. 9Y, the device 600 detects an input 991 selecting a zoomed (and moved) portion of image 989, which is then shown in FIG. 9Z with various filter options 992. The device 600 detects selection of the unfiltered option 992-1 and, in response, displays the representation editing user interface 915 in FIG. 9AA, setting the unfiltered image captured and selected in FIGS. 9Y-9Z as the current representation 916. The second set of options 922 is updated to include a filtered photo option 922-7 that represents the previous appearance of the current representation 916.
In FIG. 9AA, the device 600 detects input 993 on the "all" affordance 994, an affordance for accessing photo albums available on the device 600. In response, the device 600 displays the album user interface 995 in fig. 9AB. In fig. 9AB and 9AC, the device 600 detects selection of a photo from an album available at the device 600 and displays a representation 996 of the selected photo in the zoom user interface 990 in fig. 9AD.
In response to input 997 in FIG. 9AD, device 600 detects a zoom and a move (e.g., crop) of representation 996, as well as a selection of the zoomed and moved image.
In fig. 9AE, the device 600 displays filter options, detects selection of one of the filter options, and displays the current representation 916 with the image generated in the steps described above and shown in figs. 9AA through 9AE. The second set of options 922 is updated with the previous photo option 922-8.
Device 600 detects input 998 on completion affordance 917 and displays in FIG. 9AG a contact card 976 having a contact representation 978 updated with the appearance of the contact representation generated by the selections made in FIGS. 9AA through 9AE.
FIG. 10 is a flow diagram illustrating a method for displaying an avatar in a contacts application user interface using an electronic device, in accordance with some embodiments. Method 1000 is performed at a device (e.g., 100, 300, 500, 600) having a display and one or more input devices (e.g., 601, 602). Some operations in method 1000 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1000 provides an intuitive way for displaying an avatar in a contacts application user interface. The method reduces the cognitive burden of a user in displaying the avatar in the contact application user interface, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to more quickly and efficiently display an avatar in a contacts application user interface conserves power and increases the interval between battery charges.
An electronic device (e.g., 600) displays (1002), via the display device (e.g., 601), a contactable user-editing user interface (e.g., 915) (e.g., an interface for editing information at the electronic device for contactable users (e.g., for contacting via phone, email, message, etc.); single interface screen). In some embodiments, the contactable user-editing user interface includes (e.g., concurrently includes) one or more presentation options (e.g., 921) of the contactable user (e.g., a user interface object (affordance)), including an avatar presentation option (e.g., 921-2) (e.g., the avatar presentation option is a user interface object (e.g., affordance) that, when selected, initiates a process for selecting an avatar to use as a presentation of the contactable user). In some embodiments, the avatar representation options have the appearance of an avatar (e.g., avatars such as, for example, avatars modeled to represent human characters, avatars modeled to represent non-human characters, avatars that can be created and/or customized by a user, and avatars that cannot be created or customized by a user). In some embodiments, an avatar modeled to represent a human includes customizable (e.g., selectable or configurable) avatar characteristics that generally correspond to physical characteristics of the human. For example, such avatars may include representations of people having various physical, human features or characteristics (e.g., an elderly female with a dark skin color and having long, straight, brown hair). Such an avatar will also include a representation of a person having various non-human characteristics (e.g., cosmetic enhancements, hats, glasses, etc.) that are typically associated with the appearance of a human. In some embodiments, such an avatar will not include anthropomorphic constructs, such as a stylized animal, a stylized robot, or a stylization of a generally inanimate or generally non-human object. In some embodiments, an avatar modeled to represent a non-human character includes, for example, an anthropomorphized non-human character (e.g., a stylized animal, a stylized robot, or a stylization of a generally inanimate object or a generally non-human object). In some embodiments, such an avatar includes an avatar having customizable (e.g., optional or configurable) avatar characteristics that generally correspond to non-human traits and characteristics. In some embodiments, such an avatar will not include representations of people with various physical, human features or characteristics (e.g., young children with rounded faces and short, wavy hair), even though some of the customizable features of the human avatar include non-human characteristics (e.g., cosmetic enhancements, hats, glasses, or other non-human objects that are typically associated with human appearance).
In some embodiments, the contactable user-editing user interface (e.g., 915) also includes a first representation of the contactable user (e.g., 916) (e.g., an image, letter combination, or other symbol that provides a visual association to the contactable user). In some embodiments, a representation of the contactable user is displayed in other user interfaces (e.g., in a phone application UI, in a messaging application UI, etc.) to represent the contactable user (typically in a small area on the screen). In some embodiments, the first representation of the contactable user is replaced with an avatar (e.g., 921-2) selected for use as the representation of the contactable user in the contactable user interface.
In some embodiments, the one or more presentation options include a non-avatar option (e.g., 922-1, 922-3, 922-5, 922-6, 922-7, 922-8) (e.g., a contactable user presentation option that does not correspond to an avatar) (e.g., the non-avatar option is associated with a photo, a letter combination, or other option that is not for selecting an avatar to use as a presentation of a contactable user in a contactable user interface). In some embodiments, the electronic device (e.g., 600) detects selection of a non-avatar option (e.g., 964, 968, 984, 993) via the one or more input devices (e.g., 601). In some embodiments, in response to detecting selection of the non-avatar option, the electronic device initiates a process for selecting a representation option other than an avatar (e.g., a photo, a letter combination, etc.) to use as a representation of the contactable user in the contactable user interface.
In some embodiments, the one or more presentation options include a plurality of options (e.g., 922) selected based on information about the contactable user.
In some embodiments, the plurality of options selected based on the information about the contactable users include representations of the most recently used contactable users (e.g., 922-1, 922-2, 922-4, 922-6, 922-7, 922-8) (e.g., representations of contactable users previously used within a predetermined amount of time or a predetermined number of instances of selecting a representation of a contactable user). In some embodiments, after a representation of a contactable user is selected, the representation is added to the set of most recently used representations of contactable users. Adding the most recently selected representation of the contactable user to the collection of most recently used representations of the contactable user reduces the amount of input required to subsequently use the most recently selected representation (e.g., reduces the input required to generate or access the representation). Reducing the number of inputs enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the plurality of options selected based on the information about the contactable user includes media items (e.g., 922-5) available at the electronic device that are identified as being associated with the contactable user (e.g., photos of the contactable user). In some embodiments, the plurality of options selected based on the information about the contactable user include media items available at the electronic device that are identified as being associated with the contactable user and that meet certain quality criteria (e.g., the photograph primarily captures the contactable user, the photograph is in focus, etc.). In some implementations, the media items correspond to contactable users. For example, the media items include photos of the user. As another example, the media item was previously (e.g., most recently) sent to or received from the contactable user. Displaying recently communicated media items for potential use as representations of contactable users reduces the amount of input required for subsequent use of the media items (e.g., reduces the input required to generate or access the representations). Reducing the number of inputs enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the information of the contactable user includes information from a messaging communication session with the contactable user. In some embodiments, the items include stickers, photos, or other content including messaging metadata from communications to and from the contactable user.
In some embodiments, the one or more presentation options include letter combination presentation options (e.g., 922-1, 922-6) (e.g., a representation of the contactable user having an abbreviation corresponding to a name (e.g., first name, last name, middle name) associated with the contactable user).
In some embodiments, the one or more presentation options include a media item option (e.g., 922-5, 922-7, 922-8) (e.g., a photograph associated with the contactable user (e.g., a photograph of the contactable user) is selected from a set of photographs associated with the contactable user (e.g., a set of photographs of the contactable user)).
In some implementations, upon detecting selection of a media item option, the electronic device (e.g., 600) displays, via the display device (e.g., 601), a plurality of filter options for applying a filter effect to the media item associated with the selected media item option. In some implementations, the filter effect is applied to the media item by superimposing the filter effect on the media item. In some implementations, the filter effect applies changes to both the background in the media item and any applied visual effects (e.g., avatars, stickers, etc.) that may be included in the media item. In some embodiments, the filters change the appearance of the media items (e.g., using comic filters, sketch filters, black and white filters, grayscale filters, etc.). In some embodiments, the filter is a filter that reduces the authenticity of the media item (e.g., a sketch filter or a comic filter). In some implementations, the filter is a filter that reduces the 3D effect (e.g., planarization) of the media item. Displaying the filter options for modifying the media item option after selecting the media item option reduces the number of inputs that would otherwise be required to customize the selected media item under a different control scheme (e.g., a control scheme that requires navigating to a customization user interface and selecting different controls for displaying and modifying the selected media item). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
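Applying a selected filter option to the media item could, for example, use Core Image; the specific filter named below (CIComicEffect) is a built-in Core Image filter chosen for illustration and is not stated by the embodiments to be the filter actually used.

```swift
import CoreImage

// Sketch of applying a realism-reducing filter to the selected media item.
func applyFilter(named filterName: String, to image: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: filterName) else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    return filter.outputImage
}

// A solid-color stand-in for the selected photo, cropped to a finite extent.
let source = CIImage(color: .gray).cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))
if applyFilter(named: "CIComicEffect", to: source) != nil {
    print("Comic-style filter applied to the media item")
}
```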
The electronic device detects (1004) a selection (e.g., 936) of a avatar representation option (e.g., 921-2) via the one or more input devices (e.g., 601).
In response to detecting selection of the avatar representation option (e.g., 921-2), the electronic device (e.g., 600) initiates (1006) a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface.
As part of the process for selecting an avatar to use as a representation of the contactable user in the contactable user interface, the electronic device receives (1008) a sequence of one or more inputs (e.g., 936, 938, 944, 950, 954, 956, 959, 962, the image data control avatar 928 in fig. 9H) via the one or more input devices (e.g., 601, 602), the sequence corresponding to a simulated three-dimensional avatar (e.g., 928, 916, 960 in fig. 9M, 916 in fig. 9P).
In some embodiments, in response to selection of the simulated three-dimensional avatar, the electronic device (e.g., 600) displays (1010), via the display device (e.g., 601), a posing user interface (e.g., 926, 940) including one or more controls (e.g., 930; 942-1 through 942-6) for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses (e.g., a plurality of user interface objects (e.g., affordances) corresponding to different predefined poses; e.g., a capture affordance for capturing the pose using a camera of the electronic device). Displaying a posing interface for selecting a pose of the avatar from a plurality of different poses after the avatar is selected reduces the number of inputs that would be required to customize the selected avatar using a different control scheme (e.g., a control scheme that requires navigating to a customization user interface and selecting different controls for displaying and modifying the selected avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the one or more controls include a first pose user interface object (e.g., 942-1) (e.g., a first pose affordance) corresponding to a first predefined pose and a second pose user interface object (e.g., 942-2) (e.g., a second pose affordance) corresponding to a second predefined pose different from the first predefined pose. In some embodiments, in response to detecting selection of one of the pose affordances, the simulated three-dimensional avatar is set (e.g., displayed) to have the pose corresponding to the selected pose affordance.
In some embodiments, the one or more input devices include a camera (e.g., 602). In some embodiments, the one or more controls include a capture user interface object (e.g., 630) (e.g., a capture affordance) that, when selected, selects a pose of the simulated three-dimensional avatar based on a facial pose detected in a field of view of the camera at the time the capture user interface object is selected (e.g., see FIGS. 9H and 9I). In some embodiments, displaying the posing user interface includes displaying the capture affordance and a simulated three-dimensional avatar that reacts to changes in facial pose detected in the field of view of the camera (e.g., different detected facial poses correspond to a plurality of different poses from which the pose assigned to the three-dimensional avatar may be selected). When the capture affordance is selected, the electronic device selects the pose of the simulated three-dimensional avatar at the time the capture affordance is selected. Displaying a capture user interface object that selects a pose of the avatar based on a face detected in the field of view of a camera provides a control scheme for composing a representation of a contactable user on a display of an electronic device, in which the system detects and processes input in the form of changes to the user's facial features (and the magnitude and/or direction of those changes) and, through an iterative feedback loop, provides the desired output in the form of the avatar's appearance (e.g., a modified pose), while eliminating the need for manual manipulation of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose the avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, this control scheme may require fewer inputs to generate or control the pose of the avatar than would a different animation control scheme (e.g., a control scheme that requires manipulation of separate control points for each frame of an animated sequence). Furthermore, this type of control may be done in real time during a conversation, such as a text conversation or a video conversation, whereas manual animation control of the avatar would have to be done before the conversation begins or after it ends.
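By way of a non-limiting illustration, the following sketch (hypothetical Swift names and simplified pose parameters, not taken from this application) models the capture behavior described above: the avatar mirrors the facial pose reported by the camera, and selecting the capture control freezes whatever pose is current at that moment:

```swift
// Minimal sketch with hypothetical types: while the posing interface is shown, the
// avatar mirrors the facial pose reported by the camera; selecting the capture
// control freezes the pose that is current at that moment.
struct FacialPose {
    var smile: Double      // simplified pose parameters for illustration only
    var headTilt: Double
}

final class PosingSession {
    private(set) var livePose = FacialPose(smile: 0, headTilt: 0)

    // Called repeatedly as changes in the user's facial pose are detected.
    func faceDidChange(to pose: FacialPose) {
        livePose = pose
    }

    // Called when the capture user interface object is selected; the returned pose
    // is the one assigned to the simulated three-dimensional avatar.
    func capturePose() -> FacialPose {
        livePose
    }
}
```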
In some embodiments, after a pose of the simulated three-dimensional avatar is selected from the plurality of different poses (e.g., 938 in FIG. 9I; 944 in FIG. 9J), the electronic device sets the simulated three-dimensional avatar having the selected pose as the representation of the contactable user (e.g., 916 in FIG. 9M) (e.g., displays the representation of the contactable user with the appearance of the simulated three-dimensional avatar having the selected pose; associates the simulated three-dimensional avatar in the selected pose with the contactable user such that the three-dimensional avatar in the selected pose is used to represent the contactable user).
In some embodiments, displaying the posing user interface (e.g., 926) includes, in accordance with a determination that a first avatar (e.g., 921-1) is selected as the simulated three-dimensional avatar (e.g., an avatar affordance corresponding to the first avatar is selected), displaying at least one representation of the first avatar (e.g., 928 in FIG. 9F) in the posing user interface (e.g., displaying a representation of the first avatar having at least one selectable pose). In some embodiments, displaying the posing user interface includes, in accordance with a determination that a second avatar (e.g., 921-2) is selected as the simulated three-dimensional avatar (e.g., an avatar affordance corresponding to the second avatar is selected), displaying at least one representation of the second avatar (e.g., 928 in FIG. 9H) in the posing user interface (e.g., displaying a representation of the second avatar having at least one selectable pose) (e.g., not displaying a representation of the first avatar). Displaying the representation of the second avatar with a selectable pose, without displaying a representation of the first avatar, provides the user with visual feedback that pose changes affect the appearance of the second avatar rather than the first avatar. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, prior to displaying the contactable user editing user interface (e.g., 915), the electronic device (e.g., 600) detects a series of inputs (e.g., 956) corresponding to a request to create a first user-created avatar (e.g., a series of inputs for creating the first user-created avatar, detected, for example, in user interfaces of different applications (e.g., avatar creation user interface 632 in FIG. 6F, avatar editing user interface 11002 in FIGS. 11A-11AD)). In some implementations, the electronic device receives a request (e.g., 914) to display the contactable user editing user interface (e.g., 962). In some embodiments, in response to receiving the request to display the contactable user editing user interface, the electronic device displays a contactable user editing user interface that includes the first user-created avatar (e.g., 921-1, 921-4). Displaying the contactable user editing user interface including the first user-created avatar after the user has created/updated the avatar reduces the number of inputs needed to perform the technical task of generating an avatar to use as the representation of the contactable user. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the sequence of one or more inputs corresponding to selection of the simulated three-dimensional avatar includes an input corresponding to selection (e.g., 936) of a first user-created avatar (e.g., 921-4, 921-2; e.g., option 921-3 adjacent to the girl avatar in FIG. 9G; e.g., female avatar option 921-2 in FIG. 9G) from a set of user-created avatars.
In some embodiments, the sequence of one or more inputs corresponding to selection of the simulated three-dimensional avatar includes a set of inputs (e.g., 956, 959, 962) corresponding to creating a new avatar (e.g., creating an avatar in response to detecting a series of user inputs directed to the avatar creation user interface). In some embodiments, the new avatar is created after the contactable user-editing user interface is displayed (e.g., when the contactable user-editing user interface display is displayed, a series of inputs are received to access the avatar-creating user interface and create the new avatar). In some embodiments, the newly created avatar is selected for use as a simulated three-dimensional avatar. Creating a new avatar in a contactable user editing user interface reduces the amount of input to perform the technical task of generating a representation of a contactable user. This provides an improved control scheme for generating a custom representation that may require less input to generate a custom representation than if a different control scheme were used (e.g., a control scheme that required navigation to a different application to create and customize an avatar, which may then be loaded into a contactable user-editing user interface for selection). Furthermore, this type of control may be done in real time during a conversation, such as a text conversation or a video conversation, for example, whereas manual control to build a sticker would have to be done before the conversation begins or after the conversation ends. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after a pose of the simulated three-dimensional avatar is selected from the plurality of different poses, the electronic device (e.g., 600) displays, via the display device (e.g., 601), a background option (e.g., 952-3 in FIG. 9L) that, when selected (e.g., 954), changes the appearance (e.g., color, shape, and/or texture) of a background region of the representation of the contactable user (e.g., see updated representation 916 in FIG. 9M).
In some embodiments, displaying the posing user interface including the one or more controls includes, in accordance with a determination that the one or more input devices (e.g., 602) include a depth camera sensor (e.g., depth camera sensor 175 in FIG. 1A) (e.g., a depth camera), displaying, via the display device (e.g., 601), a simulated three-dimensional avatar (e.g., avatar 928 in FIGS. 9H and 9I) having a dynamic appearance, wherein the simulated three-dimensional avatar changes pose in response to changes in facial pose detected in a field of view of the depth camera sensor (e.g., the simulated three-dimensional avatar mirrors changes in facial pose detected with the depth camera). In some embodiments, the one or more controls are displayed with a capture affordance that, when selected, captures a pose of the simulated three-dimensional avatar based on the pose of the face detected in the field of view of the depth camera at the time the capture affordance is selected. In some embodiments, displaying the posing user interface including the one or more controls includes, in accordance with a determination that the one or more input devices do not include a depth camera sensor, displaying, via the display device, a third pose user interface object (e.g., 942-3) (e.g., a third pose affordance) (e.g., a first pose affordance) corresponding to a third predefined pose (e.g., a first predefined pose) and a fourth pose user interface object (e.g., 942-4) (e.g., a fourth pose affordance) (e.g., a second pose affordance) corresponding to a fourth predefined pose (e.g., a second predefined pose) different from the third predefined pose (e.g., the one or more controls are displayed as a plurality of affordances, each having the appearance of the simulated three-dimensional avatar in one of a plurality of predefined poses). In some embodiments, in response to detecting selection of one of the pose affordances, the simulated three-dimensional avatar is set to the pose corresponding to the selected pose affordance. Displaying a posing user interface including a third pose user interface object corresponding to a third predefined pose and a fourth pose user interface object corresponding to a fourth predefined pose provides a variety of selectable pose options, which reduces the number of inputs for selecting a pose when performing the technical task of creating a representation of the contactable user. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
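By way of a non-limiting illustration, the following sketch (hypothetical Swift names; the predefined pose labels are placeholders, not the poses shown in the figures) models the branching described above between a live, face-tracking avatar with a capture control and static predefined-pose affordances:

```swift
// Minimal sketch with hypothetical names: the posing controls depend on whether the
// device's input devices include a depth camera sensor.
enum PosingControls {
    case liveAvatarWithCapture               // avatar tracks the detected face; capture freezes the pose
    case predefinedPoseAffordances([String]) // one static affordance per predefined pose
}

func posingControls(hasDepthCameraSensor: Bool) -> PosingControls {
    if hasDepthCameraSensor {
        return .liveAvatarWithCapture
    } else {
        // Placeholder pose identifiers; any set of predefined poses could be offered here.
        return .predefinedPoseAffordances(["pose-1", "pose-2", "pose-3", "pose-4"])
    }
}
```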
In some embodiments, the depth camera captures image data corresponding to depth data (e.g., image data including data captured by the visible light camera and the depth camera) (e.g., image data including a depth aspect of the captured image or video (e.g., depth data independent of the RGB data)), the depth data including depth data for an object positioned in the depth camera's field of view (e.g., information about the relative depth position of one or more portions of the object relative to other portions of the object and/or to other objects within the field of view of the one or more cameras). In some embodiments, the image data includes at least two components: an RGB component that encodes the visual characteristics of the captured image, and depth data that encodes information about the relative spacing of elements within the captured image (e.g., the depth data encodes that the user is in the foreground and that background elements, such as a tree behind the user, are in the background). In some embodiments, the image data includes depth data without an RGB component. In some implementations, the depth data is a depth map. In some implementations, a depth map (e.g., a depth map image) includes information (e.g., values) related to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of the depth map, each depth pixel defines the position along the Z-axis of the viewpoint at which its corresponding two-dimensional pixel is located. In some implementations, the depth map is composed of pixels, where each pixel is defined by a value (e.g., 0 to 255). For example, a value of "0" represents a pixel located farthest from the viewpoint (e.g., camera) in a "three-dimensional" scene, and a value of "255" represents a pixel located closest to the viewpoint in the "three-dimensional" scene. In other examples, the depth map represents the distance between an object in the scene and the plane of the viewpoint. In some implementations, the depth map includes information about the relative depths of various features of an object of interest in the field of view of the depth camera (e.g., the relative depths of the eyes, nose, mouth, and ears of the user's face). In some embodiments, the depth map includes information enabling the device to determine the contour of the object of interest in the z-direction. In some implementations, the depth data has a first depth component that includes a representation of an object in the camera display area (e.g., a first portion of the depth data that encodes the spatial location of the object in the camera display area; a plurality of depth pixels that form a discrete portion of the depth map, such as the foreground or a particular object). In some implementations, the depth data has a second depth component, separate from the first depth component, that includes a representation of the background in the camera display area (e.g., a second portion of the depth data that encodes the spatial location of the background in the camera display area; a plurality of depth pixels, such as the background, that form a discrete portion of the depth map). In some implementations, the first depth component and the second depth component are used to determine a spatial relationship between the object in the camera display area and the background in the camera display area. This spatial relationship can be used to distinguish the object from the background.
This differentiation may be exploited, for example, to apply different visual effects (e.g., visual effects with depth components) to the object and the background. In some implementations, all regions of the image data that do not correspond to the first depth component (e.g., regions of the image data that are beyond the range of the depth camera) are segmented out of (e.g., excluded from) the depth map. In some implementations, the depth data is in the form of a depth map or a depth mask.
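By way of a non-limiting illustration, the following sketch (hypothetical Swift names; the threshold value is an assumption for illustration only) shows how a depth map of the kind described above, with one value per pixel from 0 (farthest) to 255 (closest), can be used to separate a foreground subject from the background:

```swift
// Minimal sketch: a depth map holds one value per pixel (0...255), with larger
// values closer to the viewpoint. Pixels at or above an assumed threshold are
// treated as the foreground component; the rest are background and can be
// excluded or given a different visual effect.
struct DepthMap {
    let width: Int
    let height: Int
    let values: [UInt8]   // row-major, width * height entries

    func value(x: Int, y: Int) -> UInt8 {
        values[y * width + x]
    }
}

func foregroundMask(of map: DepthMap, threshold: UInt8 = 128) -> [Bool] {
    // true where the pixel belongs to the foreground subject, false for background
    map.values.map { $0 >= threshold }
}
```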
It is noted that the details of the process described above with respect to method 1000 (e.g., fig. 10) also apply in a similar manner to the methods described above and below. For example, methods 700, 800, 1200, 1300, 1500, 1700, and 1800 optionally include one or more features of the various methods described above with reference to method 1000. For example, avatars may be displayed and used in a user interface in a manner similar to that described above. For the sake of brevity, these details are not repeated in the following.
Figs. 11A-11AD illustrate exemplary user interfaces for displaying an avatar in an avatar-editing application user interface, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figs. 12 and 13.
Fig. 11A illustrates the electronic device 600 displaying (e.g., on the display 601) an avatar editing user interface 11002 for editing features of an avatar 11005. The avatar editing user interface 11002 is similar to the avatar creation user interface 603, avatar editing user interface 670, and avatar creation user interface 958 shown in fig. 6F, 6P, and 9N, respectively. Details of editing the avatar 11005 using the avatar editing user interface 11002 are provided below. Additional details for editing an avatar in a similar editing user interface are provided in U.S. patent application serial No. 16/1162221, which is hereby incorporated by reference herein for all purposes.
Avatar editing user interface 11002 includes an avatar display area 11004 with an avatar 11005 and selectable avatar feature menu options 11006. The avatar editing user interface 11002 also includes an avatar option area 11008 that includes various feature options that may be selected to change features of the avatar 11005. In FIG. 11A, lip menu option 11006-1 is selected and avatar option area 11008 displays selectable lip options 11010. The avatar 11005 has the appearance of a face with no skin tone or hair selected, and with facial wrinkles 11007 and eyebrows 11009. The avatar 11005 also has a mouth 11014 with lips corresponding to the selected lip option 11010-2.
Each feature option (e.g., lip option 11010) has an appearance that represents the potential appearance of the avatar (e.g., avatar 11005) if the corresponding feature option is selected. When a feature option is selected, the appearance of the feature options may be dynamically updated (e.g., in real time).
The selectable feature options correspond to options for modifying corresponding characteristics of an avatar feature (e.g., the avatar lip feature shown in FIG. 11A). When a feature option (e.g., thick lip option 11010-1) is selected, a corresponding value (e.g., thick) is assigned to the characteristic (e.g., lip shape/size), and the changed characteristic is then reflected in the displayed avatar 11005 and in any other displayed feature options (e.g., in avatar option region 11008) that include a representation of that characteristic. To continue the current example, in response to detecting selection of thick lip option 11010-1, device 600 changes the lips of avatar 11005 to thick lips, and any displayed feature options that show lips are also updated to show thick lips.
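By way of a non-limiting illustration, the following sketch (hypothetical Swift names, not taken from this application) models this single-source-of-truth behavior: selecting an option writes one characteristic value into one avatar model, and every view that draws that characteristic re-renders from it:

```swift
// Minimal sketch with hypothetical names: selecting a feature option updates one
// characteristic in a single avatar model; the avatar view and every option
// thumbnail that shows that characteristic re-render from the same model.
struct AvatarModel {
    var characteristics: [String: String] = [:]   // e.g. "lipShape": "thick"
}

final class AvatarEditor {
    private(set) var avatar = AvatarModel()
    var onChange: ((AvatarModel) -> Void)?         // avatar view and thumbnails observe this

    func select(_ value: String, forCharacteristic key: String) {
        avatar.characteristics[key] = value
        onChange?(avatar)                          // all previews refresh from one source of truth
    }
}

// Selecting the thick lip option updates the avatar and any displayed feature
// options that include lips.
let editor = AvatarEditor()
editor.onChange = { model in print("re-render with", model.characteristics) }
editor.select("thick", forCharacteristic: "lipShape")
```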
In fig. 11A, device 600 detects an input 11012 (e.g., a tap gesture) on mouth menu option 11006-2 and, in response, updates avatar display area 11004 to display an updated avatar feature menu option 11006 (e.g., repositioned with selected mouth menu option 11006-2) and updates avatar option area 11008 to display various feature options for modifying characteristics of avatar mouth 11014, as shown in fig. 11B.
In FIG. 11B, device 600 shows an avatar option region 11008 with multiple sets of feature options for modifying different characteristics of an avatar mouth 11014, shown on avatar 11005 with neutral pose 11014-1. In some implementations, the avatar 11005 is responsive to detecting a change in pose of the user's face positioned in the field of view of the camera 602. In the embodiments disclosed herein, avatar 11005 has a neutral pose because the user maintains a neutral pose while interacting with avatar-editing user interface 11002.
The sets of feature options include teeth options 11016 and tongue and tongue stud options 11018. Each set of feature options displays the avatar mouth 11014 with a different pose in order to display, for the respective set of feature options, the mouth characteristics that can be modified by the feature options in that set. For example, avatar mouth 11014 has a neutral pose 11014-1 on avatar 11005, while teeth options 11016 show avatar mouth 11014 with a smile pose 11014-2 to reveal the avatar teeth 11020, so that the different teeth options are easier for the user to view. Similarly, avatar mouth 11014 has a neutral pose 11014-1 on avatar 11005, while tongue stud options 11018 show avatar mouth 11014 with a tongue-out pose 11014-3 to reveal the avatar tongue 11022, so that the different tongue stud options are easier for the user to view. In embodiments where the avatar 11005 tracks the user's face, displaying the mouth 11014 with the smile pose 11014-2 and the tongue-out pose 11014-3 in the respective teeth options 11016 and tongue stud options 11018 allows the user to view the respective feature options without having to keep their tongue extended or maintain a smile.
Teeth options 11016 show different options for selecting the teeth of avatar 11005. The teeth options 11016 include default teeth 11016-1 (e.g., no missing teeth or modifications), missing teeth 11016-2, a gold tooth 11016-3, a gap between the front teeth 11016-4, orthodontic teeth 11016-5, and decorative braces 11016-6. Additional teeth options may include options for cuspids. Additional teeth options may include options for different decorative braces, including different decorative brace colors (e.g., gold, silver) and different locations in the mouth (e.g., lower-row decorative braces, upper- and lower-row decorative braces, partial decorative braces). Additional teeth options may include different missing teeth, including one or more missing teeth displayed in different locations in the mouth (e.g., hockey player's teeth, a missing lower row of teeth). Various combinations of the foregoing teeth options may be included in teeth options 11016.
Tongue stud options 11018 show different options for selecting a tongue stud for the tongue of the avatar 11005. Examples include no tongue stud 11018-1, a spiked tongue stud 11018-2, and a ring-style tongue stud 11018-3. Tongue stud options 11018 may include additional options, such as spike-type tongue studs and barbell-type tongue studs.
Avatar option area 11008 also includes tongue stud color options 11024, which correspond to different colors for the tongue stud.
The appearance of the avatar mouth 11014 shown in teeth options 11016 and tongue stud options 11018 represents the current mouth of the avatar 11005. Thus, when modifications are made to the characteristics of the avatar mouth 11014, these modifications may be displayed in the corresponding teeth options 11016 and tongue stud options 11018 (if the modified characteristics are displayed in the corresponding sets of feature options). For example, if the avatar mouth 11014 is updated to include lipstick, the appearance of the avatar mouth 11014 will be updated to include lipstick in the various poses (e.g., 11014-1, 11014-2, and 11014-3).
In FIG. 11B, device 600 detects input 11026 on eye menu option 11006-3 and, in response, updates the avatar display area to show selection of eye menu option 11006-3 and updates avatar option area 11008 to show options for editing the avatar eye characteristics, as shown in FIG. 11C. The eye options include an eye color option 11028 (including an eye color slider control 11028-5) for selecting and adjusting the eye color of the avatar 11005. The eye options also include eye shape options 11030 for selecting different shapes of the eye of the avatar.
In fig. 11C, device 600 detects input 11032 (e.g., a drag gesture corresponding to a scroll command) and, in response, scrolls avatar option area 11008 to display eye makeup options, which are collectively shown in fig. 11D-11F.
As shown in FIG. 11D, the makeup options include upper eyeliner options 11034, lower eyeliner options 11036, and eyelash options 11038. Upper eyeliner options 11034 include options for selecting an eyeliner style applied to the upper edge of the avatar's eyes. Lower eyeliner options 11036 include options for selecting an eyeliner style applied to the lower edge of the avatar's eyes. Eyelash options 11038 include options for selecting an eyelash style, such as, for example, no eyelashes 11038-1, thin eyelashes 11038-2, and thick eyelashes 11038-3.
In fig. 11D, device 600 detects input 11035 (e.g., a drag gesture corresponding to a scroll command) and, in response, scrolls avatar option region 11008 to display additional eye makeup options including eye shadow option 11040, as shown in fig. 11E.
As shown in fig. 11E, eye shadow options 11040 include an eye shadow color option 11042 for selecting eye shadow colors and an eye shadow application option 11044 for selecting eye shadow patterns. Eye shadow color options 11042 include a set of first eye shadow color options 11042-1 and a set of second eye shadow color options 11042-2. The first eye shadow color 11042-1a and the second eye shadow color 11042-2a may be applied simultaneously depending on the eye shadow pattern selected from the eye shadow application option 11044. Eye shadow application options 11044 include no eye shadow 11044-1, first eye shadow pattern 11044-2, and second eye shadow pattern 11044-3.
In FIG. 11E, first eye shadow color 11042-1a is selected, second eye shadow color 11042-2a is selected, and eye shadow application options 11044 are updated to show the application pattern options that may be applied to avatar 11005 using the selected colors 11042-1a and 11042-2a. The device 600 detects an input 11046 on first eye shadow pattern 11044-2. In response, device 600 applies first eye shadow pattern 11044-2 to avatar 11005 using the selected colors 11042-1a and 11042-2a, as shown in FIG. 11F.
As shown in FIG. 11F, avatar 11005 is now displayed with eye makeup 11048, including the eye shadow color and application pattern selected in FIG. 11E. In some embodiments, facial wrinkles 11007 are shown over eye makeup 11048, as shown in FIG. 11F. Device 600 detects input 11050 on facial menu option 11006-4 and, in response, updates avatar display area 11004 to show selection of facial menu option 11006-4 and updates avatar option area 11008 to display beauty mark options 11052, as shown in FIG. 11G.
In FIG. 11G, beauty mark options 11052 are shown with eye makeup (due to the selection made in the avatar option area 11008 of FIG. 11E). Device 600 detects input 11054 on forehead beauty mark option 11052-1 and, in response, updates avatar 11005 to have a forehead beauty mark 11055 (in addition to eye makeup 11048), as shown in FIG. 11H.
In FIG. 11H, device 600 detects input 11056 corresponding to a scroll gesture on avatar feature menu option 11006 and, in response, scrolls feature menu option 11006 such that accessory menu option 11006-5 is selected, as shown in FIG. 11I.
In FIG. 11I, device 600 displays earring options 11058 in avatar option region 11008 and detects an input 11060 selecting hoop earring 11058-1. In response, device 600 updates avatar option region 11008 to display earring position options 11062 and updates avatar display region 11004 to display avatar 11005 with hoop earrings 11064, as shown in FIG. 11J. Earring position options 11062 include both ears 11062-1, right ear 11062-2, and left ear 11062-3. Device 600 displays hoop earrings 11064 on both ears of the avatar because the both-ears option 11062-1 is selected.
In FIG. 11J, device 600 detects input 11066 on the no-earring option 11058-2 and, in response, removes hoop earrings 11064 from avatar 11005 and stops displaying earring position options 11062, as shown in FIG. 11K.
In FIG. 11K, device 600 detects input 11068, which is an input corresponding to a selection of the ear menu option 11006-6. In response, device 600 updates avatar display area 11004 to show the selection of ear menu option 11006-6 and updates avatar option area 11008 to display another set of options for applying earrings to avatar 11005, as shown in FIG. 11L.
In FIG. 11L, device 600 displays avatar option area 11008 with earring position options represented by a "both" affordance 11070 and a custom affordance 11072. In FIG. 11L, the "both" affordance 11070 is selected, and avatar option region 11008 displays selectable earring options 11074 (similar to earring options 11058), with no earring option 11074-1 selected. When one of earring options 11074 is selected, the selected earring option is applied to both avatar ears.
Device 600 detects input 11076 on custom affordance 11072 and, in response, selects custom affordance 11072 and replaces selectable earring option 11074 with left earring option 11078 and right earring option 11080. Left earring option 11078 may be selected to apply the selected earring option to the left ear of the avatar and not to the right ear of the avatar. Conversely, the right earring option 11080 may be selected to apply the selected earring option to the right ear of the avatar, but not the left ear of the avatar. Thus, left earring option 11078 and right earring option 11080 allow a user to mix and match different earrings with different avatar ears, allowing custom earrings to be applied to avatar 11005.
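By way of a non-limiting illustration, the following sketch (hypothetical Swift names, not taken from this application) models the two selection modes described above: the "both" affordance applies one earring option to both ears, while the custom affordance keeps independent per-ear selections so earrings can be mixed and matched:

```swift
// Minimal sketch with hypothetical names: one earring option can be applied to both
// ears at once, or separate options can be applied to the left and right ears.
enum EarringOption {
    case none, hoop, stud
}

struct EarringSelection {
    var left: EarringOption = .none
    var right: EarringOption = .none

    mutating func applyToBothEars(_ option: EarringOption) {
        left = option
        right = option
    }

    mutating func applyCustom(left newLeft: EarringOption?, right newRight: EarringOption?) {
        if let newLeft = newLeft { left = newLeft }      // only the specified ear changes
        if let newRight = newRight { right = newRight }
    }
}
```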
In fig. 11M, device 600 detects input 11082 (e.g., a drag gesture) and in response scrolls avatar option area 11008 to display audio option 11084 and location option 11086, as shown in fig. 11N. Audio option 11084 may be selected to display a different audio device in the ear of the avatar. For example, audio options 11084 include no audio option 11084-1, in-ear audio option 11084-2, and hearing aid option 11084-3. The position option 11086 may be selected to determine the ear position of the selected audio option. Position options 11086 include a binaural option 11086-1, a left ear option 11086-2, and a right ear option 11086-3.
In FIG. 11N, device 600 detects input 11087 for selecting hair menu option 11006-7, as shown in FIG. 11O.
In fig. 11O, the device 600 displays an avatar option area 11008 with a color option 11088, a highlight type option 11090, and a hair style option 11092. The color option 11088 may be selected to control the color change of the avatar hair and the highlight applied to the avatar hair. The color options 11088 include a hair color control 11096 for selecting a hair color and a highlight color control 11098 for selecting a highlight color. The hair color control 11096 includes a hair color option 11096-1 for selecting a hair color and a hair color slider 11096-2 for adjusting the gradient of the selected hair color. Similarly, the highlight color control 11098 includes a highlight color option 11098-1 for selecting a highlight color and a highlight color slider 11098-2 for adjusting the gradient of the selected highlight color. In the embodiment shown in fig. 11O-11S, hair color option 11096-1a is selected for hair color, hair color slider 11096-2 is set to the maximum gradient setting (e.g., dark), and highlight color option 11098-1a is selected for highlight color, highlight color slider 11098-2 is set to the minimum gradient setting (e.g., light).
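By way of a non-limiting illustration, the following sketch (hypothetical Swift names; the highlight type label is a placeholder) models the data behind these controls: a selected hair color paired with a slider-controlled gradient value, a highlight color paired with its own gradient value, and a highlight type, from which the hairstyle previews and the avatar hair are rendered:

```swift
// Minimal sketch with hypothetical names: the hair color and highlight color
// controls each pair a selected color with a slider-set gradient value.
struct ColorSetting {
    var color: String       // e.g. "brown"
    var gradient: Double    // 0.0 (lightest) ... 1.0 (darkest), set by the slider
}

struct HairAppearance {
    var hairColor = ColorSetting(color: "brown", gradient: 1.0)
    var highlightColor = ColorSetting(color: "blonde", gradient: 0.0)
    var highlightType = "type-1"   // placeholder for the selected highlight type
}

// Whenever any of these settings changes, the hairstyle option previews (and the
// avatar, if a hairstyle is applied) are re-rendered from the same HairAppearance.
func previewDescription(for appearance: HairAppearance) -> String {
    "\(appearance.hairColor.color) hair with \(appearance.highlightColor.color) highlights (\(appearance.highlightType))"
}
```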
The highlight type options 11090 may be selected to change the type of highlight applied to the avatar's hair (e.g., in the hairstyle options 11092 and, if a hairstyle other than bald is selected, on the avatar 11005). The highlight type options 11090 include a first type 11090-1, a second type 11090-2, and a third type 11090-3. The first type 11090-1 is currently selected in FIG. 11O.
The hairstyle options 11092 may be selected to change the hairstyle applied to the avatar 11005. Hairstyle options 11092 include a bald option 11092-1, a second hairstyle 11092-2, and a third hairstyle 11092-3, although other hairstyles may be displayed. The representations of the avatar shown in the second hairstyle option 11092-2 and the third hairstyle option 11092-3 show the current state of the selected color options 11088 and highlight type option 11090. As the color options 11088 and highlight type option 11090 change, the representations of the avatar hair (with highlights) shown in the second hairstyle option 11092-2 and third hairstyle option 11092-3 are updated to reflect these changes.
In fig. 11O, device 600 detects input 11094 selecting third hairstyle 11092-3 and, in response, updates avatar 11005 to display avatar hair 11100 with highlight 11102, as shown in fig. 11P. The avatar hair 11100 corresponds to the selected third hair style selection 11092-3. Highlight 11102 corresponds to the selected color option 11088 and first highlight type 11090-1.
In FIG. 11P, the device 600 detects an input 11104 on the second type 11090-2 and, in response, updates the highlight 11102, the second hair style option 11092-2, and the third hair style option 11092-3 to have a selected highlight type, which is shown in FIG. 11Q as a gradient highlight type.
In FIG. 11Q, the device 600 detects an input 11106 on the third type 11090-3 and, in response, updates the highlight 11102, the second hair style option 11092-2, and the third hair style option 11092-3 to have a selected highlight type, which is shown in FIG. 11R as a heavy highlight type.
In FIG. 11R, the device 600 detects an input 11108 on the bald hairstyle option 11092-1 and, in response, updates the avatar 11005 to remove the hair 11100, as shown in FIG. 11S.
In FIG. 11S, the device 600 detects an input 11110, which corresponds to a request to select the facial painted menu option 11006-8, as shown in FIG. 11T.
In FIG. 11T, the device 600 displays an avatar display area 11004 showing an avatar 11005 with a beauty mark 11055, eye makeup 11048, facial wrinkles 11007, and eyebrows 11009. The avatar option area 11008 is displayed with facial colored-drawing pattern options 11114, which are selectable to apply a facial colored-drawing pattern to the avatar 11005, and facial colored-drawing color options 11112, which are selectable to change the colors of the facial colored-drawing pattern. In FIG. 11T, the avatar 11005 is displayed without facial painting, and pattern option 11114-2 (no facial painting) is selected.
The facial colored drawing color option 11112 includes multiple sets of color options that can be selected to change the area of the facial colored drawing pattern. The facial colored-drawing pattern option 11114 represents various pattern options that may be selected to apply a facial colored-drawing pattern to the face of the avatar 11005, the facial colored-drawing pattern having one or more of the colors selected in the facial colored-drawing color option 11112. Some regions of the various facial painted pattern options 11114 correspond to the facial painted color option 11112, but in some cases, some facial painted pattern options 11114 have regions that do not change color in response to selection of the facial painted color option 11112. For example, pattern option 11114-1 represents a haunted facial colored-drawing pattern having a white base region 11114-1a that does not change as a result of selection of color option 11112.
The facial colored-drawing pattern options 11114 each include a representation of the avatar 11005 with an appearance representing how the avatar 11005 would appear if the corresponding facial colored-drawing pattern option were selected. For example, pattern option 11114-2 is the no-facial-painting option and shows a representation of the avatar 11005 without facial painting. Because the avatar 11005 includes the beauty mark 11055, eye makeup 11048, facial wrinkles 11007, and eyebrows 11009, pattern option 11114-2 (no facial painting) also shows these features on the avatar representation. As long as these features are applied to avatar 11005, pattern option 11114-2 keeps displaying them, even if a different facial painted pattern is displayed on avatar 11005, for example, as shown in FIG. 11U. This is because pattern option 11114-2 is a representation of the avatar 11005 without facial painting.
In contrast, when the other pattern options are selected, the beauty mark 11055 and the eye makeup 11048 are not displayed on the representations of the avatar 11005 shown in pattern options 11114-1 and 11114-3 through 11114-6, nor on the avatar 11005 itself. This is because these pattern options show representations of the avatar 11005 in which the facial painting covers or obscures these features, as if paint were applied over the face of the avatar 11005. However, other features of the avatar 11005 are displayed even when facial painting is applied. These other features may include facial wrinkles 11007, glasses, hair, and facial hair. Facial wrinkles 11007 remain visible because facial painting does not hide the wrinkles of the painted face. Glasses, hair, and facial hair are displayed on the avatar (with no facial painting on these features) because the glasses are worn over the face and facial painting is not typically applied to hair or facial hair. An example of such an embodiment is shown in FIG. 11AA. In some cases, however, the facial painting is displayed over hair on the avatar. For example, facial painting is applied over the eyebrows 11009, and the eyebrows are displayed with a blended appearance resulting from the underlying eyebrow color mixing with the facial painting applied to the avatar 11005. This is shown in pattern options 11114-1 and 11114-3 through 11114-6 in FIG. 11U and on avatar 11005.
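By way of a non-limiting illustration, the following sketch (hypothetical Swift names, not taken from this application) summarizes how the avatar features described above interact with an applied facial painting:

```swift
// Minimal sketch with hypothetical names: facial painting hides features painted on
// the skin, blends with the eyebrows, leaves wrinkles visible, and sits under
// features worn over the face such as glasses, hair, and facial hair.
enum FacePaintInteraction {
    case hiddenByPaint        // e.g. eye makeup, beauty mark
    case blendsWithPaint      // e.g. eyebrows (blended/muted color)
    case visibleThroughPaint  // e.g. skin wrinkles
    case drawnOverPaint       // e.g. glasses, hair, facial hair
}

func interaction(forFeature feature: String) -> FacePaintInteraction {
    switch feature {
    case "eyeMakeup", "beautyMark":       return .hiddenByPaint
    case "eyebrows":                      return .blendsWithPaint
    case "wrinkles":                      return .visibleThroughPaint
    case "glasses", "hair", "facialHair": return .drawnOverPaint
    default:                              return .drawnOverPaint  // treat other worn items as above the paint
    }
}
```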
As shown in FIG. 11T, the facial colored drawing color option 11112 includes a first color group 11112-1 having a selected color 11112-1a, a second color group 11112-2 having a selected color 11112-2a, and a third color group 11112-3 having a selected color 11112-3 a. The selected color of each color group is displayed on the respective area of the facial painted pattern option 11114 corresponding to the respective color group. For example, pattern options 11114-1, 11114-3, 11114-4, 11114-5, and 11114-6 each have a corresponding region corresponding to the first color group 11112-1, and thus are shown in FIG. 11T as having color 11112-1 a. If the first color group 11112-1 is updated to select a different color (e.g., as shown in FIGS. 11U-11V), the corresponding regions of the pattern options 11114-1, 11114-3, 11114-4, 11114-5, and 11114-6 having the color 11112-1a will be updated to the different color. The facial painted pattern option 11114 having an area corresponding to the second color group 11112-2 reacts in a similar manner to changes in the second color group 11112-2. The facial painted pattern option 11114 having an area corresponding to the third color group 11112-3 reacts in a similar manner to changes in the third color group 11112-3. In addition, the area of the facial colored drawing pattern applied to the avatar 11005 reacts to changes in the corresponding color set in the same manner.
In some embodiments, some color sets do not affect the change of all pattern options. For example, the pattern option 11114-3 includes a region 11114-3a corresponding to the first color group 11112-1 and a region 11114-3b corresponding to the second color group 11112-2, but does not include a region corresponding to the third color group 11112-3. Thus, a change in color group 11112-3 does not affect the appearance of pattern option 11114-3 (or the appearance of avatar 11005 if pattern option 11114-3 is selected).
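By way of a non-limiting illustration, the following sketch (hypothetical Swift names; the group identifiers are placeholders) models this relationship: each pattern records which of its regions is driven by which color group, so a newly selected color in a group only affects the patterns, and the applied facial painting, that have a region for that group:

```swift
// Minimal sketch with hypothetical names: each facial painting pattern lists the
// color groups that drive its regions; a color change in a group only affects
// patterns that have a region for that group.
struct FacePaintPattern {
    let identifier: String
    let regionColorGroups: Set<String>   // e.g. ["group1", "group2"]
}

func patternsAffected(byColorGroup group: String,
                      among patterns: [FacePaintPattern]) -> [FacePaintPattern] {
    patterns.filter { $0.regionColorGroups.contains(group) }
}

// A pattern with regions only for groups 1 and 2 ignores a change to group 3.
let patternA = FacePaintPattern(identifier: "11114-4", regionColorGroups: ["group1", "group2", "group3"])
let patternB = FacePaintPattern(identifier: "11114-3", regionColorGroups: ["group1", "group2"])
let affected = patternsAffected(byColorGroup: "group3", among: [patternA, patternB])
// affected contains only the pattern corresponding to 11114-4
```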
In FIG. 11T, the device 600 detects an input 11116 on the pattern option 11114-4 and, in response, updates the avatar 11005 to display the facial colored-drawing 11118 based on the selected pattern option 11114-4, as shown in FIG. 11U.
In FIG. 11U, the facial painting 11118 has a pattern 11120 with regions 11120-1, 11120-2, and 11120-3 corresponding to regions 11114-4a, 11114-4b, and 11114-4c, respectively, of the selected pattern option 11114-4. The facial painting 11118 also has a set of colors 11122, with color 11122-1a corresponding to color 11112-1a of color set 11112-1, color 11122-2a corresponding to color 11112-2a of color set 11112-2, and color 11122-3a corresponding to color 11112-3a of color set 11112-3. As shown in FIG. 11U, color 11122-1a is in region 11120-1, color 11122-2a is in region 11120-2, and color 11122-3a is in region 11120-3. The facial colored-drawing pattern 11120 does not cover the entire face of the avatar 11005. Thus, the avatar skin tones 11124 remain displayed for the portion of the avatar face that does not include the face paintings 11118. In addition, the facial colored drawing 11118 is not displayed on the avatar ears 11126.
In some embodiments, the facial colored drawing 11118 has a different texture than the avatar skin color 11124. For example, in fig. 11U, the facial painting 11118 has a glossy texture, as represented by the light effect 11128 (e.g., glare). In some embodiments, the different facial painted patterns 11114 have different paint textures. For example, pattern 11114-1 has a flat texture and therefore does not include light effects.
As previously described, the facial colored drawing 11118 is shown applied to the face of the avatar 11005. Thus, device 600 does not display avatar 11005 with eye makeup 11048 or the beauty mark 11055, but does display facial wrinkles 11007. In addition, the device 600 displays the eyebrows with a distorted appearance 11009-1 (e.g., distorted color) caused by the mixture of the eyebrow color/texture and the facial painting 11118.
In FIG. 11U, device 600 detects input 11130 on color 11112-1b of the first color group 11112-1 and, in response, updates the avatar 11005 and the pattern options 11114 based on the newly selected color, as shown in FIG. 11V.
In fig. 11V, the facial colored-drawing 11118 is updated based on the new color selection. Specifically, region 11120-1 changes from color 11122-1a to color 11122-1b, and pattern option 11114-4 is updated in a similar manner by changing region 11114-4a to the selected color. Pattern option 11114-3 is also updated and region 11114-3a changes based on the color selected. In addition, a color slider 11131 is displayed for adjusting the gradient of the selected color 11112-1 b.
In FIG. 11V, the device 600 detects an input 11132 on pattern option 11114-3 and, in response, updates the avatar 11005 to display the facial colored-drawing 11118 with the appearance of pattern option 11114-3, as shown in FIG. 11W.
In FIG. 11W, device 600 detects input 11134 on color 11112-3b of the third color group 11112-3 and, in response, displays color slider 11135 for adjusting the gradient of color 11112-3b and updates avatar pattern option 11114-4 based on the newly selected color, as shown in FIG. 11X.
In FIG. 11X, the pattern option 11114-4 is updated by changing the area 11114-4c to the selected color (e.g., 11112-3b). It should be noted that the facial colored-drawing 11118 on the avatar 11005 is not updated based on the selection of the color 11112-3b, because the applied pattern (e.g., corresponding to the pattern option 11114-3) does not include an area corresponding to the third color group 11112-3, as previously described.
In FIG. 11X, the device 600 detects an input 11136 on pattern option 11114-4 and, in response, displays avatar 11005 with facial colored-drawing 11118 having pattern 11120, with area 11120-3 updated based on the selection of color 11112-3b. Device 600 detects input 11138 corresponding to selection of eyewear menu option 11006-9 and, in response, updates avatar display area 11004 to show selection of eyewear menu option 11006-9 and updates avatar option area 11008 to show selectable options for selecting eyewear for avatar 11005. The eyewear options include lens options 11140 and eye patch options 11142. Lens options 11140 show different styles of glasses displayed over a representation of avatar 11005, with the different glasses displayed over the facial painting in the respective lens options. Similarly, eye patch options 11142 show different positions of an eye patch displayed over the representation of avatar 11005, with the eye patch displayed over the facial painting in the corresponding eye patch option.
In FIG. 11Z, the device 600 detects an input 11144 selecting lens option 11140-1 and, in response, updates the avatar 11005 to include glasses 11146 displayed over the facial colored-drawing 11118. Further, the avatar option region 11008 is updated to include a glasses thickness option 11148, and the eye patch options 11142 are updated to include the selected glasses, showing the different eye patch options positioned above the representation of the avatar face but below the selected glasses.
In FIG. 11AA, device 600 detects input 11150 corresponding to selection of facial menu option 11006-4 and, in response, displays avatar display area 11004 with facial menu option 11006-4 selected and displays facial hair options 11152, as shown in FIG. 11AB.
In fig. 11AB, facial hair option 11152 shows a representation of an avatar 11005 having a different facial hairstyle. In the various facial hair options 11152, facial hair is shown applied over a facial colored drawing of a head portrait 11005. The device 600 detects the input 11153 corresponding to the selection of the facial hair option 11152-1 and, in response, updates the avatar 11005 to display the facial hair 11155 positioned above the facial colored-drawing 11118, as shown in fig. 11 AC.
In FIG. 11AC, the device 600 detects an input 11154 corresponding to a selection of the facial painted menu option 11006-8 and, in response, updates the avatar display area 11004 to select the facial painted menu option 11006-8, as shown in FIG. 11 AD.
In fig. 11AD, the device 600 displays an avatar 11005 with glasses 11146 and facial hairs 11155 displayed over the facial colored-drawing 11118. In addition, pattern options 11114 are updated to include the selected glasses and facial hair options, each of which is displayed on the corresponding pattern option 11114.
Fig. 12 is a flow diagram illustrating a method for displaying an avatar in an avatar editing application user interface using an electronic device, in accordance with some embodiments. The method 1200 is performed at a device (e.g., 100, 300, 500, 600) having a display and an input device. Some operations in method 1200 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1200 provides an intuitive way for displaying an avatar in an avatar editing application user interface. The method reduces the cognitive burden of a user in displaying the avatar in the avatar editing application user interface, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to more quickly and efficiently display an avatar in an avatar-editing application user interface conserves power and increases the interval between battery charges.
An electronic device (e.g., 600) displays (1202) an avatar editing user interface (e.g., 11002) (e.g., a single interface screen) via the display device (e.g., 601).
The device displays (1204) an avatar editing user interface (e.g., 11002) that includes (e.g., concurrently displays) an avatar (e.g., 11005) (e.g., an avatar such as, for example, an avatar modeled to represent a human character and/or an avatar that may be created and/or customized by a user) that includes a first feature (e.g., avatar facial painting 11118) (e.g., a facial painting template applied to the avatar) having a first color pattern (e.g., 11120) (e.g., a color pattern having a lightning shape) that is generated with a first set of colors (e.g., 11122) including a first color (e.g., 11122-1a) (e.g., red) in a first region (e.g., 11120-1) (e.g., a lightning center shape) of the first color pattern (e.g., a default set of colors; e.g., a set of colors corresponding to a facial painting template). In some embodiments, an avatar modeled to represent a human includes customizable (e.g., selectable or configurable) avatar characteristics that generally correspond to the physical characteristics of a human. For example, such an avatar may include a representation of a person having various physical, human features or characteristics (e.g., an elderly woman with a dark skin tone and long, straight, brown hair). Such an avatar may also include a representation of a person having various non-human characteristics (e.g., cosmetic enhancements, hats, glasses, etc.) that are typically associated with the appearance of a human. In some embodiments, such an avatar does not include an anthropomorphic construct, such as a stylized animal, a stylized robot, or another stylized, generally inanimate or generally non-human object.
In some embodiments, the first feature includes a first display texture (e.g., represented by light effect 11128) (e.g., a paint texture (e.g., glossy, flat, matte, etc.)) that is different from a second display texture of a skin feature (e.g., 11124) of the avatar (e.g., the avatar skin has a texture that is different from the texture of the facial painting). Displaying the first feature with a display texture different from that of the avatar's skin feature provides visual feedback to the user that the facial painting feature is applied to the avatar and indicates where interaction with the facial painting controls will affect the avatar, particularly if the facial painting has a color that could be confused with the skin color. Providing improved feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
The device displays (1206) an avatar editing user interface (e.g., 11002) that includes a set of color options (e.g., 11112) for the first feature (e.g., a plurality of sets of color options). In some embodiments, each set of color options corresponds to various facial painting template options that include a pattern (or a portion of a pattern) that changes color when a color option in the respective set of color options is selected.
The device displays (1208) an avatar editing user interface (e.g., 11002) that includes a plurality of color pattern options (e.g., 11114) (e.g., selectable options corresponding to different avatar facial painting templates) for the first feature (e.g., 11118), including a second color pattern (e.g., 11114-3) (e.g., a color pattern with vertical stripes) that is different from the first color pattern (e.g., 11120) (e.g., the second color pattern corresponds to a color pattern different from the color pattern currently applied to the avatar).
In some embodiments, the plurality of color pattern options includes a first color pattern option (e.g., 11114-4) corresponding to a first color pattern (e.g., 11120) (e.g., a selectable color pattern option representing a color pattern currently applied to the avatar). In some embodiments, when the first feature of the avatar has a first color pattern, the first color pattern option is shown in a selected state.
In some embodiments, the plurality of color pattern options includes an option (e.g., 11114-2) that, when selected, causes the first feature (e.g., 11118) to cease to be displayed (e.g., the avatar is displayed without facial colored drawing when the option to cease to display the first feature is selected).
In some embodiments, the electronic device detects selection of an option (e.g., 11114-2) to stop displaying the first feature. In some embodiments, in response to detecting selection of the option to stop displaying the first feature, the electronic device stops displaying the first feature (e.g., removing the facial painting from the avatar while still displaying the color pattern options each having a representation of the avatar with the corresponding color pattern applied). In some embodiments, in response to detecting selection of the option to stop displaying the first feature, the electronic device displays (e.g., brings up the display, reveals) one or more avatar features (e.g., moles, makeup (e.g., blush, lipstick, eye shadow, etc.)) that are hidden while displaying the first feature (e.g., see avatar 11005 in fig. 11T). Revealing the avatar feature hidden by the facial painting provides visual feedback to the user that the facial painting feature is no longer displayed on the avatar and that the user's previous selections and customization of the avatar are preserved. Providing improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), and in addition, it may also reduce power usage and extend the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the avatar (e.g., 11005) includes a third feature displayed above the first feature. In some embodiments, the third feature is an item selected from: an avatar glasses feature (e.g., 11146) (e.g., glasses, sunglasses, an eye patch, goggles, etc.), an avatar hair feature (e.g., hair on top of the avatar head), an avatar facial hair feature (e.g., 11155) (e.g., avatar facial hair other than eyebrows (e.g., a beard, mustache, goatee, etc.)), and an avatar skin wrinkle feature (e.g., 11007) (e.g., lines in the avatar skin representing wrinkles). In some embodiments, the third feature is displayed on top of the facial painting and is not responsive to changes in the facial painting (while still remaining responsive to other changes of the avatar such as, for example, movement of the avatar's head, changes in the avatar's facial pose, and movement of the avatar's facial features (e.g., nose, eyebrows, mouth, etc.)). Displaying the third feature over the first feature allows customization and expression (e.g., facial expressions, poses) of the avatar to be displayed while still respecting the user's selection of facial painting. This provides an improved control scheme for generating an avatar, which may require fewer inputs to generate the avatar than would a different control scheme (e.g., a control scheme requiring manipulation of various control points to build the avatar). This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the avatar includes a fourth feature (e.g., the avatar eyebrow 11009). In some embodiments, while displaying the first feature, the electronic device displays a fourth feature having a first appearance based on the first feature (e.g., 11009-1) (e.g., the avatar eyebrows have a color that is a combination of the original eyebrow color and the color of the first feature at a location corresponding to the respective eyebrow, or a portion of the facial painting corresponding to the eyebrows has a different color or texture at the location and shape of the eyebrows to indicate the presence of eyebrows under the facial painting). In some embodiments, after ceasing to display the first feature, the electronic device displays the fourth feature having a second appearance that is not based on the first feature (e.g., the avatar's eyebrow has a color determined based on the selected hair color (e.g., eyebrow color)) (e.g., see eyebrow 11009 in fig. 11T). Displaying the fourth feature having the first appearance while displaying the first feature provides the user with an indication that the fourth feature is present while displaying the facial painting. Furthermore, the presence of the fourth feature allows the avatar to provide different facial expressions using the fourth feature (e.g., eyebrows) while still respecting and retaining the user's choice of facial painting. This provides an improved control scheme for generating an avatar that may require less input to generate the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
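To make the eyebrow behavior concrete, here is a minimal, hypothetical sketch (assuming a simple RGB model; none of these names come from the patent) of a fourth-feature color that is blended with the facial painting color while the painting is displayed and that reverts to the plain eyebrow color once it is removed:

```swift
// Minimal sketch: while facial painting is displayed, the eyebrow color is a
// blend of the original eyebrow color and the painting color at that location;
// without painting, the eyebrow simply uses its own (e.g. hair-based) color.
struct RGB { var r: Double; var g: Double; var b: Double }

func blend(_ a: RGB, _ b: RGB, amount: Double) -> RGB {
    // Linear interpolation between the two colors.
    RGB(r: a.r + (b.r - a.r) * amount,
        g: a.g + (b.g - a.g) * amount,
        b: a.b + (b.b - a.b) * amount)
}

func eyebrowColor(base: RGB, paintingColorAtEyebrow: RGB?) -> RGB {
    guard let paint = paintingColorAtEyebrow else {
        return base                          // second appearance: not based on the first feature
    }
    return blend(base, paint, amount: 0.5)   // first appearance: based on the first feature
}
```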
In some embodiments, the avatar includes a fifth feature (e.g., avatar ear 11126) that is displayed concurrently with the first feature, where the fifth feature is separate from the first feature and does not change in response to changes to the first feature (e.g., the avatar ear has a skin tone and does not change when the first feature is modified).
While the first feature (e.g., 11118) is displayed as having a first color pattern (e.g., 11120) generated with a first set of colors (e.g., 11122) including a first color (e.g., 11122-1a) in a first region (e.g., 11120-1) of the first color pattern, the electronic device detects (1210), via an input device (e.g., 601), a selection (e.g., 11130) of a color option (e.g., 11112-1b) (e.g., blue) of the set of color options that corresponds to a second color.
In response to (1212) detecting the selection (e.g., 11130), the electronic device changes (1214) the appearance of one or more of the plurality of color pattern options (e.g., 11114-3a and 11114-4 in fig. 11V) having a first portion (e.g., 11114-3a, 11114-4a) corresponding to the set of color options (e.g., a portion of the facial painting template that changes with the selection of the color option) (e.g., changes the appearance of the representation of the avatar displayed in the one or more color pattern options (e.g., does not necessarily change the appearance of the avatar itself)). In some embodiments, changing the appearance includes changing a portion (e.g., 11114-3a) of the second color pattern option (e.g., 11114-3) from the corresponding color (e.g., 11112-1a) to a second color (e.g., 11112-1b) (e.g., changing an area of the second color pattern to blue). In some embodiments, only a subset of the color pattern options have an area corresponding to the set of color options. In some embodiments, a color pattern (e.g., a portion of a color pattern template) corresponds to a set of color options if the color pattern changes color in response to selection of a color option in the set of color options. In some embodiments, the color pattern forms some or all of the facial painting template, depending on the design of the facial painting template. Thus, the facial painting template may have a plurality of color patterns forming the template. For example, a facial painting template with three color patterns changes color in response to selecting a different color from three sets of color options. In some embodiments, the color pattern may have various shapes and designs.
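A rough sketch of this swatch-update behavior is below; it assumes that each color pattern option is modeled as regions that are either bound to a user-changeable color slot or fixed, so that only the options containing a region bound to the selected slot change appearance. All names are illustrative, not taken from the patent.

```swift
// Illustrative model: a pattern option's regions either follow a color slot
// (changeable via the set of color options) or keep a fixed default color.
struct PatternRegion {
    var color: String
    let colorSlot: Int?     // nil = default color that the color options cannot change
}

struct ColorPatternOption {
    let name: String        // e.g. "vertical stripes", "camouflage"
    var regions: [PatternRegion]

    func respondsTo(slot: Int) -> Bool {
        regions.contains { $0.colorSlot == slot }
    }

    mutating func apply(color: String, toSlot slot: Int) {
        for i in regions.indices where regions[i].colorSlot == slot {
            regions[i].color = color
        }
    }
}

// Selecting a color changes only the displayed options that respond to that slot.
func selectColor(_ color: String, slot: Int, options: inout [ColorPatternOption]) {
    for i in options.indices where options[i].respondsTo(slot: slot) {
        options[i].apply(color: color, toSlot: slot)
    }
}

var options = [
    ColorPatternOption(name: "vertical stripes",
                       regions: [PatternRegion(color: "red", colorSlot: 0)]),
    ColorPatternOption(name: "skeleton",
                       regions: [PatternRegion(color: "black", colorSlot: nil)])
]
selectColor("blue", slot: 0, options: &options)   // only the striped option changes
```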
In some embodiments, the plurality of color pattern options includes a fifth color pattern option (e.g., 11114-1) having an area (e.g., 11114-1a) that is not responsive to selection of the color option (e.g., 11112) (e.g., an area having a default color) (e.g., the default color is not changeable by the set of color options). In some embodiments, one or more of the color patterns comprise a pattern of a type having one or more default colors that do not change. For example, the camouflage pattern comprises black that is not changeable by the set of color options. As another example, a skeleton pattern has an eye socket and nose area that are always black. As another example, the clown pattern and the haunted pattern have a white base color that is not changeable by the set of color options. As another example, the monster pattern has lips and eye regions that have a black color that is not changeable by the set of color options.
In response to (1212) detecting selection of a color option (e.g., 11112-1b) of the set of color options corresponding to the second color, the electronic device maintains (1216) a display of an avatar (e.g., 11005) including the first feature (e.g., 11118). In some embodiments, the first feature has a first color pattern (e.g., 11120) (e.g., a lightning color pattern) (e.g., the avatar retains the same color pattern (e.g., the first color pattern); however, any region of the retained color pattern optionally changes color depending on whether the region corresponds to the selected color option).
In some embodiments, maintaining the display of the avatar (e.g., 11005) including the first feature (e.g., 11118) includes changing a respective one of the colors (e.g., 11122-1a) (e.g., the first color, a color other than the first color) of the first set of colors of the first color pattern to a second color (e.g., 11122-1b) (e.g., maintaining the color pattern applied to the avatar while changing one of the colors in the set of colors to the second color (e.g., blue)). In some embodiments, the color that is changed in the first set of colors of the first color pattern applied to the avatar is in an area of the first color pattern that is responsive to the set of color options. Thus, when a different color option is selected from the set of color options, the color of the responsive area changes to the selected color. In some embodiments, the color that changes in the first set of colors is a color in a first region of the first color pattern (e.g., a first color). In some embodiments, the color that is changed in the first set of colors is a color in a different region of the first color pattern (e.g., not the first region). Changing a respective one of the first set of colors of the first color pattern to the second color reduces the number of inputs to perform the technical task of generating the virtual avatar. Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
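For the behavior of keeping the applied pattern while swapping one of its colors, a comparable sketch (again with hypothetical names and a deliberately simplified model) might look like this:

```swift
// Sketch: the facial painting applied to the avatar keeps its pattern; selecting
// a new color only recolors the regions that respond to the chosen color slot.
struct PaintRegion {
    var color: String
    let slot: Int?          // nil: fixed default color, not changeable
}

struct AppliedFacialPainting {
    let pattern: String     // e.g. "lightning"
    var regions: [PaintRegion]
}

func recolor(_ painting: inout AppliedFacialPainting,
             selectedColor: String, forSlot slot: Int) {
    for i in painting.regions.indices where painting.regions[i].slot == slot {
        painting.regions[i].color = selectedColor
    }
}

var painting = AppliedFacialPainting(pattern: "lightning",
                                     regions: [PaintRegion(color: "red", slot: 0),
                                               PaintRegion(color: "black", slot: nil)])
recolor(&painting, selectedColor: "blue", forSlot: 0)   // red region becomes blue; pattern unchanged
```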
In some embodiments, in response to detecting selection of a color option of the set of color options (e.g., 11112-1) that corresponds to the second color (e.g., 11112-1b), the electronic device displays a color adjustment control (e.g., 11131) (e.g., a color slider user interface) for the selected color option. In some embodiments, the electronic device detects an input (e.g., a drag gesture or a flick gesture) corresponding to the color adjustment control. In some embodiments, in response to detecting selection of a color option of the set of color options that corresponds to a second color, and in response to detecting input corresponding to the color adjustment control, the electronic device modifies one or more properties of the second color (e.g., hue, saturation, value, brightness, lightness, shading, mid-tone, highlight, warmth, coldness, etc.) (e.g., modifies the one or more properties based on a magnitude and direction of the input corresponding to the color adjustment control). In some embodiments, modifying one or more attributes of the second color includes modifying the one or more attributes of the second color at a location where the second color is displayed (e.g., displayed in response to being selected (e.g., in one or more of the color pattern options; in the selected color option; in the first feature)). In some embodiments, each set of color options displays a color slider when one of the color options in the set of color options is selected.
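As a hedged illustration of a color adjustment control, the sketch below stores the selected color in HSB form and lets a slider input shift one attribute (brightness here), clamped to a valid range; the attribute chosen and the value ranges are assumptions for the example, not values from the patent.

```swift
// Sketch of a color adjustment (slider) control acting on the selected color.
struct HSBColor {
    var hue: Double          // 0...1
    var saturation: Double   // 0...1
    var brightness: Double   // 0...1
}

func adjusted(_ color: HSBColor, bySliderDelta delta: Double) -> HSBColor {
    var result = color
    // The magnitude and direction of the drag map to a brightness change.
    result.brightness = min(1, max(0, result.brightness + delta))
    return result
}

let selectedBlue = HSBColor(hue: 0.6, saturation: 0.8, brightness: 0.5)
let darkerBlue = adjusted(selectedBlue, bySliderDelta: -0.2)   // reflected in the swatches and the first feature
```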
The electronic device detects (1218) a selection (11132) of a corresponding color pattern option (e.g., 11114-3) (e.g., a second color pattern option) of the color pattern options having a changed appearance (e.g., see fig. 11V) (e.g., selecting one of the facial painting template options that is changed/updated in response to the selection of the color; selecting a facial painting template having a blue color and a vertical stripe pattern).
In response to detecting selection of the respective color pattern option (e.g., 11114-3) and while the second color (e.g., 11112-1b) is selected for the set of color options (e.g., 11112-1), the electronic device changes (1220) an appearance of the first feature (e.g., 11118) of the avatar to have an appearance generated based on the respective color pattern option, with the second color applied to a portion of the respective color pattern option (e.g., the avatar 11005 is updated with a facial painting 11118 having a color pattern corresponding to the pattern option 11114-3, as shown in fig. 11W) (e.g., the appearance of the avatar is changed to include the selected facial painting template (e.g., having a blue color and a vertical stripe pattern)). In some embodiments, changing the appearance of the avatar includes removing the first facial painting template from the avatar and applying the selected facial painting template to the avatar. In some embodiments, changing the appearance of the avatar includes updating a color pattern currently applied to the avatar to include the changed color (e.g., switching from a red color to a blue color) without changing the color pattern (e.g., without changing the design of a facial painting template applied to the avatar). Changing the appearance of the first feature to have an appearance generated based on the color pattern option, with the second color applied to a portion of the color pattern option while the second color is selected for the set of color options, provides visual feedback to the user that selection of a color in the respective set of color options results in a corresponding color change of the respective color pattern option. This provides an improved control scheme for creating an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, changing the appearance of the first feature (e.g., 11118) of the avatar (e.g., 11005) to have an appearance generated based on the respective color pattern option, with the second color applied to a portion of the respective color pattern option, includes: in accordance with a determination that the respective color pattern option is a second color pattern option (e.g., 11114-3) (e.g., an option with a pattern of vertical stripes), displaying the first feature of the avatar having a second color pattern (e.g., vertical stripes applied to the face of the avatar) corresponding to the second color pattern option (e.g., avatar 11005 updated with a facial painting 11118 having a color pattern corresponding to the pattern option 11114-3, as shown in fig. 11W) (e.g., the avatar is displayed with a color pattern that matches the selected second color pattern option (e.g., a pattern with vertical stripes)). In some embodiments, changing the appearance of the first feature of the avatar to have an appearance generated based on a respective color pattern option, with the second color applied to a portion of the respective color pattern option, includes: in accordance with a determination that the respective color pattern option is a fourth color pattern option (e.g., a camouflage pattern option) that is different from the second color pattern option, displaying the first feature of the avatar having a fourth color pattern (e.g., a camouflage pattern) that corresponds to the fourth color pattern option (e.g., the avatar is displayed having a color pattern that matches the selected fourth color pattern option (e.g., a camouflage pattern)). Displaying the first feature of the avatar having the fourth color pattern corresponding to the fourth color pattern option allows the user to switch the colors of the color patterns and then apply the changed color patterns to the avatar. This provides an improved control scheme for creating an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the plurality of color pattern options further comprises a third color pattern option different from the second color pattern option. In some embodiments, changing a portion of the second color pattern option from the respective color to the second color comprises changing a portion of the third color pattern option from the third color to the second color. Changing a portion of the third color pattern option from the third color to the second color allows the user to update the colors of the plurality of color pattern options by selecting a single color option from the set of color options when changing a portion of the second color pattern option from the respective color to the second color. This provides an improved control scheme for creating an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
Note that the details of the process described above with respect to method 1200 (e.g., fig. 12) also apply in a similar manner to the methods described below. For example, methods 700, 800, 1000, 1300, 1500, 1700, and 1800 optionally include one or more characteristics of the various methods described above with reference to method 1200. For example, avatars may be displayed and used in a user interface in a manner similar to that described above. For the sake of brevity, these details are not repeated in the following.
Fig. 13 is a flow diagram illustrating a method for displaying an avatar in an avatar-editing application user interface using an electronic device, in accordance with some embodiments. The method 1300 is performed at a device (e.g., 100, 300, 500, 600) having a display and an input device. Some operations in method 1300 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1300 provides an intuitive way for displaying an avatar in an avatar editing application user interface. The method reduces the cognitive burden of a user in displaying the avatar in the avatar editing application user interface, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to more quickly and efficiently display an avatar in an avatar-editing application user interface conserves power and increases the interval between battery charges.
The electronic device (e.g., 600) displays (1302) an avatar editing user interface (e.g., 1102) (e.g., a single interface screen) via the display device (e.g., 601).
The electronic device (e.g., 600) displays (1304) an avatar editing user interface that includes (e.g., includes simultaneous display of) an avatar (e.g., 11005) (e.g., an avatar such as, for example, an avatar modeled to represent a human character, and/or an avatar that may be created and/or customized by a user) that includes a corresponding avatar feature (e.g., 11014) (e.g., mouth, tongue, face) having a first pose (e.g., 11014-1) (e.g., a default pose or state of the feature; e.g., mouth closed; e.g., tongue inside mouth; e.g., a neutral facial expression). In some embodiments, an avatar modeled to represent a human includes customizable (e.g., selectable or configurable) avatar characteristics that generally correspond to physical characteristics of the human. For example, such avatars may include representations of people having various bodily, human features or characteristics (e.g., an elderly female with a dark skin color and having long, straight, brown hair). Such an avatar may also include a representation of a person having various non-human characteristics (e.g., cosmetic enhancements, hats, glasses, etc.) that are typically associated with the appearance of a human. In some embodiments, such an avatar will not include anthropomorphic constructs, such as a stylized animal, a stylized robot, or a stylized, generally inanimate or generally non-human object.
The electronic device (e.g., 600) displays (1306) the avatar editing user interface including an avatar option selection area (e.g., 11008) (e.g., a visually distinguished area including options selectable for modifying avatar features), the avatar option selection area including a plurality of avatar feature options (e.g., 11016, 11018) (e.g., 11016-1 to 11016-6; 11018-1 to 11018-3) (e.g., a set of candidates or candidate options) corresponding to a set of candidate values for characteristics (e.g., tooth style, tongue piercing type) (e.g., 11020, 11022) of an avatar feature (e.g., a selected avatar feature other than the respective avatar feature). In some embodiments, the avatar feature options are displayed representations of available modifications of the corresponding avatar feature, and the feature options include graphical depictions of the different options that can be selected to customize aspects or values of a particular avatar feature.
The electronic device detects (1308), via the input device, a request (e.g., 11012) to display an option for editing a respective avatar feature (e.g., a selection of a "mouth" affordance for modifying a feature of the respective avatar feature and/or scrolling through a set of options for modifying a feature of the respective avatar feature, such as a mouth of the avatar).
In response to detecting the request, the electronic device updates (1310) an avatar option selection area (e.g., 11008) to display avatar feature options (e.g., 11016, 11018) corresponding to a set of candidate values of characteristics (e.g., teeth, tongue piercings, etc.) of a respective avatar feature (e.g., avatar mouth 11014). In some embodiments, updating the avatar option selection area to display an avatar feature option corresponding to a set of candidate values for a characteristic of a respective avatar feature comprises: concurrently displaying (1312) a representation of the first option (e.g., 11018-1) (e.g., the avatar tongue and tongue piercing option) for the corresponding avatar feature (e.g., 11014) having the second pose (e.g., 11014-3) and displaying (1314) a representation of the second option (e.g., 11016-1) (e.g., the avatar teeth option) for the corresponding avatar feature having a third pose (e.g., 11014-2) different from the second pose (e.g., opening (e.g., pulling lips back) the avatar mouth in a smile pose to reveal the avatar teeth). Displaying respective representations of the first option and the second option having different poses enhances the displayed options so that the user can more easily see and accurately edit the characteristics of the respective avatar feature. This provides an improved control scheme for creating or editing an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
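As a non-authoritative sketch of this pose-per-option idea (the enum cases and pose names below are invented for illustration and are not taken from the patent), each option thumbnail can be rendered with whichever mouth pose makes the edited characteristic most visible:

```swift
// Sketch: tongue piercing options preview the mouth with the tongue extended,
// while teeth options preview the mouth in an open smile that reveals the teeth.
enum MouthCharacteristic {
    case teeth             // e.g. braces, gapped teeth, cuspids
    case tonguePiercing    // e.g. stud styles
}

enum MouthPose {
    case neutralClosed               // first pose
    case tongueOut                   // second pose: mouth open, tongue extended
    case openSmileRevealingTeeth     // third pose: lips pulled back to reveal the teeth
}

func previewPose(for characteristic: MouthCharacteristic) -> MouthPose {
    switch characteristic {
    case .tonguePiercing: return .tongueOut
    case .teeth:          return .openSmileRevealingTeeth
    }
}

let tongueOptionPose = previewPose(for: .tonguePiercing)   // .tongueOut
let teethOptionPose = previewPose(for: .teeth)             // .openSmileRevealingTeeth
```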
In some embodiments, updating the avatar option selection area (e.g., 11008) to display an avatar feature option corresponding to a set of candidate values for a characteristic of a respective avatar feature further comprises: displaying a plurality of representations of alternatives (e.g., 11018-2, 11018-3) for the first option (e.g., different avatar tongue and tongue piercing options) of the respective avatar feature, wherein the respective avatar feature has the second pose (e.g., 11014-3) in each of the plurality of representations of alternatives of the first option (e.g., the avatar mouth is open, the tongue is protruding). Displaying a plurality of representations of alternatives for a first option of a respective avatar feature, wherein the respective avatar feature has the second pose in each of the plurality of representations of alternatives for the first option, enhances the displayed appearance of the plurality of representations such that a user can more easily see and accurately edit different characteristics of the respective avatar feature based on the plurality of representations of alternatives for the first option. This provides an improved control scheme for creating or editing an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, updating the avatar option selection area to display an avatar feature option corresponding to a set of candidate values for a characteristic of a respective avatar feature further comprises: displaying a plurality of representations of alternatives (e.g., 11016-2 through 11016-6) for the second option (e.g., different avatar teeth options) of the respective avatar feature, wherein the respective avatar feature has the third pose (e.g., 11014-2), different from the second pose, in each of the plurality of representations of alternatives of the second option (e.g., the avatar mouth is open, the lips are pulled back in a smile pose to reveal the avatar teeth). Displaying a plurality of representations of alternatives for a second option of a respective avatar feature, wherein the respective avatar feature has the third pose in each of the plurality of representations of alternatives for the second option, enhances the displayed appearance of the plurality of representations such that a user can more easily see and accurately edit different characteristics of the respective avatar feature based on the plurality of representations of alternatives for the second option. This provides an improved control scheme for creating or editing an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the plurality of representations of alternatives of the first option and the plurality of representations of alternatives of the second option each have an appearance based on the appearance of the avatar (e.g., the appearance of the avatar is selected based on avatar editing input (e.g., selecting avatar characteristics such as skin tone, lipstick color, age, facial hair color and style, etc.), and the representations of alternatives each include an appearance that includes the appearance of the selected avatar (e.g., having the same selected avatar characteristics)).
In some embodiments, the first option corresponds to an option for editing a first portion (e.g., 11022) of a respective avatar feature (e.g., 11014) (e.g., a tongue portion of the avatar mouth). In some embodiments, the second pose (e.g., 11014-3) increases the degree of visibility (e.g., prominence, level of detail) of the first portion of the respective avatar feature. In some embodiments, the second option corresponds to an option for editing a second portion (e.g., 11020) of the respective avatar feature that is different from the first portion (e.g., a teeth portion of the avatar mouth). In some embodiments, the third pose (e.g., 11014-2) increases the degree of visibility of the second portion of the respective avatar feature. In some embodiments, increasing the visibility of a portion of a feature includes enlarging a view of the portion or displaying additional content of the portion so that the portion is more easily seen by a user. Increasing the degree of visibility of the first portion or the second portion allows a user to more easily view the respective portion to more accurately edit the features of the avatar corresponding to the respective portion.
In some embodiments, when the respective avatar feature (e.g., 11014) has the first pose (e.g., 11014-1), the first portion (e.g., 11022) has a first degree of visibility, and the degree of visibility of the first portion in the second pose (e.g., 11014-3) is greater than the first degree of visibility of the first portion in the first pose (e.g., the first portion has an increased degree of visibility in the second pose as compared to the first portion in the first pose). In some embodiments, the second portion (e.g., 11020) has a second degree of visibility when the corresponding avatar feature has the first pose, and the degree of visibility of the second portion in the third pose (e.g., 11014-2) is greater than the second degree of visibility of the second portion in the first pose (e.g., the second portion has an increased degree of visibility in the third pose as compared to the second portion in the first pose). In some embodiments, the pose of the respective avatar feature is not determined based on tracking the face of the user, and the first pose of the respective avatar feature is a neutral pose of the respective avatar feature or a predetermined pose of the respective avatar. In some embodiments, the first pose of the respective avatar feature is determined based on a pose of a face detected within a field of view of a camera of the electronic device.
In some embodiments, when the respective avatar feature has the third pose (e.g., 11014-2), the first portion (e.g., 11022) has a third degree of visibility, and the degree of visibility of the first portion in the second pose (e.g., 11014-3) is greater than the third degree of visibility of the first portion in the third pose (e.g., the first portion has an increased degree of visibility in the second pose as compared to the first portion in the third pose). In some embodiments, when the respective avatar feature has the second pose (e.g., 11014-3), the second portion (e.g., 11020) has a fourth degree of visibility, and the degree of visibility of the second portion in the third pose (e.g., 11014-2) is greater than the fourth degree of visibility of the second portion in the second pose (e.g., the second portion has an increased degree of visibility in the third pose as compared to the second portion in the second pose).
In some embodiments, the respective avatar feature is an avatar mouth (e.g., 11014). In some embodiments, the first option is a tongue piercing option (e.g., tongue piercing option 11018-1) for the avatar tongue (e.g., 11022). In some embodiments, the second pose (e.g., 11014-3) is a pose in which the avatar mouth is displayed with the avatar tongue extending out of the avatar mouth (e.g., the avatar mouth is open, the tongue is extended). In some embodiments, the avatar mouth has a first pose with no tongue extended, and the second pose shows the tongue extended. Displaying the avatar mouth in the pose with the avatar tongue extending from the avatar mouth enhances the displayed appearance of the avatar tongue, so that the user can more easily see and accurately edit the different tongue piercing characteristics of the avatar mouth based on the pose with the tongue extended. This provides an improved control scheme for creating or editing an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the respective avatar feature is an avatar mouth (e.g., 11014). In some embodiments, the second option is an avatar teeth option (e.g., 11016-1) (e.g., teeth options such as braces, gapped teeth (e.g., missing teeth, a gap between the front teeth), decorative braces, cuspids, single incisors, etc.). In some embodiments, the third pose (e.g., 11014-2) is a pose in which the avatar mouth is displayed with the avatar lips positioned to reveal the avatar teeth (e.g., 11020) (e.g., the avatar mouth is open, the lips are pulled back (e.g., in a smile pose) to reveal the avatar teeth). In some embodiments, the avatar mouth has a first pose with the lips in a closed position (e.g., a neutral mouth pose or smiling without revealing teeth), and the third pose shows the lips in a different position revealing the avatar teeth. Displaying the avatar mouth in the pose with the avatar lips positioned to reveal the avatar teeth enhances the displayed appearance of the avatar teeth so that the user can more easily see and accurately edit different tooth characteristics of the avatar mouth based on the pose with the lips positioned to reveal the avatar teeth. This provides an improved control scheme for creating or editing an avatar that may require less input to generate a custom appearance for the avatar than if a different control scheme were used (e.g., a control scheme that required manipulation of various control points to build the avatar). Reducing the number of inputs required to perform a task enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
It is noted that the details of the process described above with respect to method 1300 (e.g., fig. 13) also apply in a similar manner to the methods described above and below. For example, methods 700, 800, 1000, 1200, 1500, 1700, and 1800 optionally include one or more characteristics of the various methods described above with reference to method 1300. For example, avatars may be displayed and used in a user interface in a manner similar to that described above. For the sake of brevity, these details are not repeated in the following.
Fig. 14A-14E illustrate an exemplary user interface for displaying a virtual avatar, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the process in FIG. 15.
Fig. 14A-14E illustrate exemplary user inputs and corresponding changes to an exemplary virtual avatar (e.g., smile avatar 1405), which may be displayed on an electronic device, such as the electronic device 600 shown in fig. 6A and having a display 601 (in some cases, a touch-sensitive display) and a camera 602 (including at least an image sensor) capable of capturing data representing a portion of the light spectrum (e.g., visible light, infrared light, or ultraviolet light). In some embodiments, camera 602 includes multiple image sensors and/or other types of sensors. In addition to capturing data representing sensed light, in some embodiments, camera 602 can capture other types of data such as depth data. For example, in some embodiments, the camera 602 also captures depth data using speckle, time-of-flight, parallax, or focus based techniques. Image data captured by device 600 using camera 602 includes data corresponding to a portion of the light spectrum for a scene within the camera's field of view. Additionally, in some embodiments, the captured image data further includes depth data for the light data. In some other embodiments, the captured image data comprises data sufficient to determine or generate depth data for the data of the portion of the spectrum. In some embodiments, electronic device 600 includes one or more elements and/or features of devices 100, 300, and 500.
In some examples, the electronic device 600 includes a depth camera, such as an infrared camera, a thermal imaging camera, or a combination thereof. In some examples, the device further includes a light emitting device (e.g., a light projector), such as an IR floodlight, a structured light projector, or a combination thereof. Optionally, the light emitting device is used to illuminate the object during capturing of images by the visible light camera and the depth camera (e.g. an IR camera), and information from the depth camera and the visible light camera is used to determine depth maps of different parts of the object captured by the visible light camera. In some implementations, a depth map (e.g., a depth map image) includes information (e.g., values) related to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of the depth map, each depth pixel defines the location in the Z-axis of the viewpoint at which its corresponding two-dimensional pixel is located. In some examples, the depth map is composed of pixels, where each pixel is defined by a value (e.g., 0 to 255). For example, a "0" value represents a pixel located farthest from a viewpoint (e.g., camera) in a "three-dimensional" scene, and a "255" value represents a pixel located closest to the viewpoint in the "three-dimensional" scene. In other examples, the depth map represents a distance between an object in the scene and a plane of the viewpoint. In some implementations, the depth map includes information about the relative depths of various features of the object of interest in the field of view of the depth camera (e.g., the relative depths of the eyes, nose, mouth, ears of the user's face). In some embodiments, the depth map comprises information enabling the apparatus to determine a contour of the object of interest in the z-direction. In some implementations, the lighting effects described herein are displayed using parallax information from two cameras (e.g., two visible light cameras) for backward images, and depth information from a depth camera is used in conjunction with image data from the visible light cameras for forward images (e.g., self-portrait images). In some implementations, the same user interface is used when determining depth information using two visible light cameras and when determining depth information using depth cameras, thereby providing a consistent experience for the user even when using distinct techniques to determine information used in generating a lighting effect. In some embodiments, when the camera user interface is displayed with one of the lighting effects applied, the device detects selection of the camera switching affordance and switches from a forward-facing camera (e.g., a depth camera and a visible light camera) to a backward-facing camera (e.g., two visible light cameras spaced apart from each other) (and vice versa) while maintaining display of user interface controls for applying the lighting effect and replacing the field of view of the forward-facing camera with the field of view display of the backward-facing camera.
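The 0-255 depth convention described above can be illustrated with a small sketch (hypothetical types only; this is not an actual device API):

```swift
// Sketch of a depth map as described above: one 8-bit value per 2-D pixel,
// where 0 is the farthest point from the viewpoint and 255 the closest.
struct DepthMap {
    let width: Int
    let height: Int
    var pixels: [UInt8]                 // row-major depth values

    func depth(x: Int, y: Int) -> UInt8 {
        pixels[y * width + x]
    }
}

// A 2x2 map whose top row is far from the camera and bottom row is near.
let map = DepthMap(width: 2, height: 2, pixels: [0, 10, 250, 255])
let closest = map.pixels.max()          // 255: the point closest to the viewpoint
```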
In some embodiments, a virtual avatar (also referred to as an "avatar") is a user representation that can be graphically depicted. In some embodiments, the virtual avatar is non-realistic (e.g., cartoon). In some embodiments, the avatar is an anthropomorphic construct, such as an animated emoticon (e.g., a smiley face). In some embodiments, the virtual avatar includes an avatar face having one or more avatar features (e.g., avatar facial features). In some embodiments, the avatar features correspond to (e.g., map to) one or more physical features of the user's face, such that detected movement of the one or more physical features of the user affects the avatar features (e.g., affects a graphical representation of the features).
In some implementations, the user can manipulate characteristics or features of the virtual avatar using camera sensors (e.g., camera 602) (e.g., camera module 143, optical sensor 164, depth camera sensor 175). As the physical features (such as facial features) and position (such as head position, head rotation, or head tilt) of the user change, the electronic device detects these changes and, in response, modifies the displayed image of the virtual avatar (e.g., to reflect the changes to the physical features and position of the user). In some embodiments, changes to the user's physical characteristics and location are indicative of various expressions, emotions, contexts, moods, or other non-verbal communication. In some embodiments, the electronic device modifies the displayed image of the virtual avatar to represent these expressions, emotions, contexts, moods, or other non-verbal communication.
In some embodiments, the virtual avatar may be displayed in the context of various applications, such as, for example, messaging applications (e.g., messaging user interface 603, avatar creation user interface 632, avatar editing user interface 670), contact applications (e.g., real-time gesture interface 926, contact business card 976, etc.), camera applications, media viewing applications (e.g., photo applications or other applications for viewing media content such as photos or videos), and video communication applications. For example, in the context of a messaging application, the virtual avatar may be used to generate visual effects (e.g., multimedia content) including stickers, static virtual avatars, and animated virtual avatars, which may be communicated to a user of the messaging application. Examples of such embodiments are described above and shown in fig. 6A-6V. As another example, in the context of a messaging application, a contacts application, a camera application, a media viewing application, or a video communication application, a virtual avatar may be used to display various visual effects when displaying image data (e.g., image data captured by a camera (e.g., 602) of an electronic device (e.g., devices 100, 300, 500, 600)). Details for generating and sending visual effects (e.g., including virtual avatars) in messaging applications and displaying visual effects in messaging applications, camera applications, media viewing applications, and video communication applications are provided in U.S. patent publication No. us2018/0335927, which is hereby incorporated by reference for all purposes.
Fig. 14A to 14E show various detected states of the user 1401 and corresponding states of the smile avatar 1405. The representations (e.g., user states 1411-1 to 1411-19) on the left side of fig. 14A-14E represent a user detected by the electronic device while the user is within the field of view of one or more cameras (e.g., camera 602) (e.g., camera module 143, optical sensor 164, depth camera sensor 175) and/or other sensors (e.g., infrared sensors). In other words, the representation of the user is from the perspective of a camera (e.g., camera 602) (e.g., camera module 143, optical sensor 164, depth camera sensor 175), which in some embodiments may be located on the electronic device (e.g., devices 100, 300, 500, 600), and in other embodiments may be located separately from the electronic device (e.g., an external camera or sensor that communicates data to the electronic device). In some embodiments, the boundaries of the representations on the left side of fig. 14A-14E represent the boundaries of the field of view of one or more cameras (e.g., 602) (e.g., camera module 143, optical sensor 164, depth camera sensor 175) and/or other sensors (e.g., infrared sensors). In some implementations, the representation of the user is displayed as image data on a display (e.g., touch screen 112, display 340, display 450, display 504, display 601) of the electronic device. In some implementations, the image data is transmitted to an external electronic device for display. In some embodiments, the external electronic device includes one or more elements and/or features of devices 100, 300, 500, and 600. In some embodiments, the image data is collected and processed by the device (e.g., 100, 300, 500, 600), but is not immediately displayed on the device or transmitted to an external electronic device.
Fig. 14A-14E each show a virtual avatar (e.g., smile avatar 1405) in a state (e.g., avatar states 1412-1 through 1412-19) rendered (e.g., displayed after modification) based on a corresponding detected state (e.g., user states 1411-1 through 1411-19) of the user shown on the left side of the figure. In some embodiments, the virtual avatar is displayed from the perspective of the user viewing the virtual avatar. In some embodiments, the virtual avatar is displayed on a display (e.g., touchscreen 112, display 340, display 450, display 504, display 601) of the electronic device. In some embodiments, the virtual avatar is transmitted to an external electronic device for display (e.g., with or without image data of the user). In some embodiments, the representation on the right of fig. 14A-14E represents a location of the virtual avatar within a display area of a display (e.g., touchscreen 112, display 340, display 450, display 504, display 601) of the electronic device, and the boundary of the representation on the right of fig. 14A-14E represents a boundary of the display area that includes the virtual avatar. In some embodiments, the display area of the right representation corresponds to an avatar display area of an application user interface, such as a virtual avatar interface, a message composition area, or a message area (or a portion thereof) that may be presented, for example, in the context of a messaging application.
In some embodiments, the magnitude of the avatar feature (e.g., discrete elements of the avatar that may be discretely moved or modified relative to other avatar features) response corresponds to the magnitude of a change in a physical feature of the user (e.g., a detected or tracked feature, such as a user's muscle, muscle group, or anatomical feature, such as an eye). For example, in some embodiments, a magnitude of the change in the physical feature is determined from a potential range of motion of the physical feature, where the magnitude represents a relative position of the physical feature within the range of motion (e.g., a predicted or modeled range of motion) of the physical feature. In such embodiments, the magnitude of the response (e.g., change in position) of the avatar feature is similarly the relative position of the avatar feature within the range of motion of the avatar feature. In some embodiments, the magnitude of the change is determined based on a comparison or measurement (e.g., distance) of the starting and ending locations of the change in the physical characteristic. In such embodiments, the change in the physical characteristic may be translated into a modification to the first avatar characteristic by applying the change in the measured physical characteristic to the avatar characteristic (e.g., directly, or as a scaled or adjusted value).
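The magnitude mapping described above can be expressed as a simple normalization; the sketch below (with made-up ranges and names) maps the tracked feature's relative position within its range of motion to the same relative position within the avatar feature's range:

```swift
// Sketch: relative position within the user's range of motion is applied as the
// same relative position within the avatar feature's range of motion.
func avatarValue(forUserValue userValue: Double,
                 userRange: ClosedRange<Double>,
                 avatarRange: ClosedRange<Double>) -> Double {
    let span = userRange.upperBound - userRange.lowerBound
    let fraction = min(1, max(0, (userValue - userRange.lowerBound) / span))
    return avatarRange.lowerBound +
        fraction * (avatarRange.upperBound - avatarRange.lowerBound)
}

// Example: a mouth-corner displacement of 4 in a 0...10 user range moves the
// avatar's mouth corner 40% of the way through its own 0...25 range (i.e. 10).
let cornerLift = avatarValue(forUserValue: 4, userRange: 0...10, avatarRange: 0...25)
```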
In some embodiments, the modification to the avatar feature has a magnitude component and a direction component, the direction component of the modification to the avatar feature being based on the direction component of the change in the one or more physical features (e.g., facial features of the user's face) that the avatar feature reacts to. In some embodiments, the direction of the reaction of the avatar feature corresponds to (e.g., directly corresponds to or otherwise corresponds to) the relative direction of change of the physical feature of the user, wherein the relative direction of change of the physical feature is determined based on the direction of movement of the physical feature from an initial position (e.g., a neutral position, a rest position of the physical feature, or in some embodiments, a position of the physical feature initially detected by the device). In some embodiments, the direction of the reaction of the avatar feature directly corresponds to the relative direction of change of the physical feature (e.g., the physical feature moves upward, so the avatar feature also moves upward). In other embodiments, the direction of the reaction of the avatar feature is opposite to the relative direction of change of the physical feature (e.g., the avatar feature moves downward if the physical feature moves upward).
In some embodiments, the varying directional component of the avatar characteristic is mirrored relative to the varying directional component of the physical characteristic. For example, when a physical feature (e.g., the user's mouth) moves to the left, an avatar feature (e.g., avatar mouth) moves to the right. In some embodiments, the direction component of the change of the avatar characteristic is the same as the direction component of the change of the physical characteristic for movement along the vertical axis, and the direction component of the change of the avatar characteristic is a mirror image of the direction component of the change of the physical characteristic for movement along the horizontal axis, similar to the effect seen when looking into a mirror. In some embodiments, the change in the relative position of the physical feature (e.g., the user's iris or eyebrow) is in a direction determined from the neutral resting position of the physical feature. For example, in some embodiments, a neutral resting position of the user's iris is determined as a particular position relative to the user's eyeball periphery (e.g., a centered position).
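The mirroring rule above amounts to copying vertical movement and reversing horizontal movement; a minimal sketch (names assumed for illustration) follows:

```swift
// Sketch: the avatar moves like the user's mirror image, so horizontal movement
// is reversed while vertical movement is copied directly.
struct Movement { var dx: Double; var dy: Double }

func mirroredAvatarMovement(from user: Movement) -> Movement {
    Movement(dx: -user.dx,   // left/right mirrored, as when looking into a mirror
             dy: user.dy)    // up/down matches the user's movement
}

let userMovesLeftAndUp = Movement(dx: -1.0, dy: 0.5)
let avatarMove = mirroredAvatarMovement(from: userMovesLeftAndUp)   // dx: 1.0, dy: 0.5
```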
Fig. 14A-14E illustrate embodiments in which an electronic device displays a smiley avatar 1405 having the appearance of an animated emoticon that changes posture in response to a detected change in a facial feature of a user 1401. In particular, the smile avatar 1405 tracks the facial features of the user by detecting movement of the facial features of the user within the range of movement of the respective facial features. The electronic device modifies (e.g., changes the pose of) the corresponding avatar feature within the range of movement (e.g., the range of poses) of the respective avatar feature. When the avatar feature moves to a predetermined position within the range, the avatar feature abruptly changes (e.g., via an animated transition) to a predetermined pose that is maintained for a sub-portion of the range of movement of the corresponding facial feature. Thus, this sub-portion of the range of motion of the facial feature is considered to be a gesture that maps to (e.g., corresponds to) an abrupt change in the avatar feature. Additional movement of the facial features within the subsection does not result in a change in the location of the abrupt change in the avatar characteristic (e.g., the electronic device maintains the pose of the abrupt change in these locations of the facial features).
In some embodiments, a suddenly changing pose of an avatar feature may be said to have greater inertia than other avatar poses, such that once triggered, a greater degree of facial feature pose change is required to modify the pose away from the sudden change. In some embodiments, there is a degree of hysteresis associated with the abruptly changing pose of the avatar feature such that, once triggered, the feature is held (e.g., unmodified) for a period of time even after the change in the corresponding facial feature is detected (e.g., while the corresponding facial feature is still within the sub-portion of the facial pose corresponding to the abruptly changing avatar pose). In some embodiments, when a facial feature reaches a threshold for a sub-portion, changes in the facial feature near the threshold distort the pose corresponding to the sudden change in the avatar feature (e.g., as shown in avatar state 1412-5 of fig. 14B).
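The snap-and-hysteresis behavior described in the last two paragraphs can be sketched as follows; the thresholds and the notion of "mouth openness" are invented for illustration and are not values from the patent:

```swift
// Sketch: once the tracked mouth openness enters the snap range, the avatar
// mouth holds a predefined (emoji-style) pose; it only leaves that pose after
// moving clearly outside the range, so values near the threshold do not flicker.
enum MouthRender {
    case tracked(openness: Double)   // follows the user's mouth
    case snapped                     // predefined pose, e.g. a toothy smile
}

struct MouthSnapper {
    let snapRange: ClosedRange<Double> = 0.60...1.00   // sub-portion that triggers the snap
    let exitMargin = 0.05                              // hysteresis below the lower bound
    private(set) var isSnapped = false

    mutating func render(openness: Double) -> MouthRender {
        if isSnapped {
            if openness < snapRange.lowerBound - exitMargin { isSnapped = false }
        } else if snapRange.contains(openness) {
            isSnapped = true
        }
        return isSnapped ? .snapped : .tracked(openness: openness)
    }
}

var snapper = MouthSnapper()
_ = snapper.render(openness: 0.30)   // tracked
_ = snapper.render(openness: 0.70)   // snapped
_ = snapper.render(openness: 0.58)   // still snapped (hysteresis)
_ = snapper.render(openness: 0.40)   // tracked again
```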
The predefined gesture (e.g., a suddenly changing gesture) may be a gesture associated with an emoticon character (e.g., a static/non-animated emoticon) or a portion of the emoticon character, such as an emoticon character that the electronic device is configured to communicate to the user (e.g., via messaging user interface 603 in fig. 6A). Through the tracking and abrupt change behaviors described above, the user can control various features of the smiley avatar 1405 to track the user's corresponding facial features and capture one or more predefined gestures that match various emoticons. In some implementations, this can be done when one or more additional features of the smile avatar 1405 track facial features of the user. For example, the user can control the smiley avatar 1405 to have a mouth pose that matches a predefined emoticon smile, while the eyes of the smiley avatar 1405 track the user's eyes. These behaviors allow the user to control the smiley avatar 1405 to convey an expression using both an expression that matches the user's face and an expression that matches different emoticons in an avatar, which may generally be more expressive than a human facial feature. In the case where the change of the smile avatar 1405 is transmitted to another user, the expression of the smile avatar 1405 is less likely to be misinterpreted by the recipient user because the expression of the smile avatar 1405 may include recognized facial expressions of various existing and highly recognized emoticons.
Fig. 14A-14E demonstrate the behavior outlined above by showing various examples of electronic devices that modify a smile avatar 1405 in response to detecting a change in facial features of a user 1401. The user 1401 is shown in user states 1411-1 to 1411-19 and the smiley avatar 1405 is shown in avatar states 1412-1 to 1412-19. The smile avatar 1405 includes various avatar features including avatar eyes 1415, avatar mouth 1425, avatar eyebrows 1435, avatar head 1445, lighting effects 1455, and avatar teeth 1465. The user 1401 includes various detected physical features (e.g., facial features) including, for example, eyes 1410, mouth 1420, eyebrows 1430, and head 1440. In some embodiments, the tracked physical features may include other facial features, such as eyelids, lips, muscles, muscle groups, and the like. In some embodiments, device 600 ignores changes in particular facial features when such features' movements interfere with the display of a suddenly changing gesture or cause unnatural behavior. For example, changes in the user's jaw position are not used to modify the smiling avatar 1405 because jaw movements may cause the mouth of the smiling avatar 1405 to abruptly change to a different position in an unnatural manner.
FIG. 14A illustrates a transition of the smile avatar 1405 from a neutral smile avatar pose to a pose corresponding to an emoticon with a toothy smile. The smile avatar 1405 is shown as having four display states (1412-1, 1412-2, 1412-3, and 1412-4), each of the four display states of the smile avatar 1405 corresponding to the four detected states (1411-1, 1411-2, 1411-3, and 1411-4) of the user 1401, respectively. In the user state 1411-1, the electronic device detects that the user 1401 is in a neutral position with the user's head 1440 facing forward (e.g., not tilted or rotated), that the user's mouth 1420 is in a closed position with a slight smile, that the user's eyes 1410 are in a neutral face-forward position (e.g., the user's eyes are looking forward and not looking up, down, or to the side), and that the user's eyebrows 1430 are in a neutral resting position (e.g., the eyebrows 1430 are not raised or lowered). Based on the location of these detected features of the user 1401, the electronic device displays the smile avatar 1405 with a neutral pose in avatar state 1412-1, with avatar mouth 1425 closed and smiling slightly, avatar eyes 1415 in a neutral forward-facing position (e.g., the eyes have a neutral, rounded shape and look forward and not up, down, or sideways), avatar head 1445 facing forward (e.g., not rotating or tilting), and lighting effect 1455 centered on top of avatar head 1445.
In the user state 1411-2, the electronic device detects movement of the user's mouth 1420 to a slightly laughing position. The position of the user's mouth 1420 is not a position that triggers an abrupt change of the avatar mouth 1425 to a predefined gesture. Thus, the electronic device modifies the avatar mouth 1425 to mirror the movement of the user's mouth 1420 by slightly increasing the smile of the avatar mouth 1425, as shown by avatar state 1412-2, without abruptly changing to a predefined gesture. The electronic device does not detect any other user feature changes in the user state 1411-2 and therefore does not modify any other avatar features of the smiling face 1405 in the avatar state 1412-2.
In user states 1411-3, the electronic device detects movement of the user's mouth 1420 to a slightly open smile position and movement of the user's eyes 1410 to a slightly squinted position. The position of the user's mouth 1420 is the position that triggers the avatar mouth 1425 to abruptly change to a predefined gesture. Thus, the electronic device modifies the avatar mouth 1425 to abruptly change to a predefined gesture that is a laugh 1425-1 exposing the avatar teeth 1465, as shown by avatar state 1412-3. In some embodiments, this abrupt change behavior is displayed as an animated change in the avatar mouth 1425 transitioning from a pose in the avatar state 1412-2 to a pose in the avatar state 1412-3. The abrupt change gesture of the avatar mouth 1425-1 does not mirror the gesture of the user's mouth 1420 in the user state 1411-3. For example, the avatar mouth 1425-1 is a large open mouth that reveals the avatar teeth 1465, while the user's mouth 1420 is a slightly open position with a smile, with little or no teeth shown.
Although the electronic device detects a change in position of the user's eyes 1410 in the user state 1411-3, the eyes are still within a range of positions corresponding to a neutral eye pose 1415-1 of the avatar's eyes. Thus, the electronic device does not modify the avatar eyes 1415 in the avatar state 1412-3.
In the user state 1411-4, the electronic device detects that the user's mouth 1420 continues to move to a wider smile pose, but does not further modify the appearance of the smile avatar 1405 in the avatar state 1412-4. This is because the position of the user's mouth 1420 is still within the range of user mouth positions that triggers display of the avatar mouth 1425-1 with a laugh exposing the avatar teeth 1465, but is not at the edges of that range, which would cause distortion of the abruptly changed pose.
Fig. 14B illustrates the smile avatar 1405 transitioning from a pose corresponding to an emoticon with a laughing smile 1425-1 exposing teeth 1465 to a pose corresponding to an emoticon with a surprised expression. The smile avatar 1405 is shown as having four display states (1412-5, 1412-6, 1412-7, and 1412-8), each of the four display states of the smile avatar 1405 corresponding to the four detected states (1411-5, 1411-6, 1411-7, and 1411-8) of the user 1401, respectively. In the user state 1411-5, the electronic device detects a gesture in which the user's mouth 1420 continues to move to a position in which a corner 1420a of the user's mouth 1420 is pulled slightly upward. In response, the electronic device distorts the corner 1425a of the avatar mouth 1425-1 while still maintaining the general appearance of the abruptly changed laugh pose that exposes the avatar teeth 1465. This is because the detected movement of the corner 1420a is at the edge of the range of user mouth gestures that triggers the avatar mouth gesture 1425-1, which has a laugh that exposes the avatar teeth. Thus, detected movement of the user's mouth 1420 at the edge of the range causes the electronic device to distort the avatar mouth 1425-1 while still maintaining the abruptly changed pose. When the user's mouth 1420 moves beyond this range, the electronic device transitions the avatar mouth from the abruptly changed pose to a pose determined based on the position of the user's mouth 1420. Examples of such transitions are described below and shown in avatar state 1412-6.
In the user states 1411-6, the electronic device detects movement of the user's mouth 1420 to an open position, lateral displacement of the user's eyes 1410, and slight lifting of the user's eyebrows 1430. The detected movement of the user's mouth 1420 is to a gesture that is beyond the range of gestures corresponding to the abruptly changed gesture (e.g., 1425-1), and thus causes the electronic device to display movement of the avatar mouth 1425 from the predefined gesture in the avatar state 1412-5 to a gesture determined based on the position of the user's mouth 1420, as shown in avatar state 1412-6. The electronic device also modifies the avatar eyes 1415 to laterally shift to shifted eye positions 1415-3 to mirror the movement of the user's eyes 1410 in the user states 1411-6. The electronic device does not modify the smile avatar 1405 in response to the detected slight lift of the user's eyebrows 1430.
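A compact sketch of this edge-of-range behavior follows (Swift; the range, the distortion amount, and all names are assumptions chosen only to illustrate the idea): within the trigger range the abruptly changed mouth is kept and only slightly distorted near the edges, and outside the range the mouth returns to tracking the user.

```swift
enum AvatarMouth {
    case snappedLaugh(cornerDistortion: Double)   // abruptly changed pose, e.g. 1425-1
    case tracked(openness: Double)                // mirrors the user's mouth
}

/// `openness` is an assumed normalized measurement of the user's mouth.
/// The trigger range for the laughing pose is assumed to be [0.45, 0.85].
func mouthForOpenness(_ openness: Double) -> AvatarMouth {
    let range = 0.45...0.85
    guard range.contains(openness) else {
        return .tracked(openness: openness)       // beyond the range: track the user again
    }
    let mid = (range.lowerBound + range.upperBound) / 2
    let halfWidth = (range.upperBound - range.lowerBound) / 2
    let edgeness = abs(openness - mid) / halfWidth    // 0 at the center, 1 at either edge
    return .snappedLaugh(cornerDistortion: 0.2 * edgeness)
}
```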
In user states 1411-7, the electronic device detects movement of the user's eyes 1410 to a forward-looking, widened gesture. The widened pose of the user's eyes 1410 corresponds to a pose that triggers the extended eye pose 1415-2 of the avatar eyes 1415. In response, the electronic device modifies the avatar eyes 1415 to abruptly change to the extended eye pose 1415-2, as shown in the avatar state 1412-7. The electronic device also detects further widening of the user's mouth 1420 to a gesture that triggers an abrupt change in the display of the avatar mouth 1425 to the extended mouth gesture 1425-2. The electronic device also detects that the user's eyebrows 1430 are raised further to a gesture that triggers the display of the avatar eyebrows 1435 appearing on the smile avatar 1405, as shown in avatar state 1412-7. In some embodiments, the display of the avatar eyebrows 1435 is triggered by the abrupt change of the avatar eyes 1415 to the extended eye pose 1415-2, and is not responsive to the detected position of the user's eyebrows 1430. In some embodiments, the display of the avatar eyebrows 1435 is triggered by a detected combination of the abrupt change to the extended eye pose 1415-2 and the abrupt change to the extended mouth pose 1425-2, and is not responsive to the detected position of the user's eyebrows 1430.
In some embodiments, the electronic device displays the avatar eyebrows 1435 appearing in an animation in which the pair of eyebrows appear as holes that open and darken in the head 1445 of the smile avatar 1405 (starting thin and growing to the full size of the eyebrows). The animation is represented in avatar states 1412-7 and 1412-8. The electronic device displays the avatar eyebrows 1435 on the smile avatar 1405 until the gesture that triggered the display of the pair of eyebrows is no longer detected. In some implementations, if the electronic device does not detect that the pose is held for at least a predetermined amount of time (e.g., 0.5 seconds), the eyebrows remain on the smile avatar 1405 for the predetermined amount of time and then fade away. In some embodiments, the electronic device displays the avatar eyebrows 1435 disappearing in an animation in which the eyebrows shrink in size and fade in color so that the eyebrows appear to gradually recede into the smile avatar head 1445.
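As a rough sketch of the display-duration behavior just described (Swift; the class, its fields, and the frame-driven update are hypothetical and not taken from the patent), the eyebrows are shown while the trigger gesture is held and, if the gesture is released early, are kept for at least the 0.5-second minimum before fading away:

```swift
import Foundation

// Hypothetical controller implementing "display while the trigger gesture is held,
// but for at least a minimum amount of time".
final class EyebrowController {
    private(set) var eyebrowsVisible = false
    private var shownAt: Date?
    let minimumDisplayTime: TimeInterval = 0.5

    /// Call on every tracking frame with whether the trigger gesture is currently held.
    func update(triggerPoseHeld: Bool, now: Date = Date()) {
        if triggerPoseHeld {
            if !eyebrowsVisible {
                eyebrowsVisible = true   // animate in: grow from thin and darken
                shownAt = now
            }
        } else if eyebrowsVisible,
                  let shownAt = shownAt,
                  now.timeIntervalSince(shownAt) >= minimumDisplayTime {
            eyebrowsVisible = false      // animate out: shrink and fade into the head
            self.shownAt = nil
        }
        // If the gesture is released before the minimum time has elapsed, the eyebrows
        // stay visible and are removed on a later frame once the minimum time passes.
    }
}
```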
In the user state 1411-8, the electronic device detects that the user's eyes 1410 continue to move to an even more extended pose and maintains the display of the avatar eyes with the extended eye pose 1415-2 and the avatar eyebrows 1435 displayed on the avatar head 1445.
In some embodiments, some avatar features appear to be less responsive to changes in detected corresponding facial features than other avatar features. This is a result of the hysteresis effect discussed above. For example, in the avatar states 1412-1 through 1412-12, the avatar mouth 1425 responds to slight changes in the user's mouth 1420 (e.g., abruptly changing to a different pose and mirroring slight movements of the user's mouth), while the avatar eyes 1415 do not respond to the widening of the user's eyes 1410 until the user's eyes are detected in the user states 1411-7 to be in a widened pose. This is because the range of user feature gestures that triggers an avatar feature to abruptly change to a predefined gesture may have a different magnitude for different avatar features. Here, the range of user eye poses that triggers the neutral avatar eye pose 1415-1 shown in the avatar states 1412-1 through 1412-6 is larger than the individual ranges of user mouth poses that trigger different avatar mouth poses. Accordingly, a greater amount of detected movement of the user's eyes 1410 is required to change the avatar eyes 1415 from the neutral gesture to a different eye gesture, such as the extended eye gesture 1415-2 in the avatar state 1412-7 (or the squinting eye gesture 1415-3 described below with respect to the avatar state 1412-14), while moving the avatar mouth 1425 to a different gesture (e.g., abruptly changed gestures 1425-1 and 1425-2 and a non-abruptly changed gesture of the mouth 1425) requires less detected movement of the user's mouth 1420.
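The per-feature difference in responsiveness can be pictured as each feature having its own neutral or trigger range (Swift sketch below; the numeric ranges and names are assumptions chosen only to mirror the behavior described above, where the eyes have a wide neutral range and the mouth has several narrow ranges):

```swift
struct FeatureSnapRanges {
    // Wide neutral range for the eyes: they stay neutral until widened substantially.
    let neutralEyes: ClosedRange<Double> = 0.0...0.75
    // Narrower ranges for the mouth: small movements already change the mouth pose.
    let laughMouth: ClosedRange<Double> = 0.45...0.85
    let openMouth: ClosedRange<Double> = 0.85...1.0
}

let ranges = FeatureSnapRanges()

/// The eyes change to the widened pose only after a comparatively large movement...
func eyesWidened(_ eyeOpenness: Double) -> Bool {
    eyeOpenness > ranges.neutralEyes.upperBound
}

/// ...while the mouth changes pose after a comparatively small movement.
func mouthSnapsToLaugh(_ mouthOpenness: Double) -> Bool {
    ranges.laughMouth.contains(mouthOpenness)
}
```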
FIG. 14C illustrates a smiley avatar 1405 transitioning from a facial position with a surprised emoticon to a neutral position. The smile avatar 1405 is shown as having four display states (1412-9, 1412-10, 1412-11, and 1412-12), each of the four display states of the smile avatar 1405 corresponding to the four detected states (1411-9, 1411-10, 1411-11, and 1411-12) of the user 1401, respectively. In the user states 1411-9, the electronic device detects the user 1401 with the same facial pose as in the user states 1411-8. In response, the electronic device maintains a display of the smile avatar 1405 in the avatar state 1412-9 with the same appearance as in the avatar state 1412-8.
In the user states 1411-10, the electronic device detects that the user's eyes 1410 have narrowed slightly from the widened pose in the user states 1411-9, and the eyebrows 1430 have returned to a slightly raised position. The gesture of the user's eyes 1410 still triggers the display of the extended eye pose 1415-2, but the slightly raised position of the eyebrows 1430 no longer triggers the display of the avatar eyebrows 1435. Thus, the electronic device maintains the display of the avatar eyes with the extended eye pose 1415-2, but stops displaying the avatar eyebrows 1435. As shown in the avatar state 1412-10, the avatar eyebrows 1435 are displayed as fading away from the smile avatar 1405, as described above. The electronic device continues to detect the user's mouth 1420 in the widened gesture, which triggers display of the extended mouth gesture 1425-2. Thus, the electronic device maintains the display of the extended mouth gesture 1425-2 in the avatar state 1412-10.
In the user states 1411-11, the electronic device detects that the user's eyes 1410 return to a neutral posture, while the user's mouth 1420 remains in an expanded posture. In response, the electronic device modifies the avatar eyes 1415 to return to the neutral eye pose 1415-1 and maintains display of the extended mouth pose 1425-2 in the avatar state 1412-11.
In the user states 1411-12, the electronic device detects that the user 1401 has returned to a neutral gesture detected in the user state 1411-1. In response, the electronic device modifies the smile avatar 1405 in the avatar state 1412-12 to return to the neutral pose previously discussed with respect to the avatar state 1412-1.
Fig. 14D illustrates the smile avatar 1405 transitioning from a pose corresponding to an emoticon with a laugh 1425-1 exposing teeth 1465 to a pose corresponding to an emoticon with a kissing face (e.g., squinting eyes and puckered lips). The smile avatar 1405 is shown as having four display states (1412-13, 1412-14, 1412-15, and 1412-16), each of the four display states of the smile avatar 1405 corresponding to the four detected states (1411-13, 1411-14, 1411-15, and 1411-16) of the user 1401, respectively. In the user states 1411-13, the electronic device detects that the user 1401 has the same facial gesture as was detected in the user states 1411-4. In response, the electronic device displays the smile avatar 1405 in avatar state 1412-13 with the same pose as discussed with respect to avatar state 1412-4. That is, the smile avatar 1405 has the neutral eye pose 1415-1 and the mouth pose 1425-1, a laugh exposing the displayed avatar teeth 1465.
In user states 1411-14, the electronic device detects that the user's mouth 1420 has moved to a slightly larger smile gesture and the user's eyes 1410 have moved to a squinting position. The detected movement of the user's mouth 1420 is still within the mouth pose range corresponding to the smiling mouth pose 1425-1 exposing the avatar teeth. Thus, the electronic device continues to display the avatar mouth 1425 with the laugh mouth pose 1425-1 exposing the avatar teeth 1465. The detected squinting position of the user's eyes 1410 is within an eye gesture range that triggers an abrupt change of the avatar eyes 1415 to the predefined squinting eye gesture 1415-3. Accordingly, the electronic device modifies the avatar eyes to transition from the neutral eye gesture to the squinting eye gesture 1415-3, as shown by avatar state 1412-14.
In user states 1411-15, the electronic device detects that the user's eyes 1410 continue to move to a closed position and that the smile of the user's mouth 1420 diminishes as the user moves their mouth toward a puckering gesture. The detected position of the eyes is still within the gesture range that triggers the squinting eye gesture 1415-3. Thus, the electronic device continues to display the smiley avatar 1405 with the squinting eye gesture 1415-3, as shown in avatar state 1412-15. The detected movement of the user's mouth 1420 is not within a range of mouth positions corresponding to a predefined gesture of the avatar mouth 1425. Accordingly, the electronic device modifies the avatar mouth 1425 to a pose determined based on the detected position of the user's mouth 1420 in the user states 1411-15. Thus, the avatar mouth 1425 is shown with a reduced smile in the avatar state 1412-15.
In the user states 1411-16, the electronic device detects that the user's mouth 1420 continues to move to the puckering gesture, and the user's eyes 1410 remain in the same closed position as in the previous user state. The detected puckering gesture of the user's mouth 1420 corresponds to a predefined puckered gesture of the avatar mouth 1425. Thus, the electronic device displays the smile avatar 1405 with the puckered mouth gesture 1425-3 in avatar state 1412-16. Because the detected position of the user's eyes 1410 remains within the range of user eye gestures that triggers the squinting eye gesture 1415-3, the electronic device continues to display the avatar eyes 1415 with the squinting eye gesture 1415-3 in avatar state 1412-16.
FIG. 14E shows the smile avatar 1405 moving from the neutral pose to different orientations to show movement of the smile avatar head 1445 without movement of the lighting effect 1455. The lighting effect is a visual effect that gives the smile avatar 1405 the appearance of having a spherical shape. The smile avatar 1405 is shown in different orientations to show that the position of the lighting effect 1455 does not change with the movement of the smile avatar 1405. The smile avatar 1405 is shown as having three display states (1412-17, 1412-18, and 1412-19), each of the three display states of the smile avatar 1405 corresponding to the three detected states (1411-17, 1411-18, and 1411-19) of the user 1401, respectively. In user states 1411-17, the electronic device detects the user 1401 in the neutral gesture detected in user states 1411-1 and 1411-12. In response, the electronic device displays the smile avatar 1405 in avatar state 1412-17 with the same neutral pose displayed in avatar states 1412-1 and 1412-12. In the neutral pose, the lighting effect 1455 is displayed at a centered position on top of the avatar head 1445.
In the user states 1411-18, the electronic device detects rotation of the user's head 1440. In response, the electronic device modifies the smile avatar 1405 to rotate the avatar head 1445 to mirror the movement of the user's head 1440. The electronic device displays the lighting effect 1455 in a stationary position as the avatar head 1445 rotates, as shown by avatar states 1412-18.
In the user states 1411-19, the electronic device detects movement of the user's mouth 1420 to an open position and tilting of the user's head 1440. The open position of the user's mouth 1420 does not trigger an abrupt change in the pose of the avatar mouth 1425. In response, the electronic device modifies the smile avatar 1405 to have a tilted head position that mirrors the tilting movement of the user's head 1440, and to have an open position of the avatar mouth 1425 that mirrors the open position of the user's mouth 1420. The electronic device displays the lighting effect 1455 in a stationary position when the avatar head 1445 is tilted, as shown by avatar states 1412-19.
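One way to picture why the lighting effect stays put is to treat it as positioned in the avatar's view space rather than parented to the head geometry; the Swift sketch below only illustrates that design idea, with hypothetical types and values that are not taken from the patent.

```swift
struct SmileyAvatarScene {
    // The head transform mirrors the user's head and changes every frame.
    var headYaw: Double = 0      // radians
    var headTilt: Double = 0     // radians

    // The lighting effect is defined in view space, not attached to the head,
    // so rotating or tilting the head leaves it centered on top of the sphere.
    let lightingEffectPosition = (x: 0.0, y: 0.45)

    mutating func update(userHeadYaw: Double, userHeadTilt: Double) {
        headYaw = userHeadYaw
        headTilt = userHeadTilt
        // lightingEffectPosition is intentionally untouched here.
    }
}
```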
The foregoing embodiments illustrate a few examples of abrupt change behavior that may be displayed using the disclosed techniques. It should be understood that the abrupt change gestures are not limited to those discussed above, and that modifications to the virtual avatar may include additional gestures, different combinations of gestures for different avatar characteristics, and other behaviors described in detail below. For example, in some embodiments, abruptly changing the avatar characteristic may include replacing the display of the avatar characteristic with a different version of the avatar characteristic (e.g., replacing the displayed mouth without lips with a displayed mouth having puckered lips to achieve the puckered mouth gesture 1425-3 in the avatar state 1412-16). In some embodiments, features may be replaced in an animation, where a first feature gradually fades away and a second feature gradually appears on the avatar. In some embodiments, the change in the avatar characteristic may be driven by detecting a change in a user characteristic that does not anatomically correspond to the avatar characteristic. For example, a change in the user's mouth triggers a change in the pose of the avatar's eyes.
FIG. 15 is a flow diagram illustrating a method for displaying a virtual avatar using an electronic device, in accordance with some embodiments. The method 1500 is performed at a device (e.g., 100, 300, 500, 600) having a display and one or more cameras. Some operations in method 1500 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1500 provides an intuitive way for displaying a virtual avatar. The method reduces the cognitive burden of the user in displaying the virtual avatar, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to more quickly and efficiently display a virtual avatar conserves power and increases the interval between battery charges.
An electronic device (e.g., 600) displays (1502), via the display device (e.g., 601), a virtual avatar (e.g., 1405) (e.g., a user representation that can be graphically depicted, such as a smiley face) having one or more avatar features (e.g., avatar eyes 1415, avatar mouth 1425, avatar eyebrows 1435, avatar head 1445) (e.g., facial features (e.g., mouth, eyes); e.g., macroscopic features (e.g., head)) that change appearance in response to detected changes (e.g., orientation, translation) (e.g., changes in facial expression) in the pose of a face (e.g., of user 1401, including user eyes 1410, user mouth 1420, user eyebrows 1430, and user head 1440) detected in the field of view of one or more cameras (e.g., 602). In some embodiments, the one or more avatar features include a first avatar feature (e.g., avatar mouth 1425) having a first appearance (e.g., the avatar mouth 1425 in avatar states 1412-1, 1412-2, 1412-6, 1412-12, 1412-15, and 1412-17 through 1412-19) that is modified (e.g., without the avatar mouth 1425 abruptly changing to the appearance of a predefined gesture) in response to changes (e.g., orientation, translation) (e.g., changes in facial expression) in the facial pose detected in the field of view of the one or more cameras. In some embodiments, the avatar features correspond to (e.g., map to) one or more physical features of the user's face, such that detected movement of the one or more physical features of the user affects the avatar features (e.g., affects a graphical representation of the features). In some embodiments, the avatar feature corresponds anatomically to the physical feature (e.g., the avatar feature is modeled based on one or more of a location, movement characteristics, size, color, and/or shape of the physical feature) (e.g., the avatar feature and the physical feature are both eyebrows).
When a face is detected in the field of view of the one or more cameras (e.g., 602), which includes one or more detected facial features (e.g., a user's mouth 1420) (e.g., one or both of the user's eyes 1410), the electronic device detects (1504) movement of the one or more facial features of the face.
In response to (1506) detecting movement of the one or more facial features, in accordance with a determination that the detected movement of the one or more facial features is such that a first pose criterion is satisfied (e.g., a detected change in the user's face triggers an abrupt change of the avatar feature to a first pose (e.g., an open-mouthed smile 1425-1)) (e.g., the detected pose of the facial feature is within an acceptable range of poses corresponding to the first pose criterion, thereby triggering an abrupt change of the one or more avatar features to the first pose), the electronic device modifies (1508) the virtual avatar (e.g., 1405) to display the first avatar feature (e.g., avatar mouth 1425) having a second appearance (e.g., an avatar mouth 1425 with a laugh pose 1425-1) (e.g., an appearance triggered based on a detected change in the user's face that satisfies the first pose criterion (e.g., a pose corresponding to an open-mouthed smile, such as that of an emoticon with a laughing, open-mouthed smile)) that is modified (e.g., distorted within a first range of appearance values) in response to changes (e.g., orientation, translation) in the facial pose (e.g., changes in facial expression) detected in the field of view of the one or more cameras. In some embodiments, the one or more facial features are moved to a first position within a range of positions that satisfies the first gesture criteria, and in response, the electronic device modifies the first avatar feature to assume a gesture associated with the range of positions (e.g., a gesture representing a facial expression associated with the first gesture criteria). This is referred to herein as "abrupt change" to a gesture or position. Such abrupt change behavior makes it easier for the user to implement a particular gesture with the virtual avatar (or avatar characteristics), as the virtual avatar (or avatar characteristics) may be biased toward implementing a particular gesture (e.g., depending on the degree of the range of positions for satisfying the gesture criteria). This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence).
Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In response to (1506) detecting movement of the one or more facial features, in accordance with a determination that the detected movement of the one or more facial features causes a second gesture criterion different from the first gesture criterion to be satisfied (e.g., a detected change in the user's face triggers an abrupt change of the avatar characteristics to a second gesture (e.g., a sad mouth gesture) different from the first gesture) (e.g., a detected gesture of the facial features is within an acceptable range of gestures corresponding to the second gesture criterion, thereby triggering an abrupt change of the one or more avatar characteristics to the second gesture), the electronic device modifies (1510) the virtual avatar to display the first avatar feature having a third appearance (e.g., an extended mouth gesture 1425-2) different from the first appearance and the second appearance (e.g., an appearance triggered based on a detected change in the user's face that satisfies the second gesture criterion (e.g., a sad gesture of the user's mouth corresponding to a sad gesture of the avatar mouth, e.g., that of an emoticon with a sad facial expression)), the first avatar feature being modified (e.g., distorted within a first range of appearance values) in response to changes (e.g., orientation, translation) in the facial pose (e.g., changes in facial expression) detected in the field of view of the one or more cameras. In some embodiments, the one or more facial features are moved to a second position within a range of positions that satisfies the second gesture criteria, and in response, the electronic device modifies the first avatar feature to assume a gesture associated with the range of positions (e.g., a gesture representing a facial expression associated with the second gesture criteria). In some embodiments, the first avatar feature is anchored to a pose associated with the respective first or second pose criteria, but the first avatar feature is slightly modified from the pose in response to the detected change in the one or more facial features when the detected change is within a threshold amount of deviation from the pose of the one or more facial features that satisfies the respective first or second pose criteria. In some embodiments, when the detected change in the one or more facial features exceeds a threshold amount of deviation, the electronic device transitions the first avatar feature from a pose associated with the first/second pose criteria to a pose determined based on the location of the one or more facial features (e.g., based on the magnitude and/or direction of movement of the one or more facial features).
In some embodiments, modifying the virtual avatar to display the first avatar feature having the second appearance includes displaying a third avatar feature (e.g., avatar eyebrows 1435, avatar teeth 1465). In some embodiments, the third avatar feature (e.g., avatar eyebrows, avatar tongue, avatar teeth, avatar mouth, etc.) is not displayed (e.g., is not initially displayed and is later introduced to the display) until movement of one or more facial features is detected. The introduction of additional avatar features when modifying different avatar features provides a control scheme for manipulating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes) and provides desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, displaying the third avatar feature includes the third avatar feature gradually appearing on the virtual avatar (e.g., the avatar eyebrow 1435 gradually appears on the smile avatar 1405 in the avatar state 1412-7) (e.g., the third avatar feature appears and optical intensity increases to give the gradually appearing appearance on the avatar). In some embodiments, the third avatar feature is displayed to gradually appear on the virtual avatar in an animation effect in which the feature appears as a hole that opens in the virtual avatar (e.g., at the location of the third avatar feature), enlarges (enlarges) to the shape of the third avatar feature and increases in optical intensity (e.g., darkens in appearance). In some embodiments, the optical intensity of the subject is the degree of visual materialization of the subject. The optical intensity may be measured along a scale between a predefined minimum and a predefined maximum. In some embodiments, the optical intensity may be measured along a logarithmic scale. In some embodiments, the optical intensity may be perceived by the user as a transparent effect (or lack thereof) applied to the object. In some embodiments, the minimum optical intensity means that the object is not displayed at all (i.e., the object is not perceptible to the user), and the maximum optical intensity means that the object is displayed without any transparent effect (e.g., the object has been completely visually materialized and is perceptible to the user). In some embodiments, the optical intensity may be a visual difference between an object and an object behind it based on color, hue, color saturation, brightness, contrast, transparency, and any combination thereof. In some embodiments, the optical intensity of the third avatar feature increases as the third avatar feature appears on the avatar, and decreases as the third avatar feature fades away on the avatar. In some embodiments, the optical intensity increases or decreases smoothly. Displaying the third avatar characteristics, which gradually appear on the virtual avatar, provides a control scheme for manipulating and/or composing the virtual avatar on the display of the electronic device, wherein the system detects and processes input in the form of changes in the user's facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). 
Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, while displaying the third avatar feature, the electronic device detects movement of the one or more facial features. In some embodiments, in response to detecting the movement of the one or more facial features, in accordance with a determination that the detected movement of the one or more facial features is such that the first gesture criteria is no longer satisfied, the electronic device stops displaying the third avatar feature (e.g., the avatar eyebrow 1435 fades the smile avatar 1405 in the avatar states 1412-10) by fading the third avatar feature from the virtual avatar (e.g., the optical intensity of the third avatar feature decreases and disappears to give a faded appearance from the avatar). In some embodiments, the third avatar characteristic is displayed to fade away from the virtual avatar with an animation effect in which the characteristic appears to shrink in size and darken in appearance to give the avatar characteristic an appearance that gradually appears in the avatar. Displaying the third avatar feature that fades away on the virtual avatar provides a control scheme for manipulating and/or composing the virtual avatar on the display of the electronic device, wherein the system detects and processes input in the form of changes in the user's facial features (and the magnitude and/or direction of those changes) and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, displaying the third avatar characteristic includes maintaining the display of the third avatar characteristic (e.g., the avatar eyebrows 1435) for at least a predetermined period of time. In some embodiments, the third avatar characteristic is displayed for a duration that is the longer of: a) a predetermined period of time, or b) a duration for which the user maintains a gesture satisfying the respective first or second gesture criteria. For example, if the user does not hold the gesture for at least the predetermined period of time, the third avatar feature remains for the predetermined period of time (e.g., if the user quickly raises and lowers their eyebrows, the eyebrows of the avatar appear on the avatar and remain for the predetermined period of time, and then disappear). However, if the user continues to hold their eyebrows in the raised position for a period of time longer than the predetermined period of time, the avatar eyebrows continue to be displayed until the user stops holding the raised-eyebrow pose. Maintaining the display of the third avatar characteristic for at least the predetermined period of time prevents the third avatar characteristic from having a flickering, jittery appearance when the user fails to maintain, for at least the predetermined period of time, a gesture that triggers the display of the third avatar characteristic. The flickering appearance detracts from the visual appearance of the virtual avatar and the expressions that are intended to be conveyed using the virtual avatar. Thus, maintaining the display of the third avatar characteristic for at least a predetermined period of time enables the avatar to capture an expression that the user intended to convey with the virtual avatar, even if the user holds the expression too briefly for the third avatar characteristic to otherwise persist. This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, modifying the virtual avatar to display the first avatar feature having the second appearance includes displaying a first animation of the first avatar feature having a gradually diminishing first appearance (e.g., the avatar mouth with visually indistinguishable lips (e.g., the avatar mouth 1425 in the avatar state 1412-15)) and displaying a second animation of the first avatar feature having a gradually enhancing second appearance (e.g., the avatar mouth with puckered lips (e.g., the avatar mouth 1425 with the puckered pose 1425-3 in the avatar state 1412-16)), where the second animation is displayed simultaneously with at least a portion of the first animation (e.g., the first appearance and the second appearance cross-fade). Displaying a first animation of the first avatar characteristic having a gradually diminishing first appearance, and displaying a second animation of the first avatar characteristic having a gradually enhancing second appearance, wherein the second animation is displayed simultaneously with at least a portion of the first animation, provides a control scheme for manipulating and/or composing a virtual avatar on a display of an electronic device, wherein the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
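A bare-bones sketch of the simultaneous fade-out/fade-in described above follows (Swift; the duration, the type, and the linear easing are assumptions, not details from the patent):

```swift
import Foundation

// Hypothetical cross-fade between the outgoing and incoming versions of a feature.
struct CrossFade {
    let duration: TimeInterval = 0.25

    /// Opacities of the outgoing and incoming versions of the feature at `elapsed`
    /// seconds into the transition; both versions are partially visible at once.
    func opacities(elapsed: TimeInterval) -> (outgoing: Double, incoming: Double) {
        let t = min(max(elapsed / duration, 0), 1)
        return (outgoing: 1 - t, incoming: t)
    }
}
```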
In some embodiments, the movement of the one or more facial features includes movement of a fourth facial feature (e.g., the user's mouth 1420), and the first avatar feature is a representation of a facial feature (e.g., avatar eyes 1415) that is different from the fourth facial feature. In some embodiments, a pose change of the user's mouth triggers a change in the avatar appearance, wherein the avatar eyes transition from a first set of eyes in a first appearance to a different set of eyes in a second appearance. For example, when the first avatar feature is shown to have a first appearance and the user's mouth is in a neutral pose, the avatar eyes are shown to be in a neutral state. When the user moves their mouth to a laugh gesture, the avatar eyes transition to a second appearance with squinting eyes. Moving the first avatar feature based on detected changes in facial features other than the facial feature of which the first avatar feature is representative allows the device to modify the avatar to achieve different poses while tracking fewer facial features. This is because the device may change the avatar mouth and eyes in response to detecting only changes in the user's mouth. This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
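The cross-feature driving just described, in which one facial feature drives a non-corresponding avatar feature, can be sketched as follows; the threshold and names are hypothetical and only illustrate the idea.

```swift
// A detected smile on the user's mouth also switches the avatar eyes to a
// squinting variant, without using the user's eyes to drive that change.

enum EyePose { case neutral, squinting }

func avatarEyePose(userMouthSmileAmount: Double) -> EyePose {
    userMouthSmileAmount > 0.8 ? .squinting : .neutral
}
```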
In response to (1506) detecting movement of the one or more facial features, in accordance with a determination that the detected movement of the one or more facial features satisfies criteria for maintaining display of the first avatar feature having the first appearance (e.g., the avatar mouth 1425) (e.g., the detected change in the user's face does not trigger an abrupt change of the one or more avatar features to a particular pose), the electronic device modifies (1512) the virtual avatar (e.g., 1405) to display the first avatar feature having the first appearance that is modified in response to detected changes (e.g., orientation, translation) in the facial pose (e.g., changes in facial expression) (e.g., the avatar mouth 1425 in avatar states 1412-1, 1412-2, 1412-6, 1412-12, 1412-15, and 1412-17 through 1412-19) (e.g., when an avatar feature does not change abruptly to a particular pose, the feature is modified in response to detecting a change in the user's face (e.g., modified based on the direction and/or magnitude of movement of one or more facial features)). In some embodiments, the first avatar characteristic is an avatar mouth, the second appearance is a smile pose of the mouth, the third appearance is a sad pose of the mouth, and the first appearance of the mouth includes various positions of the mouth between the smile pose and the sad pose (e.g., a neutral mouth, a position between open and closed while speaking, etc.).
In some embodiments, the detected movement of the physical feature (e.g., a change in facial pose; movement of the facial feature) has both a direction component and a magnitude component. In some implementations, the modification to the avatar characteristics has both a magnitude component and a direction component. In some embodiments, the modified directional component of the avatar feature is based on a changed directional component of one or more physical features (e.g., facial features of the user's face) to which the avatar feature reacts. In some embodiments, the direction component of the change in the avatar characteristic is the same as the direction component of the change in the physical characteristic. For example, as a physical feature (e.g., mouth) moves downward, the corresponding (reacting) avatar feature (e.g., avatar mouth) moves downward. In some embodiments, the directional component of the change in the avatar characteristic is in a mirrored relationship with respect to the directional component of the change in the corresponding physical characteristic (e.g., the physical characteristic to which the avatar characteristic reacts to the change detected). For example, when a physical feature (e.g., the user's eye (e.g., iris)) moves to the left, a responsive avatar feature (e.g., the avatar's eye (e.g., iris)) moves to the right. In some embodiments, the direction component of the change of the avatar characteristic is the same as the direction component of the change of the corresponding physical characteristic for movement along the vertical axis, and the direction component of the change of the avatar characteristic is in a mirror image relationship with the direction component of the change of the corresponding physical characteristic for movement along the horizontal axis, similar to the effect seen when looking into a mirror. In some embodiments, the change in the relative position of the physical feature (e.g., the user's iris or eyebrow) is in a direction determined from the neutral resting position of the physical feature. In some embodiments, a neutral resting position of the user's iris is determined as a particular position relative to the user's eyeball periphery (e.g., a centered position). In some embodiments, the direction of the reaction of the avatar characteristic corresponds to (e.g., directly corresponds to or otherwise corresponds to) the relative direction of change of the physical characteristic of the user. In some embodiments, the relative direction of change of the physical feature is determined based on a direction of movement of the physical feature from a neutral rest position of the physical feature. In some embodiments, the direction of the reaction of the avatar feature directly corresponds to the relative direction of change of the physical feature (e.g., the physical feature moves upward, then the avatar feature moves upward). In some embodiments, the direction of the reaction of the avatar feature corresponds opposite the relative direction of change of the physical feature (e.g., the physical feature moves up, then the avatar feature moves down).
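The mirror-like direction mapping described in this paragraph can be sketched as follows (Swift; the simple 2D offset convention is an assumption, not taken from the patent): vertical movement is applied in the same direction, while horizontal movement is mirrored, as when looking into a mirror.

```swift
struct Offset { var x: Double; var y: Double }

/// Maps the displacement of a user facial feature to the displacement of the
/// corresponding avatar feature under the assumed mirror convention.
func avatarOffset(forUserFeatureOffset user: Offset) -> Offset {
    Offset(x: -user.x,   // mirrored along the horizontal axis
           y: user.y)    // same direction along the vertical axis
}
```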
In some embodiments, the magnitude of the change in the avatar characteristic corresponds to the magnitude of the change in the physical characteristic of the user. In some embodiments, the magnitude of the change in the physical feature is determined from a possible range of motion of the physical feature, where the magnitude represents the relative position of the physical feature within the range of motion (e.g., a predicted or modeled range of motion) of the physical feature. In such embodiments, the magnitude of the response (e.g., change in position) of the avatar feature is similarly the relative position of the avatar feature within the range of motion of the avatar feature. In some embodiments, the magnitude of the change is determined based on a comparison or measurement (e.g., distance) of the starting and ending locations of the change in the physical characteristic. In such embodiments, the change in the physical characteristic is translated into a modification to the first avatar characteristic by applying the change in the measured physical characteristic to the avatar characteristic (e.g., directly, or as a scaled or adjusted value).
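A minimal sketch of the magnitude mapping follows, under the assumption that both the facial feature and the avatar feature are described by normalized ranges of motion (names and ranges are illustrative only): the detected value is expressed as a relative position within the user's range of motion, and the same relative position is applied within the avatar feature's range.

```swift
/// Translates a detected facial-feature value into an avatar-feature value by
/// preserving the relative position within each feature's range of motion.
func avatarFeatureValue(userValue: Double,
                        userRange: ClosedRange<Double>,
                        avatarRange: ClosedRange<Double>) -> Double {
    let clamped = min(max(userValue, userRange.lowerBound), userRange.upperBound)
    let t = (clamped - userRange.lowerBound) / (userRange.upperBound - userRange.lowerBound)
    return avatarRange.lowerBound + t * (avatarRange.upperBound - avatarRange.lowerBound)
}

// Example: a user eyebrow raised 6 mm within an assumed 0–10 mm range maps to
// 60% of the avatar eyebrow's travel.
let raisedAmount = avatarFeatureValue(userValue: 6, userRange: 0...10, avatarRange: 0...1) // 0.6
```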
In some embodiments, the one or more cameras include a depth camera (e.g., with a depth camera sensor 175). In some embodiments, the one or more cameras capture image data corresponding to depth data (e.g., the image data includes data captured by a visible light camera and a depth camera) (e.g., image data including depth aspects of a captured image or video (e.g., depth data independent of RGB data)), the depth data including depth data of an object positioned in a field of view of the depth camera (e.g., information about relative depth positioning of one or more portions of the object relative to other portions of the object and/or relative to other objects within the field of view of the one or more cameras). In some embodiments, the image data includes at least two components: the RGB components of the visual characteristics of the captured image are encoded, as well as depth data that encodes information about the relative spacing relationship of elements within the captured image (e.g., depth data encodes that the user is in the foreground and background elements are in the background as a tree behind the user). In some embodiments, the image data includes depth data without RGB components. In some implementations, the depth data is a depth map. In some implementations, a depth map (e.g., a depth map image) includes information (e.g., values) related to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of the depth map, each depth pixel defines the location in the Z-axis of the viewpoint at which its corresponding two-dimensional pixel is located. In some implementations, the depth map is composed of pixels, where each pixel is defined by a value (e.g., 0 to 255). For example, a "0" value represents a pixel located farthest from a viewpoint (e.g., camera) in a "three-dimensional" scene, and a "255" value represents a pixel located closest to the viewpoint in the "three-dimensional" scene. In other examples, the depth map represents a distance between an object in the scene and a plane of the viewpoint. In some implementations, the depth map includes information about the relative depths of various features of the object of interest in the field of view of the depth camera (e.g., the relative depths of the eyes, nose, mouth, ears of the user's face). In some embodiments, the depth map comprises information enabling the apparatus to determine a contour of the object of interest in the z-direction. In some implementations, the depth data has a first depth component (e.g., a first portion of the depth data that encodes a spatial location of an object in the camera display area; a plurality of depth pixels that form a discrete portion of the depth map, such as a foreground or a particular object) that includes a representation of the object in the camera display area. In some implementations, the depth data has a second depth component (e.g., a second portion of the depth data encoding a spatial location of the background in the camera display area; a plurality of depth pixels, such as the background, forming a discrete portion of the depth map), the second depth component being separate from the first depth component, the second depth aspect including a representation of the background in the camera display area. In some implementations, the first depth aspect and the second depth aspect are used to determine a spatial relationship between an object in a camera display area and a background in the camera display area. 
This spatial relationship can be used to distinguish objects from the background. This differentiation may be exploited, for example, to apply different visual effects (e.g., visual effects with depth components) to the object and the background. In some implementations, all regions of the image data that do not correspond to the first depth component (e.g., regions of the image data that are beyond the range of the depth camera) are segmented out of (e.g., excluded from) the depth map. In some implementations, the depth data is in the form of a depth map or a depth mask.
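To make the depth-map description concrete, here is a tiny sketch of one such representation (Swift; the 8-bit convention mirrors the 0-to-255 example above, while the threshold, layout, and names are assumptions):

```swift
// A depth map where each pixel holds a value from 0 (farthest from the viewpoint)
// to 255 (closest), and pixels closer than a threshold are treated as foreground.
struct DepthMap {
    let width: Int
    let height: Int
    let pixels: [UInt8]   // row-major, width * height values

    func isForeground(x: Int, y: Int, threshold: UInt8 = 128) -> Bool {
        pixels[y * width + x] >= threshold
    }
}
```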
In some embodiments, the detected movement of the one or more facial features includes movement of a first facial feature (e.g., the user's mouth 1420). In some embodiments, when the movement of the first facial feature is within a first range of possible first facial feature values (e.g., pose values of the user's mouth 1420 that do not cause the avatar mouth 1425 to abruptly change to a certain pose) based on a predetermined range of motion of the first facial feature (e.g., a range of motion expressed as a magnitude relative to an initial (e.g., stationary) value), the detected movement of the one or more facial features satisfies a criterion for maintaining the display of the first avatar feature having the first appearance (e.g., the avatar mouth 1425 has a position (e.g., a non-abruptly changing pose) that tracks the movement of the user's mouth 1420). In some embodiments, when the movement of the first facial feature is within a second range of possible first facial feature values that is different from the first range of possible first facial feature values (e.g., a range of values of the user's mouth 1420 that cause the avatar mouth 1425 to abruptly change to a predefined pose), the detected movement of the one or more facial features causes the first pose criterion to be satisfied. In some embodiments, modifying the first appearance of the first avatar feature (e.g., the avatar mouth 1425) in response to the detected change in facial pose in the field of view of the one or more cameras includes modifying the first appearance of the first avatar feature (e.g., moving the avatar mouth 1425 along a non-abrupt change pose) within a first range of appearance values (e.g., a range of positions of the first avatar feature) corresponding to a first range of possible first facial feature values. In some embodiments, modifying the virtual avatar to display the first avatar feature having a second appearance includes displaying the first avatar feature having a second appearance value within a second range of appearance values (e.g., values corresponding to a predefined abrupt change gesture (e.g., laugh gesture 1425-1) of the avatar mouth 1425), the second range of appearance values being different from the first range of appearance values and corresponding to a second range of possible first facial feature values. In some embodiments, the second appearance value range of the second appearance is a limited range of values so as to still associate the pose of the first avatar feature with the second appearance (e.g., such that a distortion of the second appearance (e.g., in response to a detected change in facial pose in the field of view of the camera) is still associated with the second appearance). In other words, the second range is limited to the range of locations that the user will still recognize or recognize as having the second appearance. For example, when the first avatar characteristic is an avatar mouth (e.g., 1425) and the second appearance is a smile pose (e.g., laugh pose 1425-1), the second appearance value range is a mouth smile pose range for an initial smile pose (e.g., a smile pose to which the mouth abruptly changes when the first pose criteria is satisfied) that is similar to the second appearance. For example, in the avatar state 1412-5, the avatar mouth 1425 is distorted at the corner 1425a, but the avatar mouth 1425 still maintains the abruptly changing laugh gesture 1425-1. 
This serves to anchor the avatar characteristics to the second appearance so that the user can more easily maintain the second appearance of the first avatar characteristics. This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in the user's facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch inputs on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, when the movement of the first facial feature (e.g., the user's mouth 1420) is within a third range of possible first facial feature values that is different from the first range of possible first facial feature values and the second range of possible first facial feature values (e.g., a range of user mouth gestures that causes the avatar mouth 1425 to abruptly change to a different predefined gesture (e.g., open-mouth gesture 1425-2)), the detected movement of the one or more facial features causes the second gesture criteria to be satisfied. In some embodiments, the electronic device modifying the virtual avatar to display the first avatar feature having the third appearance (e.g., the avatar mouth 1425) includes displaying the first avatar feature having a third appearance value within a third range of appearance values (e.g., a value of the avatar mouth 1425 corresponding to an abruptly changing mouth pose (e.g., the mouth-open pose 1425-2)), the third range of appearance values being different from the first range of appearance values and the second range of appearance values and corresponding to a third range of possible first facial feature values. In some embodiments, the modification to the third appearance is similarly limited to the range of appearance values of the third appearance, such that distortion of the third appearance (e.g., in response to a detected change in facial pose in the field of view of the camera) remains associated with the third appearance. In some embodiments, the range of appearance values for the third appearance (e.g., the range of sad mouth positions) is different from the range of appearance values for the second appearance and the range of appearance values for the first appearance. This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
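The range-based mapping in the preceding paragraphs can be illustrated with a brief sketch. The following Swift code is a simplified, hypothetical illustration (not taken from the disclosure): facial feature values in a central range are tracked directly, while values in outer ranges snap the avatar mouth to a predefined pose whose distortion is clamped to a limited sub-range so the pose remains recognizable. All numeric thresholds and names are assumptions.

```swift
/// Possible appearances of the avatar mouth in this sketch.
enum MouthAppearance {
    case tracking(Double)               // first appearance: follows the user's mouth
    case laugh(distortion: Double)      // second appearance: snapped laugh pose
    case openMouth(distortion: Double)  // third appearance: snapped open-mouth pose
}

/// Maps a normalized facial feature value (-1...1) to an avatar mouth appearance.
func avatarMouthAppearance(for mouthValue: Double) -> MouthAppearance {
    switch mouthValue {
    case 0.7...1.0:
        // Second range of facial feature values -> snapped laugh pose.
        // Distortion is clamped so the pose stays recognizable as a laugh.
        return .laugh(distortion: min((mouthValue - 0.7) / 0.3, 1.0) * 0.2)
    case (-1.0)...(-0.7):
        // Third range -> snapped open-mouth pose, again with limited distortion.
        return .openMouth(distortion: min((abs(mouthValue) - 0.7) / 0.3, 1.0) * 0.2)
    default:
        // First range -> the avatar mouth simply tracks the user's mouth.
        return .tracking(mouthValue)
    }
}

// Usage: a value of 0.85 snaps to the laugh pose with a small, bounded distortion.
let appearance = avatarMouthAppearance(for: 0.85)
```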
In some embodiments, the electronic device is configured to transmit (e.g., transmit in a messaging application) a first predefined emoticon (e.g., a smiley emoticon) and a second predefined emoticon (e.g., a sad emoticon). In some embodiments, the second appearance of the first avatar feature corresponds to (e.g., has the appearance of) the first predefined emoticon (e.g., an emoticon having an open smile) (e.g., the avatar's mouth responds to changes in the user's face with a slight animation to present the position of the mouth of the emoticon having an open smile) (e.g., the entire virtual avatar responds to changes in the user's face with a slight animation of the avatar's mouth, the avatar's eyes, and the avatar's head rotation to present the appearance of the emoticon having an open smile). In some embodiments, the third appearance of the first avatar feature corresponds to (e.g., has the appearance of) the second predefined emoticon (e.g., an emoticon having a sad expression) (e.g., the avatar's mouth responds to changes in the user's face with a slight animation to present the position of the mouth of the emoticon having the sad expression) (e.g., the entire virtual avatar responds to changes in the user's face with a slight animation of the avatar's mouth, avatar's eyes, and avatar's head rotation to present the appearance of the sad emoticon). Displaying the avatar features with appearances corresponding to different predefined emoticons allows the avatar to present facial expressions that are more easily recognized, because those appearances correspond to well-known emoticon characters. This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, when a first avatar feature (e.g., avatar mouth 1425) is displayed as having a second appearance (e.g., laugh gesture 1425-1), the electronic device detects a change in facial gestures in the field of view of the one or more cameras. In some embodiments, in response to detecting a change in facial pose in the field of view of the one or more cameras, in accordance with a determination that the detected change in facial pose in the field of view of the one or more cameras includes a second facial feature moving to a gesture outside of a first gesture range of the second facial feature (e.g., outside the gesture range that triggers laugh gesture 1425-1) (e.g., a gesture in which the second facial feature is outside of the gesture range associated with the second appearance), the electronic device modifies the first avatar feature to have a first appearance (e.g., the avatar mouth 1425 has a non-abruptly changing gesture, such as in avatar state 1412-6). In some embodiments, in response to detecting a change in facial pose in the field of view of the one or more cameras, in accordance with a determination that the detected change in facial pose in the field of view of the one or more cameras includes the second facial feature moving to a pose within the first pose range of the second facial feature (e.g., the second facial feature has a pose within the pose range associated with the second appearance), the electronic device maintains display of the first avatar feature having the second appearance (e.g., despite distortion at corner 1425a, the avatar mouth 1425 has a smile pose 1425-1 in the avatar state 1412-5) (e.g., slightly modifies the first avatar feature based on movement of the second facial feature) (e.g., forgoes modifying the first avatar feature based on movement of the second facial feature) (e.g., when the detected movement of the second facial feature causes the pose of the second facial feature to be within the pose range associated with the second appearance, the first avatar feature maintains the second appearance (e.g., an abrupt change pose), and the first avatar feature transitions to the first appearance (e.g., a non-abrupt change pose) when the detected movement of the second facial feature causes the pose of the second facial feature to be outside of the pose range associated with the second appearance). In some embodiments, requiring movement of the second facial feature (e.g., a facial feature corresponding to the avatar feature) to a gesture outside of the gesture range in order to modify the avatar feature to some appearance other than the second appearance serves as a hysteresis that anchors the avatar feature to the second appearance so that the user can more easily maintain the second appearance of the first avatar feature. Thus, a detected slight change in the user's facial features (e.g., effecting a gesture of the facial features within a range of gestures associated with the second appearance) does not change the avatar characteristics or change the position of the avatar characteristics to a position that is not recognized as the second appearance (e.g., the avatar characteristics are slightly distorted but still recognized as having the second appearance).
This behavior is used to bias the respective avatar characteristics to various gestures, making it easier for the user to achieve (e.g., via abrupt change behavior) and maintain (e.g., via hysteresis) avatar gestures, such as gestures that are common to communicating with other users (e.g., gestures corresponding to different emoticon expressions). The abrupt change/lag in pose may be applied on a single avatar feature basis (e.g., individually affecting the avatar feature, such as by abruptly changing a single avatar feature (e.g., mouth) to a different feature pose (e.g., a different emoticon mouth pose) without abruptly changing a different avatar feature (e.g., eye)), or to the entire virtual avatar (e.g., affecting the entire virtual avatar (e.g., multiple avatar features), such as by abruptly changing multiple avatar features to different emoticon facial expressions at the same time).
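The snap-and-hold (hysteresis) behavior described in the preceding paragraphs can be summarized with a small sketch. The following Swift code is a hypothetical illustration (type names and thresholds are assumptions, not values from the disclosure): an avatar feature snaps to a pose when the facial feature enters a trigger range and keeps the snapped pose while the facial feature remains inside a wider hold range.

```swift
/// Ranges defining the snap-and-hold behavior for one avatar feature.
struct SnappedPose {
    let enterRange: ClosedRange<Double>  // range that triggers the snap
    let holdRange: ClosedRange<Double>   // wider range that maintains the snap (hysteresis)
}

/// Tracks whether a single avatar feature is currently snapped to its predefined pose.
final class AvatarFeature {
    private(set) var isSnapped = false
    let pose: SnappedPose

    init(pose: SnappedPose) { self.pose = pose }

    /// Updates the snapped state from the latest facial feature value.
    func update(facialValue: Double) {
        if isSnapped {
            // Stay snapped until the facial feature leaves the hold range.
            isSnapped = pose.holdRange.contains(facialValue)
        } else {
            // Snap only when the facial feature enters the (narrower) trigger range.
            isSnapped = pose.enterRange.contains(facialValue)
        }
    }
}

// Usage: the laugh pose snaps at 0.8 but is held down to 0.6.
let mouth = AvatarFeature(pose: SnappedPose(enterRange: 0.8...1.0, holdRange: 0.6...1.0))
mouth.update(facialValue: 0.85)  // isSnapped == true
mouth.update(facialValue: 0.65)  // still snapped (hysteresis)
mouth.update(facialValue: 0.5)   // reverts to tracking the user's mouth
```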
In some embodiments, the one or more avatar features further include a second avatar feature (e.g., avatar eyes 1415) having a fourth appearance (e.g., a non-abrupt change pose (e.g., a squinting eye 1415 in avatar state 1412-6)), the fourth appearance being modified in response to a detected change in facial pose in the field of view of the one or more cameras.
In some embodiments, further in response to detecting a change in facial gestures in the field of view of the one or more cameras, in accordance with a determination that the detected movement of the one or more facial features causes the third gesture criteria to be satisfied, the electronic device modifies the virtual avatar to display a second avatar feature (e.g., avatar eyes 1415) having a fifth appearance (e.g., an abrupt change gesture (e.g., squinting gesture 1415-3)) different from a fourth appearance (e.g., a non-abrupt change gesture), the fifth appearance modified in response to the detected change in facial gestures in the field of view of the one or more cameras. In some embodiments, further in response to detecting a change in facial pose in the field of view of the one or more cameras, in accordance with a determination that the detected movement of the one or more facial features satisfies criteria for maintaining display of a second avatar feature having a fourth appearance, the electronic device modifies the virtual avatar to display the second avatar feature by modifying the fourth appearance of the second avatar feature in response to the detected change in facial pose in the field of view of the one or more cameras (e.g., modifying avatar eyes 1415 based on movement of user eyes 1410 in avatar state 1412-6).
In some embodiments, when the second avatar feature is displayed as having a fifth appearance (e.g., a sudden change gesture (e.g., squinting gesture 1415-3)), the electronic device detects a second change in facial gestures in the field of view of the one or more cameras. In some embodiments, in response to detecting the second change in facial pose in the field of view of the one or more cameras, in accordance with a determination that the detected change in facial pose in the field of view of the one or more cameras includes the third facial feature (e.g., the user's mouth 1420) moving to a pose outside of a second pose range (e.g., a range of user's mouth poses that trigger an abrupt change pose of the avatar mouth 1425) (e.g., a range of third facial feature values) of the third facial feature that is different than the first pose range of the second facial feature (e.g., a pose range of the third facial feature that is greater or less than the first pose range of the second facial feature) (e.g., the third facial feature has a pose outside of the pose range associated with the fifth appearance), the electronic device modifies the second avatar feature to have a fourth appearance (e.g., the avatar eyes 1415 return to a non-abrupt change posture). In some embodiments, in response to detecting the second change in facial pose in the field of view of the one or more cameras, in accordance with a determination that the detected change in facial pose in the field of view of the one or more cameras includes the third facial feature moving to a pose within the second pose range of the third facial feature (e.g., the third facial feature has a pose within the pose range associated with the fifth appearance), the electronic device maintains display of the second avatar feature having the fifth appearance (e.g., avatar eyes 1415 remain in an abrupt change pose). For example, the user's eyes 1410 slightly squint in the user states 1411-3 and 1411-4, but the avatar eyes 1415 remain in a neutral avatar eye pose 1415-1 in the avatar states 1412-3 and 1412-4 (e.g., slightly modifying the second avatar feature based on movement of the third facial feature) (e.g., forgoing modifying the second avatar feature based on movement of the third facial feature) (e.g., when the detected movement of the third facial feature causes the pose of the third facial feature to be within a pose range associated with the fifth appearance, the second avatar feature maintains the fifth appearance (e.g., an abrupt change pose), and when the detected movement of the third facial feature causes the pose of the third facial feature to be outside of the pose range associated with the fifth appearance, the second avatar feature transitions to the fourth appearance (e.g., a non-abrupt change pose)). In some embodiments, the pose ranges of the second and third facial features have different ranges of values (e.g., the first range is shorter than the second range) to achieve different hysteresis for each respective avatar feature. Applying different ranges for different features allows different avatar characteristics to have different hysteresis ranges. This allows some features to be more biased towards a particular pose, and other features to more easily track the user's facial pose.
This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, the first avatar characteristic is an avatar mouth (e.g., 1425). In some embodiments, the second avatar characteristic is one or more avatar eyes (e.g., 1415). In some embodiments, the second range of poses is greater than the first range of poses. In some embodiments, apart from a few limited gestures in which the eyes have a squinting shape, the avatar eyes tend to be circular in shape, whereas the avatar mouth tends to abruptly change to a wider range of gestures (e.g., sadness, neutrality, smiling, laughing, toothy laughing, etc.). Thus, for such embodiments, the lag for the avatar eyes is greater than the lag for the avatar mouth, such that the avatar mouth may more easily transition to different gestures (including both predefined gestures (e.g., abrupt change gestures) and gestures based on the position of the user's mouth) while the avatar eyes tend to lean towards a circular shape or squinting shape (e.g., when the movement of the user's eyes is a substantial degree of movement, such as when squinting). This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
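As a brief illustration of giving different avatar features different amounts of lag, the following hypothetical Swift configuration gives the avatar eyes a wider hold band than the avatar mouth; the numeric ranges are assumptions for illustration only.

```swift
/// Per-feature hysteresis configuration: a narrow trigger range and a wider hold range.
struct FeatureHysteresis {
    let triggerRange: ClosedRange<Double>
    let holdRange: ClosedRange<Double>
}

// The eyes get a wider hold band (greater lag) than the mouth, so the eyes stay
// biased toward their snapped pose while the mouth transitions more freely.
let mouthHysteresis = FeatureHysteresis(triggerRange: 0.8...1.0, holdRange: 0.7...1.0)
let eyeHysteresis   = FeatureHysteresis(triggerRange: 0.8...1.0, holdRange: 0.4...1.0)

// A facial value of 0.5 keeps the eyes in their snapped (squinting) pose but lets
// the mouth return to tracking the user's mouth.
let value = 0.5
let eyesStaySnapped   = eyeHysteresis.holdRange.contains(value)    // true
let mouthStaysSnapped = mouthHysteresis.holdRange.contains(value)  // false
```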
In some embodiments, when the virtual avatar is displayed with a first orientation (e.g., avatar state 1412-17) (e.g., relative to a fixed virtual point positioned relative to the virtual avatar) (e.g., the fixed virtual point is a pivot point located at a center position of the virtual avatar), the electronic device displays a three-dimensional effect (e.g., light effect 1455) (e.g., a light effect, such as glare, that gives the impression that the virtual avatar has the shape of a three-dimensional object, such as a sphere) at a first location on the virtual avatar (e.g., a forehead area on the virtual avatar) (e.g., the first location on the virtual avatar has a first relationship with the fixed virtual point). In some embodiments, the electronic device detects a change in orientation of a face (e.g., user states 1411-18 or 1411-19) (e.g., rotational movement of the face) in the field of view of the one or more cameras. In some embodiments, in response to detecting a change in face orientation, the electronic device modifies the virtual avatar based on the detected change in face orientation (e.g., avatar states 1412-18 or 1412-19) (e.g., rotates the virtual avatar based on a rotation of the face). In some embodiments, the virtual avatar is a spherical shape (e.g., a smiley face), and modifying the virtual avatar based on the change in facial orientation includes rotating the face of the avatar about a pivot point located at a central position of the virtual avatar (as opposed to a pivot point located at a base of the virtual avatar, such as a neck region). In some embodiments, modifying the virtual avatar based on the detected change in facial orientation includes changing the orientation of one or more features of the avatar (e.g., facial features such as eyes 1415, eyebrows, and/or mouth 1425) by a respective amount determined based on the magnitude of the detected change in facial orientation (e.g., the head of the avatar rotates based on rotation of the face) (e.g., the head of the avatar rotates 5, 10, 15, 25, or 40 degrees (e.g., the avatar looks to the left) in response to 5, 10, 15, 25, or 40 degrees of rotation of the user's face), while changing the orientation of the three-dimensional effect by less than the respective amount (e.g., forgoing rotation of the three-dimensional effect). In some embodiments, changing the orientation of the three-dimensional effect by less than the respective amount includes displaying the three-dimensional effect at a location on the virtual avatar (e.g., at the side of the avatar head) that no longer has the first relationship with the fixed virtual point (e.g., the location of the three-dimensional effect remains fixed with respect to the fixed virtual point while the face of the avatar rotates (e.g., the three-dimensional effect does not rotate with the face of the avatar)). Changing the orientation of the one or more avatar features while changing the orientation of the three-dimensional effect by a lesser amount causes the virtual avatar to have the appearance of a three-dimensional shape that dynamically changes within the environment (e.g., turns and rotates while still maintaining a three-dimensional shape such as a sphere).
This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
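One way to picture the fixed highlight described above is sketched below in Swift (using Apple's simd types); it is an illustrative assumption of how such behavior might be structured, not the disclosed implementation: facial features are rotated with the head, while the highlight direction stays anchored to the fixed virtual point.

```swift
import simd

/// A spherical avatar whose features rotate with the user's face while the
/// specular highlight (the "three-dimensional effect") stays fixed in space.
struct SphericalAvatar {
    var headYaw: Float = 0                            // rotates with the user's face
    let highlightDirection = SIMD3<Float>(0, 0.5, 1)  // fixed relative to the environment

    /// Facial features are expressed in head space, so they rotate with the head.
    func featurePosition(_ local: SIMD3<Float>) -> SIMD3<Float> {
        let rotation = simd_quatf(angle: headYaw, axis: SIMD3<Float>(0, 1, 0))
        return rotation.act(local)
    }

    /// The highlight stays anchored to the fixed virtual point, i.e., its orientation
    /// changes by less than the head's rotation (here, not at all).
    func highlightPosition(radius: Float) -> SIMD3<Float> {
        simd_normalize(highlightDirection) * radius
    }
}

var avatar = SphericalAvatar()
avatar.headYaw = .pi / 8          // the user turns; the eyes and mouth rotate with the head
let eye = avatar.featurePosition(SIMD3<Float>(0.3, 0.2, 0.9))
let glare = avatar.highlightPosition(radius: 1.0)  // unchanged by the head rotation
```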
In some embodiments, the one or more avatar features further include a fourth avatar feature (e.g., an avatar feature different from the first avatar feature) (e.g., an avatar eye 1415) having a sixth appearance (e.g., a non-abrupt change pose) that is modified in response to a detected change in facial pose in the field of view of the one or more cameras. In some embodiments, further in response to detecting the movement of the one or more facial features, in accordance with a determination that the detected movement of the one or more facial features causes the first gesture criteria to be satisfied, the electronic device modifies the virtual avatar to display a fourth avatar feature having a seventh appearance different from the sixth appearance (e.g., the first avatar feature (e.g., avatar mouth 1425) abruptly changes to the second appearance (e.g., 1425-1) and the avatar eyes (e.g., 1415) abruptly changes to the seventh appearance (e.g., squint gesture 1415-3)), the seventh appearance being modified in response to the detected change in facial gestures in the field of view of the one or more cameras. In some embodiments, further in response to detecting the movement of the one or more facial features, in accordance with a determination that the detected movement of the one or more facial features causes the second pose criteria to be satisfied, the electronic device modifies the virtual avatar to display a fourth avatar feature having an eighth appearance different from the sixth appearance and the seventh appearance (e.g., the first avatar feature (e.g., avatar mouth 1425) abruptly changes to the third appearance (e.g., 1425-2) and the avatar eyes abruptly change to the eighth appearance (e.g., eye glaring pose 1415-2)), the eighth appearance being modified in response to the detected change in facial pose in the field of view of the one or more cameras. In some embodiments, further in response to detecting the movement of the one or more facial features, in accordance with a determination that the detected movement of the one or more facial features satisfies criteria for maintaining display of a fourth avatar feature having a sixth appearance (e.g., a change in the detected user's face does not trigger the first avatar feature or the fourth avatar feature to suddenly change to a particular pose), the electronic device modifies the virtual avatar to display the fourth avatar feature by modifying the sixth appearance of the fourth avatar feature in response to the detected change in facial pose in the field of view of the one or more cameras. The second avatar characteristic may abruptly change to a different pose independent of the first avatar characteristic. This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. 
This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
In some embodiments, the first avatar characteristic (e.g., avatar mouth 1425) includes a first state (e.g., 1425-2) (e.g., a state in which the avatar mouth suddenly changes to a sad gesture) and a second state (e.g., 1425-1) (e.g., a state in which the avatar mouth suddenly changes to a laugh gesture). In some embodiments, the state of the avatar feature corresponds to the appearance of the respective avatar feature (e.g., corresponds to the first appearance, the second appearance, and the third appearance of the first avatar feature). In some embodiments, the one or more avatar features further include a fifth avatar feature (e.g., avatar eyes 1415) modified in response to detecting a change in facial pose in the field of view of the one or more cameras, the fifth avatar feature including a third state (e.g., a state in which avatar eyes suddenly change to surprise gestures (e.g., 1415-2)) and a fourth state (e.g., a state in which avatar eyes suddenly change to squint gestures (e.g., 1415-3)).
In some embodiments, further in response to detecting the movement of the one or more facial features, in accordance with a determination that the first set of criteria is satisfied, the electronic device displays a first avatar feature having a first state (e.g., an avatar mouth suddenly changing to a sad gesture) and displays a fifth avatar feature having a third state (e.g., an avatar eyes suddenly changing to a surprised gesture (e.g., 1415-2)). In some embodiments, further in response to detecting the movement of the one or more facial features, in accordance with a determination that the second set of criteria is satisfied, the electronic device displays a first avatar feature having a second state (e.g., an avatar mouth suddenly changing to a laugh gesture) and displays a fifth avatar feature having a third state (e.g., an avatar eyes suddenly changing to a surprise gesture (e.g., 1415-2)). In some embodiments, further in response to detecting movement of the one or more facial features, in accordance with a determination that the third set of criteria is satisfied, the electronic device displays a first avatar feature having a first state (e.g., an avatar mouth suddenly changing to a sad gesture) and displays a fifth avatar feature having a fourth state (e.g., an avatar eye suddenly changing to a squint gesture (e.g., 1415-3)). In some embodiments, further in response to detecting movement of the one or more facial features, in accordance with a determination that the fourth set of criteria is satisfied, the electronic device displays a first avatar feature having a second state (e.g., an avatar mouth suddenly changing to a laugh gesture) and displays a fifth avatar feature having a fourth state (e.g., an avatar eyes suddenly changing to a squint gesture (e.g., 1415-3)). Different avatar characteristics may change abruptly or be modified to achieve different poses independent of each other, depending on the magnitude and direction (e.g., non-abruptly changing appearance) of the movement of the facial features. This provides a control scheme for operating and/or composing a virtual avatar on the display of an electronic device, where the system detects and processes input in the form of changes in user facial features (and the magnitude and/or direction of those changes), and provides the desired output in the form of the appearance of the virtual avatar through an iterative feedback loop, while eliminating the need for manual processing of the user interface (e.g., providing touch input on the display). This provides the user with improved visual feedback on how to manipulate the display to control and/or compose a virtual avatar using facial movements. This enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, the control scheme may require less input to generate or control the animation of the virtual avatar than if a different animation control scheme were used (e.g., a control scheme that required manipulation of separate control points for each frame of the animated sequence). 
Furthermore, this type of animation control may be done in real-time during a conversation, such as a text conversation or a video conversation, for example, whereas manual animation control of the avatar would have to be done before the conversation begins or after the conversation ends.
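A schematic sketch of the independent, per-feature state selection described in the combinations above follows; the criteria, state names, and thresholds are illustrative assumptions rather than details from the disclosure.

```swift
/// States a snapped avatar mouth and snapped avatar eyes can take in this sketch.
enum MouthState { case sad, laugh }
enum EyeState { case surprised, squinting }

/// Simplified facial pose values derived from the camera.
struct FacialPose {
    var mouthOpenness: Double   // e.g., -1 (frown) ... +1 (wide laugh)
    var eyeOpenness: Double     // e.g., 0 (closed) ... 1 (wide open)
}

/// Each feature's state is chosen from its own facial feature, so any combination
/// (sad+surprised, laugh+surprised, sad+squinting, laugh+squinting) can occur.
func avatarStates(for pose: FacialPose) -> (MouthState, EyeState) {
    let mouth: MouthState = pose.mouthOpenness >= 0 ? .laugh : .sad
    let eyes: EyeState = pose.eyeOpenness >= 0.5 ? .surprised : .squinting
    return (mouth, eyes)
}

// Usage: a laughing mouth combined with squinting eyes.
let states = avatarStates(for: FacialPose(mouthOpenness: 0.9, eyeOpenness: 0.2))
```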
In some embodiments, the first avatar characteristic is one or more avatar eyes (e.g., 1415). In some embodiments, the first state is a state (e.g., 1415-2) in which the one or more avatar eyes have a rounded eye appearance (e.g., eyes are wide open; surprised posture). In some embodiments, the second state is a state (e.g., 1415-3) in which the one or more avatar eyes have a squinting appearance (e.g., the eyes are squinting, such as when smiling; squinting gestures).
In some embodiments, the first avatar characteristic is an avatar mouth (e.g., 1425). In some embodiments, the first state is a state (e.g., 1425-1) in which the avatar's mouth has a first expression (e.g., frown, neutral (indifferent), smile, laugh, toothy laugh). In some embodiments, the second state is a state in which the avatar mouth has a second expression (e.g., 1425-2) different from the first expression (e.g., the avatar mouth is frowning in the first state and smiling in the second state) (e.g., the avatar mouth has a neutral pose in the first state (the pose of the avatar mouth when the avatar face has an indifferent expression)) (e.g., the avatar mouth has a toothy smile pose in the first state and a smile pose in the second state). In some embodiments, as the user moves their mouth, the avatar mouth transitions between different gestures (e.g., to different states). For example, when a user moves their mouth from a frown to a laugh, the avatar's mouth transitions between different mouth poses. For example, the avatar mouth starts with a frown pose, then transitions to a neutral pose, then to a smile pose, then to a laugh pose, and finally to a toothy laugh pose. In some embodiments, the avatar's mouth mirrors the user's mouth while moving between different mouth poses, and then abruptly changes to a predefined mouth pose when the user's mouth moves to the mouth positions that cause the avatar mouth to abruptly change to that pose.
In some embodiments, the first avatar characteristic is a set of avatar eyebrows (e.g., 1435). In some embodiments, the first state is a state in which the set of avatar eyebrows is displayed (e.g., avatar state 1412-8). In some embodiments, the second state is a state in which the set of avatar eyebrows is not displayed (e.g., avatar state 1412-6).
Note that the details of the process described above with respect to method 1500 (e.g., fig. 15) also apply in a similar manner to the method described above. For example, methods 700, 800, 1000, 1200, 1300, 1700, and 1800 optionally include one or more features of the various methods described above with reference to method 1500. For example, avatars may be displayed and used in a user interface in a manner similar to that described above. For the sake of brevity, these details are not repeated in the following.
Fig. 16A-16X illustrate exemplary devices and user interfaces for sharing contact information, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 17 and 18.
Fig. 16A to 16X show three different devices, each belonging to a respective user. The electronic device 600 is Johnny Appleseed's phone 600, which is configured to receive communications directed to phone number 415-555-1234. The electronic device 1602 is Jack Smith's phone 1602, which is configured to receive communications. The electronic device 1604 is Jane Smith's phone 1604, which is configured to receive communications directed to phone number 415-555-5555.
At fig. 16A, Johnny's phone 600 is displaying Johnny's address book 1610 as part of the address book application. Address book 1610 includes a contact entry 1610a for Jack that includes Jack's name ("JACK SMITH") and phone number. However, as shown on Johnny's phone 600 in fig. 16A, Johnny's address book 1610 does not include Jane's contact information (e.g., name, phone number, email).
At FIG. 16A, Jack's phone 1602 is displaying detailed information 1612 about Johnny's contact entry. The contact entry details 1612 include a representation 1632c of Johnny, a name 1612b of Johnny ("JONATHAN APPLESEED"), and a phone number 1612c of Johnny. Representation 1632c is a letter combination (monogram) representation (e.g., selected by Jack to represent Johnny using techniques such as those described with respect to figs. 9A-9AG).
In fig. 16A, Jane's phone 1604 is displaying Jane's address book 1614 as part of an address book application. The address book 1614 includes a contact entry 1614a for Jack that includes Jack's name ("JACK SMITH") and phone number. However, as shown on Jane's phone 1604 in fig. 16A, Jane's address book 1614 does not include Johnny's contact information (e.g., name, phone number).
At fig. 16B-16D, Johnny uses his phone 600 to configure name and photo sharing (of his own contact information) during the setup process. At fig. 16B, Johnny's phone 600 displays a settings user interface 1616a that includes an option 1616b for Johnny to select a name and photo for himself, and an option 1616c to later set up name and photo sharing. The device detects a tap 1660a on option 1616b to select a name and a photo. Accordingly, Johnny's phone 600 displays the select photo and name user interface 1616d of FIG. 16C.
In fig. 16C, Johnny's phone 600 has received user input from Johnny (e.g., via a virtual keyboard at phone 600) to create avatar 1616e representing Johnny, such as by using the techniques and user interfaces described above with respect to figs. 11A-11AD. At fig. 16C, Johnny has also updated his name such that his name 1616f is "JONATHAN" (e.g., instead of Johnny, John, Jon, etc.). In some embodiments, the device provides the user with a number of name options from which the user can select (e.g., "j. Johnny does not change his surname 1616g, which remains "APPLESEED". At fig. 16C, Johnny's phone 600 detects a tap 1660b on the affordance 1616h to select a user with whom Johnny's new contact information should be shared.
At FIG. 16D, Johnny's phone 600 displays a sharing user interface 1616i that includes a plurality of sharing options 1616j-1616l. Contact only option 1616j is an option that enables Johnny's phone 600 to automatically share Johnny's updated contact information with individuals (e.g., Jack, but not Jane) that have contact entries in Johnny's address book 1610. Everyone option 1616k is an option that enables Johnny's phone 600 to share Johnny's updated contact information with everyone, regardless of whether they have entries (e.g., Jack and Jane) in Johnny's address book 1610. Always ask option 1616l is an option that enables Johnny's phone 600 to prompt Johnny before sharing his updated contact information with anyone, regardless of whether they have entries (e.g., Jack and Jane) in Johnny's address book 1610.
At fig. 16D, Johnny's phone 600 has received a tap at contact only option 1616j, as indicated by check mark 1616m. At FIG. 16D, Johnny's phone 600 detects a tap 1660c on the completion affordance 1616n to select the contact only option 1616j and complete the name and photo sharing setup process. Upon detecting tap 1660c, Johnny's phone 600 is configured to share Johnny's contact information (e.g., avatar 1616e, name 1616f-1616g) with individuals with whom Johnny communicates.
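The sharing setting chosen in this setup flow can be summarized with a small sketch. The following Swift code is a hypothetical illustration (type names and the address-book check are assumptions): the audience setting determines whether updated contact information is shared automatically, shared only with people in the address book, or offered through a prompt.

```swift
/// The audience the user selected during name and photo sharing setup.
enum SharingAudience {
    case contactsOnly   // share automatically with people in the address book
    case everyone       // share automatically with anyone the user messages
    case alwaysAsk      // always prompt before sharing
}

enum SharingDecision { case shareAutomatically, promptUser }

/// Decides how to handle updated contact information for a given recipient.
func sharingDecision(audience: SharingAudience, recipientIsInAddressBook: Bool) -> SharingDecision {
    switch audience {
    case .everyone:
        return .shareAutomatically
    case .alwaysAsk:
        return .promptUser
    case .contactsOnly:
        return recipientIsInAddressBook ? .shareAutomatically : .promptUser
    }
}

// With "contact only" selected, a recipient in the address book (Jack) gets the update
// automatically, while sharing with someone not in it (Jane) requires a prompt.
let forJack = sharingDecision(audience: .contactsOnly, recipientIsInAddressBook: true)
let forJane = sharingDecision(audience: .contactsOnly, recipientIsInAddressBook: false)
```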
At fig. 16E, Johnny's phone 600 displays a home screen 1618 including application icons for some of the applications on Johnny's phone 600. Johnny's phone 600 detects a tap 1660d on the message icon 1618a. In response to detecting a tap 1660d on the message icon 1618a, Johnny's phone 600 displays a list of messaging conversations at fig. 16F as part of a conversation list user interface 1620a. The conversation list user interface 1620a includes a settings affordance 1620b, a new message affordance 1620c (for starting a new message conversation), and multiple representations of messaging threads, including a representation 1620d of a messaging thread with an instant message conversation between Jack and Johnny. At FIG. 16F, Johnny's phone 600 detects a tap 1660e on the representation 1620d of the messaging thread between Jack and Johnny.
At FIG. 16G, Johnny's phone 600 displays a conversation user interface 1622 in response to detecting a tap 1660e on representation 1620d. Dialog user interface 1622 includes Jack's name 1622b (e.g., as entered by Johnny and stored in Johnny's address book) and Jack's representation 1622c (e.g., as selected by Johnny). As shown in fig. 16G, Johnny has just sent (e.g., after setting up name and photo sharing) a message 1622a to Jack. Because Johnny has recently updated his contact information (his name and his photo) and because Jack has an entry in Johnny's address book (because Johnny has chosen to share his contact information with "contact only"), Johnny's phone 600 transmits Johnny's updated contact information to Jack's phone 1602. In some embodiments, the updated contact information is transmitted with message 1622a. In some embodiments, the updated contact information is transmitted at a predetermined time after transmission of message 1622a. In contrast, Johnny's phone 600 does not transmit Johnny's updated contact information to Jane's phone 1604 because Johnny has not yet sent a message to Jane (and also because Jane is not in Johnny's address book). In this example, because Johnny has updated his name and his photograph, Johnny's phone 600 transmits both the new name and the new photograph to Jack's phone 1602.
At fig. 16G, Jack's phone 1602 displays a conversation user interface 1632 (e.g., in response to receiving a request from Jack to display a messaging conversation). The conversation user interface 1632 includes Johnny's name 1632b (e.g., as entered by Jack) and Johnny's representation 1632c (e.g., the letter combination "JA" as selected by Jack), both of which are retrieved from Jack's address book entry 1612 for Johnny. As shown in fig. 16G, Jack's phone 1602 has received Johnny's message 1632a and updated contact information. Jack's phone 1602 displays both message 1632a (corresponding to message 1622a) and notification 1634. Notification 1634 includes Johnny's new photo 1634a (avatar) and Johnny's new name 1634b ("JOHNNY APPLESEED"). Although Jack's phone 1602 has received the information, Jack's address book has not been automatically updated to include the information.
At FIG. 16H, Jack's phone 1602 detects a tap 1670a on acceptance affordance 1634d to initiate the process of updating Jack's address book with Johnny's updated contact information. In contrast, clear affordance 1634c, when activated, causes notification 1634 to be cleared without updating Jack's address book with Johnny's updated contact information.
At FIG. 16I, Jack's phone 1602 displays menu 1636 in response to detecting a tap 1670a on acceptance affordance 1634 d. The menu 1636 includes a first option 1636a for updating Jack's address book entry 1612 for Johnny using both the updated photograph and name received from Johnny, a second option 1636b for updating Jack's address book entry 1612 for Johnny using only the updated photograph received from Johnny without using the name, a third option 1636c for updating Jack's address book entry 1612 for Johnny using only the updated name received from Johnny without using the photograph, and a fourth option 1636d for updating Jack's address book entry 1612 for Johnny without using the name or photograph. At fig. 16I, Jack's phone 1602 detects a tap 1670b on the first option 1636a to update Jack's address book entry 1612 for Johnny with both the photo and the name received from Johnny.
At fig. 16J, in response to detecting a tap 1670b on the first option 1636a to update Jack's address book entry 1612 for Johnny using both the photo and the name, Jack's phone 1602 updates Jack's address book entry 1612 for Johnny using both the name and the photo received from Johnny. This update is reflected in FIG. 16J as representation 1632c of Johnny now reflects the updated photograph received from Johnny and name 1632b ("JOHNNY APPLESEED") reflects the updated name received from Johnny.
At FIG. 16J, Jack's phone 1602 displays an auto-update menu 1638 in response to detecting a tap 1670b on the first option 1636a. The automatic update menu 1638 includes a first update option 1638a to configure Jack's phone 1602 to prompt Jack to approve future photo updates received from Johnny (e.g., not to automatically update Jack's address book entry 1612 for Johnny with future photos received from Johnny) and a second update option 1638b to configure device 1602 to automatically update Jack's address book entry 1612 for Johnny with future photos received from Johnny. In some embodiments, Jack's phone 1602 also provides a corresponding option to configure the device 1602 to automatically update Jack's address book entry 1612 for Johnny with future names received from Johnny. At fig. 16J, Jack's phone 1602 detects a tap 1670c on the second update option 1638b and, in response, configures the device 1602 to automatically update Jack's address book entry 1612 for Johnny using future photos received from Johnny (e.g., received with a received message as part of the updated contact information).
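The per-contact automatic-update preference selected here can be sketched as follows; the Swift types and names are illustrative assumptions, not the disclosed data model.

```swift
/// Whether future photos received from a contact update the entry automatically
/// or only after the user approves.
enum PhotoUpdatePolicy { case askBeforeUpdating, updateAutomatically }

struct AddressBookEntry {
    var name: String
    var photo: String            // placeholder for image data
    var photoUpdatePolicy: PhotoUpdatePolicy = .askBeforeUpdating
}

/// Applies a newly received photo according to the stored policy; returns true if
/// the caller should surface an approval prompt instead of updating immediately.
func apply(newPhoto: String, to entry: inout AddressBookEntry) -> Bool {
    switch entry.photoUpdatePolicy {
    case .updateAutomatically:
        entry.photo = newPhoto
        return false
    case .askBeforeUpdating:
        return true   // keep the old photo until the user accepts
    }
}

// Usage: Jack chose automatic updates, so a newly received photo replaces the old one.
var johnny = AddressBookEntry(name: "Johnny Appleseed", photo: "old-avatar",
                              photoUpdatePolicy: .updateAutomatically)
let needsPrompt = apply(newPhoto: "new-avatar", to: &johnny)   // false; photo replaced
```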
As shown in fig. 16K, Jack's phone 1602 has updated Jack's address book with updated contact information received from Johnny. Accordingly, Jack's phone 1602 displays the conversation user interface 1632, including Johnny's updated name "JOHNNY APPLESEED" 1632b (e.g., as received from Johnny) and an updated representation 1632c of Johnny (e.g., as received from Johnny).
At fig. 16L, Johnny has transmitted a second message 1622b to Jack. However, because Johnny's phone 600 has not received an update to Johnny's contact information since the last time Johnny's phone 600 transmitted the updated contact information to Jack (e.g., Johnny has not changed his photo or name), Johnny's phone 600 transmits second message 1622b to Jack without transmitting an update to Johnny's contact information. Thus, in some embodiments, updates to contact information are transmitted with a message to those recipients of the message when the contact information has been updated since the last transmission of the contact information to the recipients (rather than when the contact information has not been updated since the last transmission of the contact information to the recipients). Thus, at fig. 16L, Jack's phone 1602 displays the second message 1632b without displaying a notification of updated contact information (e.g., as compared to fig. 16G).
At fig. 16M, Johnny's phone 600 receives user input (e.g., via a displayed keyboard) and in response transmits a message 1640a to both Jack and Jane via a group message conversation 1640. Jack is an approved recipient of Johnny's contact information because Jack is in Johnny's address book. Johnny's phone 600, however, does not transmit updated contact information to Jack's phone 1602 because Johnny has not updated his contact information since the last time Johnny's phone 600 transmitted updated contact information to Jack, as previously described. In contrast, given Johnny's selection of contact only option 1616j, Jane is not an approved recipient of Johnny's contact information because Johnny's address book does not have an entry for Jane. Thus, while Johnny has updated his contact information, Johnny's phone 600 does not transmit the updated contact information to Jane's phone 1604.
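The transmission rule illustrated in this walkthrough can be condensed into a small sketch. The following Swift code is a hypothetical illustration (property names and the version counter are assumptions): a contact-information update accompanies an outgoing message only if the information has changed since it was last sent to that recipient, and only if the recipient is approved under the current sharing setting; otherwise the device may surface a sharing suggestion instead.

```swift
/// The sender's own contact information, versioned so updates can be detected.
struct ContactCard { var version: Int; var name: String; var photo: String }

/// What the sender's device knows about a message recipient.
struct Recipient {
    var isApproved: Bool          // e.g., in the address book under "contact only"
    var lastSentVersion: Int?     // nil if the card has never been sent to them
}

enum OutgoingAttachment { case none, contactCard, shareSuggestion }

/// Decides whether an outgoing message should carry the updated contact card,
/// trigger a "share with this recipient?" suggestion, or carry nothing extra.
func attachmentForMessage(card: ContactCard, to recipient: Recipient) -> OutgoingAttachment {
    let hasUpdate = recipient.lastSentVersion.map { $0 < card.version } ?? true
    guard hasUpdate else { return .none }                 // nothing new since last send
    return recipient.isApproved ? .contactCard : .shareSuggestion
}

// Jack (approved, already has the current version) gets no attachment;
// Jane (not approved) triggers a share suggestion instead of an automatic send.
let card = ContactCard(version: 2, name: "Johnny Appleseed", photo: "avatar")
let jack = Recipient(isApproved: true, lastSentVersion: 2)
let jane = Recipient(isApproved: false, lastSentVersion: nil)
let forJackAttachment = attachmentForMessage(card: card, to: jack)   // .none
let forJaneAttachment = attachmentForMessage(card: card, to: jane)   // .shareSuggestion
```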
As shown in fig. 16M, Johnny's phone 600 shows that message 1640a has been transmitted to Jack and 415-555-5555, which is Jane's phone number, as indicated by the name and number 1644a of the group message conversation 1640 and the photograph 1644b representing Jack and Jane. Further, in accordance with a determination that Johnny's updated contact information is available to send to Jane and that Jane is not an approved recipient of Johnny's contact information, Johnny's phone 600 displays a notification 1642 suggesting that Johnny's updated contact information be transmitted to Jane. The notification 1642 includes an indication 1642c of the suggested recipient of the contact information ("415-555-5555") and suggested shared contact information 1642a-1642b (photo and name of Johnny). Clear affordance 1642d, when activated, clears notification 1642 without transmitting Johnny's updated contact information to Jane. Sharing affordance 1642e, when activated, transmits Johnny's updated contact information to Jane.
At fig. 16M, Jack's phone 1602 displays (as part of the group message conversation 1650) the message 1650a received from Johnny, but does not display any notification about the updated contact information (because the updated contact information was not received). The group message conversation 1650 includes name indications 1654a and photographs 1654b of other conversation participants.
At fig. 16M, Jane's phone 1604 displays (as part of a group message conversation 1680) a message 1680a received from Johnny. The group message conversation 1680 also includes name/number indications 1684a and photos 1684b of other conversation participants. Because Jane has received the message from Johnny and because Jane has updated contact information to be shared with Johnny, Jane's phone 1604 displays a notification 1682 suggesting that Jane's updated contact information be transmitted to Johnny. Notification 1682 includes an indication 1682c of the suggested recipient of the contact information ("415-555-1234") and the suggested shared contact information 1682a-1682b (Jane's photo and name). Clear affordance 1682d, when activated, clears notification 1682 without transmitting Jane's contact information to Johnny. Sharing affordance 1682e, when activated, transmits Jane's contact information to Johnny.
At FIG. 16N, Johnny's phone 600 detects a tap 1660f on the sharing affordance 1642e. In response to detecting the tap 1660f on the sharing affordance 1642e, Johnny's phone 600 transmits Johnny's contact information to Jane. As shown in fig. 16N, in response to receiving Johnny's contact information, Jane's phone 1604 displays a second notification 1686 concurrently with notification 1682 and message 1680a (corresponding to message 1640a).
Notification 1686 includes Johnny's new photograph 1686a (corresponding to avatar 1616e of fig. 16C) and Johnny's name 1686b (corresponding to 1616f-1616g of fig. 16C), as received from Johnny. Although Jane's phone 1604 has received the new contact information, Jane's address book has not been automatically updated to include the information.
At fig. 16O, Johnny's phone 600 stops displaying notification 1642 because Johnny's contact information has been transferred to Jane. At FIG. 16O, Jane's phone 1604 detects tap 1690a on acceptance affordance 1686d to initiate a process for updating Jane's address book to include Johnny's contact information. In contrast, clear affordance 1686c, when activated, causes notification 1686 to be cleared, and does not initiate the process for updating Jane's address book to include Johnny's contact information.
At FIG. 16P, Jane's phone 1604 displays menu 1624 in response to detecting tap 1690a on acceptance affordance 1686 d. Menu 1624 includes a first option 1624a to update Jane's address book to add a new entry for Johnny (e.g., using the photo and/or name received from Johnny) and a second option 1624b to update an existing entry in Jane's address book using the photo and/or name received from Johnny (without adding a new entry in the address book). At fig. 16P, Jane's phone 1604 detects tap 1690b on the first option 1624a to update Jane's address book with contact information received from Johnny to add a new entry for Johnny.
At fig. 16Q, in response to detecting a tap 1690b on the first option 1624a, Jane's phone 1604 displays an automatic update menu 1626. The automatic update menu 1626 includes a first update option 1626a to configure Jane's phone 1604 to prompt Jane to approve future photo updates received from Johnny (e.g., not to automatically update Jane's address book entry for Johnny with future photos received from Johnny) and a second update option 1626b to configure the device 1604 to automatically update Jane's address book entry for Johnny with future photos received from Johnny. In some embodiments, Jane's phone 1604 also provides a corresponding option to configure Jane's phone 1604 to automatically update Jane's address book entry for Johnny with future names received from Johnny. At fig. 16Q, Jane's phone 1604 has added a new entry for Johnny in Jane's address book using the photo and name received from Johnny, as evidenced by the group message conversation 1680, which has been updated to include Johnny's name ("Johnny") in the indication 1684a and Johnny's photo as part of the photos 1684b.
In some embodiments, Jane's phone 1604 also provides (e.g., prior to displaying the automatic update menu 1626) a first option for updating Jane's address book entry for Johnny using both the photograph and the name received from Johnny, a second option for updating Jane's address book entry for Johnny using only the photograph received from Johnny without the name, and a third option for updating Jane's address book entry for Johnny using only the name received from Johnny without the photograph.
At fig. 16Q, Jane's phone 1604 detects tap 1690c on the first update option 1626a and, in response, configures the device 1604 to prompt Jane to approve future photo updates received from Johnny before updating Jane's address book entry for Johnny with the updated photo (e.g., without automatically updating Jane's address book entry for Johnny with the future photo received from Johnny). At fig. 16R, Jane's phone 1604 stops displaying notification 1686, but continues to display notification 1682 because Jane's phone 1604 has not received a tap on clear affordance 1682d (which, when activated, clears notification 1682 without transmitting Jane's contact information to Johnny) or sharing affordance 1682e (which, when activated, transmits Jane's contact information to Johnny).
At FIG. 16R, both Johnny and Jack begin the process of changing their names and/or photos. At fig. 16R, Johnny's phone 600 displays a messaging conversation list as part of a conversation list user interface 1620a that includes a settings affordance 1620b. Johnny's phone 600 detects a tap 1660g on the settings affordance 1620b. Similarly, at FIG. 16R, Jack's phone 1602 displays a messaging conversation list as part of a conversation list user interface 1620e that includes a settings affordance 1620f. Jack's phone 1602 detects a tap 1670d on the settings affordance 1620f.
At FIG. 16S, in response to the tap 1660g on the settings affordance 1620b, Johnny's phone 600 blurs the conversation list user interface 1620a and displays a menu with an option 1620g to change Johnny's name and/or photo. Similarly, at FIG. 16S, in response to the tap 1670d on the settings affordance 1620f, Jack's phone 1602 blurs the conversation list user interface 1620e and displays a menu with an option 1620h to change Jack's name and/or photo.
At fig. 16S, Johnny's phone 600 detects a tap 1660h on option 1620g to change Johnny's name and/or photo, and Jack's phone 1602 detects a tap 1670e on option 1620h to change Jack's name and/or photo.
At fig. 16T, Johnny's phone 600 displays a name/photo change user interface and detects a set of inputs for: (1) changing Johnny's name from "Johnny Appleseed" to "JOHN APPLESEED" 1616q, (2) changing Johnny's photo to a monkey photo 928 (e.g., corresponding to 928 of fig. 9F, using the techniques described above with respect to fig. 9E-9AG), and (3) selecting the everyone option 1616k (as compared to the contacts-only option 1616j). The everyone option 1616k is an option to enable Johnny's phone 600 to share Johnny's updated contact information (e.g., name, photo) with everyone, regardless of whether the person (e.g., Jack, Jane) has an entry in Johnny's address book 1610. At FIG. 16T, Johnny's phone 600 detects a tap 1660i on the completion affordance 1616o.
At FIG. 16T, Jack's phone 1602 similarly detects a set of inputs for: (1) changing Jack's photo to a new photo 1616t (e.g., using the techniques described above with respect to fig. 9E-9AG), and (2) selecting the everyone option 1616r. Jack does not change his name 1616u. The everyone option 1616r is an option to enable Jack's phone 1602 to share Jack's updated contact information (e.g., name, photo) with everyone, regardless of whether the person has an entry in Jack's address book. At FIG. 16T, Jack's phone 1602 detects a tap 1670f on the completion affordance 1616s.
As shown in fig. 16T-16U, Johnny's phone 600 does not transfer updated contact information to Jane or Jack (because Johnny has not sent a message to Jane or Jack after having updated his contact information) and Jack's phone 1602 does not transfer updated contact information to Jane or Johnny (because Jack has not sent a message to Jane or Johnny after having updated his contact information).
At FIG. 16U, Jane's phone 1604 detects tap 1690d on clear affordance 1682d and, in response, Jane's phone 1604 clears (e.g., stops displaying) notification 1682, as shown in FIG. 16V.
At fig. 16V, Johnny's phone 600 receives user input (e.g., via a displayed keyboard) and in response transmits a message 1640b to both Jack and Jane via the group message conversation 1640. Jack is an approved recipient of Johnny's contact information because Johnny has chosen to share his contact information with everyone, whether or not those people are in Johnny's address book (and in this example Jack is in Johnny's address book). Jane is also an approved recipient of Johnny's contact information because Johnny has chosen to share his contact information with everyone, whether or not those people are in Johnny's address book (and in this example Jane is not in Johnny's address book). Johnny's phone 600 transmits Johnny's updated contact information to Jack's phone 1602 and Jane's phone 1604 because Johnny has updated his contact information since the last time Johnny's phone 600 transmitted contact information to Jack and Jane. Johnny's phone 600 transmits Johnny's updated contact information to Jack's phone 1602 and Jane's phone 1604 along with the transmission of message 1640b.
As shown in fig. 16V, Johnny's phone 600 shows that message 1640b has been transmitted to Jack and 415-555-1234.
At fig. 16V, Jack's phone 1602 displays (as part of group message conversation 1650) the message 1650b (corresponding to message 1640b) received from Johnny, and simultaneously displays a notification 1646 about Johnny's updated name (as part of Johnny's updated contact information). Jack's phone 1602 has received both Johnny's updated photograph (the monkey photograph) and an updated name ("JOHN APPLESEED"). Because Jack's phone 1602 is configured to automatically update Jack's address book entry 1612 for Johnny using the photograph received from Johnny (e.g., based on tap 1670c in fig. 16J), Jack's address book entry 1612 for Johnny has been automatically updated using Johnny's new photograph (e.g., without requiring additional user input at Jack's phone 1602 after receiving the updated photograph), as reflected by the monkey photograph 1654b in fig. 16V. Because Jack's phone 1602 is not configured to automatically update Jack's address book entry 1612 for Johnny using the updated name received from Johnny, Jack's address book entry 1612 for Johnny has not been automatically updated to reflect Johnny's new name ("JOHN APPLESEED"), as evidenced by name indication 1654a (still including "Johnny"). At FIG. 16V, Jack's phone 1602 displays notification 1646 instead of automatically updating Johnny's name in Jack's address book. Notification 1646 includes Johnny's new photo 1646a and Johnny's new name ("JOHN APPLESEED") 1646b.
At fig. 16V, Jane's phone 1604 displays (as part of a group message conversation 1680) the message 1680b received from Johnny (corresponding to message 1640b) and, at the same time, displays a notification 1688 of Johnny's updated contact information (name and photo). Jane's phone 1604 has received both Johnny's updated photograph (the monkey photograph) and an updated name ("JOHN APPLESEED"). Because Jane's phone 1604 is not configured to automatically update Jane's address book entry for Johnny using the name or photo received from Johnny (e.g., based on tap 1690c in fig. 16Q), Johnny's new name or photo has not been used to automatically update Jane's address book entry for Johnny, as reflected by Johnny's old photo 1684b in fig. 16V (as compared to Johnny's new photo 1688a in notification 1688) and Johnny's original name ("Johnny") in name indication 1684a (as compared to Johnny's new name 1688b ("JOHN APPLESEED") in notification 1688). In fig. 16V, Jane's phone 1604 displays notification 1688 instead of automatically updating Johnny's name and photo in Jane's address book. Notification 1688 includes Johnny's new photograph 1688a and Johnny's new name ("JOHN APPLESEED") 1688b.
At fig. 16W, Jack's phone 1602 receives user input (e.g., via a displayed keyboard) and in response transmits a message 1650c to both Johnny and Jane via the group message conversation 1650. Johnny is an approved recipient of Jack's contact information because Jack has chosen to share his contact information with everyone, whether or not those people are in Jack's address book (and in this example, Johnny is in Jack's address book). Jane is also an approved recipient of Jack's contact information because Jack has chosen to share his contact information with everyone, whether or not those people are in Jack's address book (and in this example Jane is in Jack's address book). Jack's phone 1602 transmits Jack's updated photo to Johnny's phone 600 and Jane's phone 1604 as part of the updated contact information because Jack is transmitting a message and has updated his photo (although not his name) since the last time Jack's phone 1602 transmitted contact information to Johnny and Jane. Jack's phone 1602 transmits Jack's updated contact information (the new photo) along with the transmission of message 1650c to Johnny's phone 600 and Jane's phone 1604.
At fig. 16W, Johnny's phone 600 displays (as part of the group message conversation 1640) the message 1640c (corresponding to message 1650c) received from Jack and simultaneously displays a notification 1652 of Jack's updated photograph (as part of Jack's contact information). Johnny's phone 600 has received Jack's updated photo (photo 1652a), but Johnny's phone 600 has not received an updated name for Jack because Jack has not updated his name since the last time contact information was sent to Johnny. Because Johnny's phone 600 has not been configured to automatically update Johnny's address book entry for Jack using updated photos received from Jack, Johnny's address book entry for Jack has not been automatically updated using Jack's new photo, as reflected by Jack's old photo 1644b (without a hat) in fig. 16W (as compared to Jack's new photo with a hat 1652a). At fig. 16W, Johnny's phone 600 displays notification 1652 instead of automatically updating Jack's photo in Johnny's address book. Notification 1652 includes Jack's new photograph 1652a and identifies Jack by name 1652b.
At fig. 16W, Jack's phone 1602 detects a tap 1670g on acceptance affordance 1646c and, in response, updates Jack's address book entry for Johnny to include Johnny's updated name ("JOHN APPLESEED"), as reflected in name indication 1654a of group message conversation 1650 in fig. 16X.
At fig. 16W, Jane's phone 1604 displays (as part of the group message conversation 1680) the message 1680c (corresponding to message 1650c) received from Jack and simultaneously displays a group update notification 1656 that replaces notification 1688. Group update notification 1656 indicates that updated contact information has been received from multiple people (e.g., "2 people," in this case from both Johnny and Jack). For example, Jane's phone 1604 receives Jack's updated contact information (Jack's updated photograph 1656a) along with the received message 1680c.
At FIG. 16W, Jane's phone 1604 detects tap 1690e on group update notification 1656. At FIG. 16X, in response to detecting tap 1690e on group update notification 1656, Jane's phone 1604 replaces the display of group update notification 1656, and optionally the display of messages 1680a-1680b, with the display of multiple notifications 1688 and 1692. Notification 1692 includes Jack's updated contact information (the new photo with a hat) received with Jack's message 1680c (corresponding to 1650c): Jack's updated photograph 1692a, an indication 1692b of Jack, and an acceptance affordance 1692c that, when activated, initiates a process for updating Jane's address book with Jack's updated contact information.
Fig. 17 is a flow diagram illustrating a method for providing contact information using an electronic device, according to some embodiments. The method 1700 is performed at a device (e.g., 100, 300, 500, 600, 1602, and 1604) having one or more communication devices (e.g., wireless communication devices, such as cellular antennas or Wi-Fi antennas). In some examples, a user is associated with the electronic device. For example, the electronic device may store contact information for the user of the electronic device in a contact business card identified as belonging to the user of the device. Some operations in method 1700 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
Method 1700 provides an intuitive way for providing contact information, as described below. The method reduces the cognitive burden on the user to provide contact information, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling users to provide contact information more quickly and efficiently conserves power and increases the interval between battery charges.
In some embodiments, the electronic device receives (1702) a request (e.g., tap input on a "send" affordance in a messaging user interface) to transmit a first message (e.g., instant message, email, not including contact information of a user associated with the electronic device) to a group of contactable users (e.g., a group including only the first contactable user and not other users, a group including the first contactable user and a second contactable user). In some embodiments, the set of contactable users includes a first contactable user (which is different from a user of the electronic device).
In some embodiments, in response to (1704) receiving the request to transmit the first message, in accordance with a determination (1706) that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient (and that is not satisfied when not corresponding to an approved recipient): the electronic device transmits (1708) to the first contactable user via one or more communication devices: a first message (e.g., 1622a, 1622b, 1640a) and contact information (e.g., a graphical representation such as an avatar, a photograph, and/or a letter combination representing a user of the electronic device, and/or a name of the user of the electronic device) of a user associated with the electronic device. For example, the contact information is contact information for a user of the electronic device accessed (in a communication address database or application) from contact business cards identified as being users of the device.
In some embodiments, the contact information includes information corresponding to an avatar (e.g., a simulated three-dimensional avatar). In some embodiments, the information corresponding to the avatar includes gesture information identifying a gesture of the avatar (e.g., from a plurality of different gestures). The user interface for initiating the process for selecting an avatar to use as a representation is described in more detail above, such as with respect to fig. 9A-9AG.
In some embodiments, in response to (1704) receiving the request to transmit the first message, in accordance with (1710) determining that the first contactable user does not satisfy the set of sharing criteria: the electronic device transmits (1712) the first message (e.g., 1622b) to the first contactable user via one or more communication devices without transmitting contact information for a user associated with the electronic device.
In some embodiments, determining whether the first contactable user should receive contact information enables the device to selectively share contact information to only approved recipients, thereby increasing security. Selecting to transmit contact information to an approved recipient improves the security of the device by preventing sharing of contact information with unintended recipients. Further, selectively transmitting contact information to approved recipients while transmitting the first message to all recipients alleviates the user from providing a different set of inputs for transmitting the message and transmitting the contact information, thereby reducing the number of inputs required to perform the operation. Reducing the number of inputs enhances the operability of the device and makes the user-device interface more efficient (e.g., by reducing user errors in operating/interacting with the device, by reducing negative misidentifications of authentication), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
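Purely as an illustration of the branch described above (not code from the disclosed embodiments), a minimal Swift sketch might look as follows; all type and function names here (ContactInfo, OutgoingMessage, send, isApprovedRecipient, transmit) are hypothetical, and the check is reduced to the first sharing criterion (approved recipient), with the "updated since last transmission" criterion discussed further below:

import Foundation

struct ContactInfo {
    var name: String
    var avatarData: Data?     // avatar, photo, or monogram representing the device user
}

struct OutgoingMessage {
    let text: String
    var attachedContactInfo: ContactInfo?   // nil when no contact information is shared
}

func send(_ text: String,
          toGroup recipientIDs: [String],
          userInfo: ContactInfo,
          isApprovedRecipient: (String) -> Bool,
          transmit: (OutgoingMessage, String) -> Void) {
    for recipientID in recipientIDs {
        var message = OutgoingMessage(text: text, attachedContactInfo: nil)
        if isApprovedRecipient(recipientID) {
            // Sharing criteria satisfied: message and contact information are sent together (1708).
            message.attachedContactInfo = userInfo
        }
        // Otherwise the message is transmitted without the contact information (1712).
        transmit(message, recipientID)
    }
}

In this sketch a single user gesture (the send request) drives both transmissions, consistent with the reduced-input rationale above.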
In some embodiments, the request to transmit is a request to transmit the message using a primary source identifier (e.g., a unique identifier associated with a communication protocol or application used to transmit the communication, such as an email address, a phone number, or an account name) that identifies the source of the message (e.g., for that particular communication). For example, in traditional SMS, the primary source identifier may be the telephone number of the sending device. In some embodiments, a user may configure their device to use their email address as the primary source identifier for an instant messaging technology, so an instant message sent using the device will include the email address of the user as the source of the message (e.g., in the "from" field). In contrast, contact information is information, other than the primary source identifier, that is used to identify the user associated with the electronic device to a contactable user, regardless of whether the contact information is a unique identifier (e.g., a first and/or last name of the user, a set of acronyms of the user, a photo of the user, and/or a virtual avatar created or selected by the user). In some embodiments, after receiving the contact information, the receiving device associates the contact information with the primary source identifier. For example, the receiving device associates a name and graphical representation received as part of the contact information with a primary source identifier (e.g., the primary source identifier of the message in which the contact information was received).
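To make the distinction concrete, the following sketch (hypothetical field and variable names, assuming a greatly simplified envelope) separates the primary source identifier, which every message carries, from the optional contact-information payload, which the receiver then keys by that identifier:

import Foundation

struct ReceivedEnvelope {
    let primarySourceIdentifier: String   // "from" value, e.g. a phone number or email address
    let body: String
    let contactName: String?              // optional contact-information payload:
    let contactAvatarData: Data?          // a name and/or a graphical representation
}

// The receiving device associates any received contact information with the
// primary source identifier of the message in which it arrived.
var receivedNames: [String: String] = [:]
var receivedAvatars: [String: Data] = [:]

func record(_ envelope: ReceivedEnvelope) {
    if let name = envelope.contactName {
        receivedNames[envelope.primarySourceIdentifier] = name
    }
    if let avatar = envelope.contactAvatarData {
        receivedAvatars[envelope.primarySourceIdentifier] = avatar
    }
}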
In some embodiments, in response to receiving the request to transmit the first message, in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria, the electronic device concurrently displays the first message (e.g., 1640a, a message bubble in a messaging application showing content of the first message) and an indication (e.g., 1642) that contact information is not transmitted to the first contactable user. In some embodiments, the indication that contact information is not transmitted comprises an affordance that, when activated, initiates a process for transmitting contact information of a user associated with the electronic device to the first contactable user. Initiating a process for transmitting updated contact information to a first contactable user when the user activates an indication that contact information is not transmitted enables the user to transmit any new/updated contact information to the contactable user without accessing an unnecessary number of user interfaces and providing an unnecessary number of user inputs. Reducing the number of user inputs to perform functions enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs when operating/interacting with the device), which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
Providing a visual indication to a user that contact information for the device user has not been transmitted to a first contactable user provides feedback to the user that the first contactable user does not satisfy the set of sharing criteria (e.g., the first contactable user does not correspond to an approved recipient) and that the first contactable user has not received updated contact information (e.g., a name and/or a graphical representation). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the set of contactable users includes a second contactable user. In some embodiments, in response to receiving the request to transmit the first message, in accordance with a determination that the second contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the second contactable user corresponds to an approved recipient (and is not satisfied when not corresponding to an approved recipient): the electronic device transmits, via one or more communication devices, the first message and contact information (e.g., a graphical representation such as an avatar, a photograph, and/or a letter combination representing a user of the electronic device, and/or a name of the user of the electronic device) of a user associated with the electronic device to the second contactable user. For example, the contact information is contact information for a user of the electronic device accessed (in a communication address database or application) from contact business cards identified as being users of the device. In some examples, the device sends different contact information to the first contactable user than to the second contactable user. For example, if the first contactable user has already received an update to the name but not an update to the graphical representation of the device user, the device transmits the updated graphical representation (to the first contactable user) without re-transmitting the updated name, and if the second contactable user has received neither an update to the device user's name nor an update to the graphical representation, the device transmits both the updated name and the updated graphical representation (to the second contactable user). In some embodiments, in response to receiving the request to transmit the first message, in accordance with a determination that the second contactable user does not meet the set of sharing criteria: the electronic device transmits the first message (e.g., 1640a) to the second contactable user via one or more communication devices without transmitting contact information for a user associated with the electronic device. In some embodiments, the set of sharing criteria includes a recipient sharing criterion that is met when the respective contactable user is a recipient of the message. Thus, the updated contact information is not transmitted to contacts that are not in the set of contactable users to which the message is transmitted.
Determining which of a plurality of contactable users identified as recipients of a message should receive contact information enables a device to selectively share a single message with the plurality of contactable users while potentially limiting transmission of the contact information to only approved recipients. Selectively transmitting contact information to approved recipients while transmitting the first message to all recipients alleviates the user from providing a different set of inputs for transmitting the message and transmitting the contact information, thereby reducing the number of inputs required to perform the operation. Reducing the number of inputs enhances the operability of the device and makes the user-device interface more efficient (e.g., by reducing user errors in operating/interacting with the device, by reducing negative misidentifications of authentication), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after receiving the request to transmit the first message to the set of contactable users (and optionally after transmitting the first message and contact information of the user associated with the electronic device to the first contactable user), the electronic device receives a second request to transmit a second message (e.g., 1640a) to a second set of one or more contactable users, wherein the second set of one or more contactable users includes the first contactable user. In some embodiments, the set of contactable users is different from the second set of contactable users. In some embodiments, in response to receiving the second request to transmit the second message, in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a second sharing criterion that is satisfied when the contact information has been updated (modified) since the most recent transmission to the first contactable user (and that is not satisfied when the contact information has not been modified since the most recent transmission to the first contactable user): the electronic device transmits, via one or more communication devices, a second message and contact information (e.g., a graphical representation such as an avatar, a photograph, and/or a letter combination representing a user of the electronic device, and/or a name of the user of the electronic device) of a user associated with the electronic device to the first contactable user. For example, the contact information is contact information for a user of the electronic device accessed (in a communication address database or application) from contact business cards identified as being users of the device. In some embodiments, in response to receiving the request to transmit the second message, in accordance with a determination that the first contactable user does not meet the set of sharing criteria: the electronic device transmits the second message to the first contactable user via one or more communication devices without transmitting contact information of a user associated with the electronic device.
In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users, the electronic device receives user input to update contact information (e.g., a graphical representation such as an avatar, photograph, and/or letter combination representing a user of the electronic device, and/or a name of the user of the electronic device) of a user (also referred to as a device user) associated with the electronic device. In some embodiments, in response to receiving a user input to update contact information of a user associated with an electronic device, the electronic device updates the contact information of the user associated with the electronic device (e.g., stores the update to the contact information at the electronic device, transmits the update to the contact information to a remote server for storage), and does not transmit the contact information (e.g., an updated portion of the contact information) of the user associated with the electronic device to the first contactable user (or to any contactable user) in response to the user input to update the contact information. Thus, the electronic device receives input from the device user that updates the device user's contact information, but does not transmit the updated contact information to any contactable users. Instead, the device maintains a record of the updated contact information and whether the updated contact information has been sent to a particular contactable user (e.g., to the first contactable user). The updated contact information is maintained and transmitted (e.g., to the first contactable user) when the device user sends a message to the first contactable user. In some embodiments, to receive user input to update contact information, the device displays a user editing user interface (e.g., an interface for editing information of a user associated with the electronic device at the electronic device (e.g., for others to contact via phone, email, short message, etc.); a single interface screen), such as the contactable user editing user interface described above with respect to FIGS. 9A-9AG, but for the user associated with the electronic device and not for the contactable user.
In some embodiments, updating the device user's contact information without transmitting the contact information enables the update to be stored without using communication bandwidth (e.g., cellular bandwidth) and processing power to transmit the update. This is particularly useful when the device maintains a long list of contactable users, as the device can avoid sending updated contact information to contactable users with whom the device user is no longer in communication. Avoiding sending updated contact information reduces bandwidth usage and processor usage, thereby reducing power usage and extending the battery life of the device.
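One way to sketch this deferred propagation (again with hypothetical, simplified structures, not the disclosed implementation): editing the device user's own card only mutates local state and stamps the time of each change, and nothing is transmitted until a later send to an approved recipient notices a newer stamp:

import Foundation

// Simplified local store for the device user's own contact card.
final class MyContactCard {
    private(set) var name: String
    private(set) var avatarData: Data?
    private(set) var nameChangedAt = Date.distantPast
    private(set) var photoChangedAt = Date.distantPast

    init(name: String, avatarData: Data? = nil) {
        self.name = name
        self.avatarData = avatarData
    }

    // Editing the card is purely local: nothing is transmitted from here.
    func setName(_ newName: String) {
        guard newName != name else { return }
        name = newName
        nameChangedAt = Date()
    }

    func setAvatar(_ data: Data) {
        avatarData = data
        photoChangedAt = Date()
    }
}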
In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users (and optionally, prior to receiving user input to update contact information of a user associated with the electronic device), the electronic device provides a plurality of predetermined options (e.g., by displaying one or more user interfaces including affordances for selecting from among the predetermined options) to identify whether respective contactable users correspond to approved recipients. In some embodiments, the plurality of predetermined options includes one or more of: a first recipient option, whereby contactable users in a set of contactable users associated with the user of the electronic device (e.g., a contact list, such as a virtual address book including an entry for the first contactable user that includes a contact name and a communication method (e.g., phone number, email address) for the first contactable user) correspond to (e.g., are identified as, are set to) approved recipients, and contactable users not in the set of contactable users associated with the user of the electronic device do not correspond to approved recipients (e.g., an option that, when selected, configures the device using the selected relationship/correspondence); a second recipient option, whereby all contactable users (whether or not they are listed in the address book) correspond to approved recipients; and a third recipient option, whereby no contactable users (whether or not they are listed in the address book) correspond to approved recipients. Thus, the device user can specify in advance which contactable users should automatically receive updates to the device user's contact information when the device user sends them a message.
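The three predetermined options could be modeled as a simple setting, as in the following sketch (the option and function names are illustrative only, not identifiers from the disclosure):

enum ContactSharingAudience {
    case contactsOnly   // only people with an entry in the user's address book are approved recipients
    case everyone       // any recipient is approved, whether or not they are in the address book
    case noOne          // contact information is never shared automatically
}

func isApprovedRecipient(_ recipientID: String,
                         audience: ContactSharingAudience,
                         addressBookIDs: Set<String>) -> Bool {
    switch audience {
    case .contactsOnly: return addressBookIDs.contains(recipientID)
    case .everyone:     return true
    case .noOne:        return false
    }
}

In the figures described above, Johnny and Jack each select the everyone option, which is why Jane receives Johnny's contact information even though she is not in Johnny's address book.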
Providing a user with the ability to select which contactable users are automatically provided with the user's private contact information enables the user to securely control the dissemination of the private contact information. Providing features to securely control the dissemination of the private contact information enhances the security of the device by preventing private information from being transmitted to unintended contactable users.
In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users (and optionally, prior to receiving user input to update contact information of a user associated with the electronic device), the electronic device receives a set of one or more inputs (e.g., 1660a-1660c) that includes an input selecting a graphical object (e.g., during a setup process, in a user editing user interface such as the contactable user editing user interface described above with respect to fig. 9A-9AG, but for a user associated with the electronic device rather than for a contactable user) as a graphical representation of the user associated with the electronic device (such as an avatar, photograph, and/or letter combination that represents the user of the electronic device). In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users, in response to receiving a user input selecting the graphical representation, the electronic device updates contact information of a user associated with the electronic device (e.g., stores the update to the contact information at the electronic device, transmits the update to the contact information to a remote server for storage) to include the selected graphical representation (e.g., replaces a previous graphical representation of the device user with the selected graphical representation of the device user) without transmitting (e.g., to the first contactable user, to any contactable user) the contact information (e.g., an updated portion of the contact information) of the user associated with the electronic device.
In some embodiments, to receive user input to update contact information, the device displays a user editing user interface (e.g., an interface for editing information of a user associated with the electronic device at the electronic device (e.g., for others to contact via phone, email, short message, etc.); a single interface screen), such as the contactable user editing user interface described above with respect to FIGS. 9A-9AG, but for the user associated with the electronic device and not for the contactable user.
In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users (and optionally, prior to receiving user input to update contact information for a user associated with the electronic device), the electronic device accesses a name of a user associated with the electronic device from a set of contactable users associated with the user of the electronic device (e.g., entries in an address book corresponding to the user of the device). In some embodiments, the electronic device displays the user's name in an editable format (e.g., in an editable text field) prior to receiving the request to transmit the first message to the set of contactable users. In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users, the electronic device receives user input (e.g., modifying a name and a confirmation input (such as "save" or "confirm") during a setup process) to edit (or confirm) a name of a user associated with the electronic device. In some embodiments, instead of (or in addition to) the editable pre-populated name, the device provides the user with an option to select from among a plurality of predefined suggested names that are displayed simultaneously. In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users, in response to receiving the user input to edit the name, the electronic device updates contact information of a user associated with the electronic device (e.g., stores the update to the contact information at the electronic device, transmits the update to the contact information to a remote server for storage) to include the selected name (e.g., replaces a previous name of the device user with the selected name) without transmitting (e.g., to the first contactable user, to any contactable user) the contact information (e.g., an updated portion of the contact information) of the user associated with the electronic device. In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users, in response to receiving user input to edit a name, the electronic device provides contact information including the selected name (or otherwise makes the contact information available) to a plurality of applications of the electronic device (e.g., a phone application, an email application, an instant messaging application, a mapping application, a first party application provided by a manufacturer of the electronic device).
In some embodiments, prior to receiving the request to transmit the first message to the set of contactable users, the electronic device concurrently displays the first message (e.g., the first message in an input field of an instant message conversation as received from the user prior to transmission) and an affordance that, when selected, causes the device to display a user interface that includes one or more options for configuring whether the first contactable user corresponds to an approved recipient. In some embodiments, the electronic device provides the affordance such that the device user configures whether to automatically send the device user's updated contact information to one or more recipients of the message. In some embodiments, the affordance includes an indication of whether the set of contactable users is an approved recipient.
In some embodiments, the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated (e.g., modified, changed) since the last transmission to the first contactable user (and that is not satisfied when the contact information has not been modified since the last transmission to the first contactable user). Thus, rather than sending the device user's contact information to approved contactable users each time the device user sends a message to an approved contactable user, the electronic device transmits a new update to the contactable user. In some embodiments, only the updated (modified) portion of the contact information is transmitted, rather than the complete contact information for the device user. In some embodiments, the device determines which portions of contact information (or complete contact information) to transmit to a particular contactable user based on which portions have previously been transmitted to the particular contactable user.
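Continuing the earlier sketch (reusing the hypothetical MyContactCard type and its change stamps), the second sharing criterion and the "only the modified portion" behavior could be approximated by comparing each field's change time with the time that field was last sent to a given recipient, and attaching only the newer pieces:

import Foundation

struct ContactInfoDelta {
    var name: String?
    var avatarData: Data?
    var isEmpty: Bool { name == nil && avatarData == nil }
}

// Per-recipient record of when each portion of the card was last sent,
// keyed by the recipient's primary source identifier.
var nameSentAt: [String: Date] = [:]
var photoSentAt: [String: Date] = [:]

func delta(for recipientID: String, card: MyContactCard) -> ContactInfoDelta {
    var changes = ContactInfoDelta()
    if card.nameChangedAt > (nameSentAt[recipientID] ?? .distantPast) {
        changes.name = card.name
    }
    if card.photoChangedAt > (photoSentAt[recipientID] ?? .distantPast) {
        changes.avatarData = card.avatarData
    }
    return changes   // attached to the outgoing message only when !changes.isEmpty
}

This mirrors the figures above, where Jack's phone 1602 transmits only his new photo because his name was not updated.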
In some embodiments, the electronic device provides (or otherwise makes available) contact information including the selected graphical representation to a plurality of applications of the electronic device (e.g., a phone application, an email application, an instant messaging application, a mapping application, a first party application provided by a manufacturer of the electronic device).
Note that the details of the process described above with respect to method 1700 (e.g., fig. 17) may also be applied in a similar manner to the methods described below and above. For example, methods 700, 800, 1000, 1200, 1300, 1500, and 1800 optionally include one or more features of the various methods described above with reference to method 1700. For the sake of brevity, these details are not repeated in the following.
Fig. 18 is a flow diagram illustrating a method for receiving contact information using an electronic device, according to some embodiments. The method 1800 is performed at a device (e.g., 100, 300, 500, 600, 1602, and 1604) having a display device and one or more communication devices (e.g., wireless communication devices, such as cellular antennas, wifi antennas). Some operations in method 1800 may optionally be combined, the order of some operations may optionally be changed, and some operations may optionally be omitted.
As described below, the method 1800 provides an intuitive way for receiving contact information. The method reduces the cognitive burden of the user in receiving the contact information, thereby creating a more effective human-computer interface. For battery-powered computing devices, enabling a user to receive contact information more quickly and efficiently conserves power and increases the interval between battery charges.
The electronic device receives (1802) (e.g., from a first contactable user) a first message (e.g., 1632a, instant message, email) (e.g., received as part of a messaging conversation that includes the first contactable user) via one or more communication devices.
After receiving the first message, the electronic device receives (1804) a request to display the first message (e.g., a tap input on the displayed identifier of the first message).
In response to (1806) receiving the request to display the first message, in accordance with a determination (1808) that the first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criterion that is satisfied when updated (e.g., different from contact information stored at the electronic device in a contact card of the first contactable user (e.g., in a contact list or address book)) contact information (e.g., a graphical representation such as an avatar, a photo, and/or a letter combination representing the first contactable user, and/or a name of the first contactable user) corresponding to the first contactable user has been received (e.g., from the first contactable user), the electronic device simultaneously displays (1810) the first message (e.g., 1632a) on the display device and a visual indication (e.g., 1634) that updated contact information is available for the first contactable user. In some embodiments, the visual indication that updated contact information is available for the first contactable user includes at least a portion of the received updated contact information (e.g., an updated name and/or an updated graphical representation of the first contactable user).
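A sketch of just this first prompting criterion on the receiving side (hypothetical types; a fuller check would also consult the additional prompting criteria discussed below, such as whether the sender is a participant in the displayed conversation and whether the user has asked to ignore the update):

import Foundation

struct ReceivedContactInfo {
    let name: String?
    let avatarData: Data?
}

struct StoredEntry {
    var name: String
    var avatarData: Data?
}

// Returns true when the first message should be shown together with a visual
// indication that updated contact information is available for the sender.
func shouldPromptForUpdate(received: ReceivedContactInfo?,
                           stored: StoredEntry?) -> Bool {
    guard let received else { return false }   // nothing was received with the message
    guard let stored else { return true }      // no stored card: anything received counts as updated
    let nameChanged = received.name != nil && received.name != stored.name
    let photoChanged = received.avatarData != nil && received.avatarData != stored.avatarData
    return nameChanged || photoChanged
}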
In some embodiments, the electronic device is associated with a user. In response to receiving the request to display the first message, in accordance with a determination that the first contactable user does not satisfy a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient (and that is not satisfied when the first contactable user does not correspond to an approved recipient), the electronic device displays an indication (e.g., 1682) that updated contact information about the user of the electronic device is available for transmission to the first contactable user concurrently with the first message (and optionally, concurrently with the visual indication that updated contact information is available for the first contactable user). In some embodiments, the indication that the updated contact information is available for transmission includes an affordance that, when activated, initiates a process for transmitting contact information of the user associated with the electronic device to the first contactable user.
Providing a visual indication to the user that contact information about the device user has not been transmitted (but is available for transmission) to the first contactable user provides the user with feedback about the contact information sharing status. Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the set of prompt criteria includes a second prompt criterion that is satisfied when the first message is part of a messaging conversation in which the first contactable user is a participant (and is not satisfied when the first message is not part of a messaging conversation in which the first contactable user is a participant). For example, the second prompting criteria is met when the received first message is part of an instant message thread that includes the first contactable user and a user of the electronic device (and optionally other contactable users). As another example, the second prompting criteria is met when the first message is an email message that includes a first contactable user in a "sender", "recipient", or "carbon copy" field, a user of the electronic device in a "recipient" or "carbon copy" field, and optionally other contactable users in a "sender", "recipient", or "carbon copy" field.
In some embodiments, the electronic device detects an activation (e.g., a tap) of the visual indication that updated contact information is available for the first contactable user. In some embodiments, in response to detecting activation of the visual indication that updated contact information is available for the first contactable user, in accordance with a determination that the first contactable user does not correspond to an existing entry in a set of contactable users associated with the user of the electronic device, the electronic device displays a selectable option (e.g., 1624a) for creating a new entry for the first contactable user in the set of contactable users associated with the user of the electronic device using the contact information (e.g., using a received graphical representation such as an avatar, photograph, and/or letter combination representing the first contactable user, and/or using a name of the contactable user). In some embodiments, in accordance with a determination that the first contactable user does not correspond to an existing entry in the address book of the electronic device, the device displays (as an alternative to or in addition to the first selectable option) a selectable option for adding the received contact information to an existing entry of the address book. For example, activating the option for adding to an existing entry enables a user of the electronic device to select an existing entry to which to add the received name, the received graphical representation, and/or a communication method (e.g., phone number, email address) of the message.
In some embodiments, the electronic device detects an activation (e.g., a tap) of a visual indication that updated contact information is available for the first contactable user. In some embodiments, in response to detecting activation of the visual indication that updated contact information is available for the first contactable user, in accordance with a determination that the first contactable user corresponds to an existing entry in the set of contactable users associated with the user of the electronic device, and that the received contact information includes a modified graphical representation of the first contactable user and a modified name of the contactable user, the electronic device displays a plurality of selectable options including two or more of: a selectable option for updating an existing entry in the set of contactable users associated with the user of the electronic device using the modified graphical representation of the first contactable user and the modified name of the contactable user; a selectable option for updating an existing entry in the set of contactable users associated with the user of the electronic device using the modified graphical representation of the first contactable user without updating the existing entry using the modified name of the contactable user; and a selectable option for updating an existing entry in the set of contactable users associated with the user of the electronic device using the modified name of the first contactable user without updating the existing entry using the modified graphical representation of the contactable user.
In some embodiments, the electronic device detects an activation (e.g., a tap) of a visual indication that updated contact information is available for the first contactable user. In some embodiments, in response to detecting activation of the visual indication that updated contact information is available for the first contactable user, in accordance with a determination that the first contactable user corresponds to an existing entry in the set of contactable users associated with the user of the electronic device and that the received contact information includes a modified graphical representation of the first contactable user without including a modified name of the contactable user, the electronic device updates the existing entry using the modified graphical representation of the contactable user (e.g., automatically updated, updated without any further user input). In some embodiments, in response to detecting activation of the visual indication that updated contact information is available for the first contactable user, and in accordance with a determination that the first contactable user corresponds to an existing entry in the set of contactable users associated with the user of the electronic device, and that the received contact information includes a modified graphical representation of the first contactable user and does not include a modified name of the contactable user, the device prompts the user to request confirmation to update the existing entry using the modified graphical representation of the contactable user.
In some embodiments, the electronic device detects an activation (e.g., a tap) of a visual indication that updated contact information is available for the first contactable user. In some embodiments, in response to detecting activation of the visual indication that updated contact information is available for the first contactable user, in accordance with a determination that the first contactable user corresponds to an existing entry in the set of contactable users associated with the user of the electronic device and that the received contact information includes a modified name of the first contactable user without including a modified graphical representation of the contactable user, the electronic device updates the existing entry using the modified name of the contactable user (e.g., automatically updated, updated without any further user input).
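The branching in the preceding paragraphs (no existing entry, both fields changed, photo only, name only) could be summarized as follows, reusing the hypothetical ReceivedContactInfo and StoredEntry types from the sketch above; the option labels are illustrative, and the automatic-update cases follow the photo-only and name-only behaviors just described:

import Foundation

enum UpdateAction {
    case offerCreateNewEntry        // sender has no existing address book entry
    case offerChoices([String])     // both name and photo changed: present options to the user
    case autoUpdatePhoto(Data)      // only the photo changed: update without further input
    case autoUpdateName(String)     // only the name changed: update without further input
    case nothing
}

func actionForTap(received: ReceivedContactInfo, existing: StoredEntry?) -> UpdateAction {
    guard let existing else { return .offerCreateNewEntry }
    let newName: String? =
        (received.name != nil && received.name != existing.name) ? received.name : nil
    let newPhoto: Data? =
        (received.avatarData != nil && received.avatarData != existing.avatarData)
        ? received.avatarData : nil
    switch (newName, newPhoto) {
    case (.some, .some):
        return .offerChoices(["Update name and photo", "Update photo only", "Update name only"])
    case (nil, .some(let photo)):
        return .autoUpdatePhoto(photo)
    case (.some(let name), nil):
        return .autoUpdateName(name)
    default:
        return .nothing
    }
}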
In some embodiments, a first message is received in a conversation comprising a first contactable user and a second contactable user. In some embodiments, the electronic device has received updated contact information for the first contactable user and updated contact information for the second contactable user (without using the updated contact information for the first and second contactable users to update entries in the address book of the electronic device). In some embodiments, the electronic device detects activation of a visual indication that updated contact information is available to the first contactable user. In some embodiments, in response to detecting activation of the visual indication that updated contact information is available to the first contactable user, the electronic device displays (e.g., by replacing display of the conversation) a second visual indication that updated contact information is available to the first contactable user and a third visual indication that updated contact information is available to the second contactable user, wherein the second visual indication comprises a visual representation of at least a portion of the received updated contact information about the first contactable user, and wherein the third visual indication comprises a visual representation of at least a portion of the received updated contact information about the second contactable user.
In some embodiments, the visual indication that updated contact information is available for the first contactable user is displayed concurrently with a visual representation of at least a portion of a messaging conversation that includes a plurality of messages, the plurality of messages including a second message transmitted (e.g., from the electronic device) to the first contactable user and a third message received from the first contactable user.
In some embodiments, in response to (1806) receiving the request to display the first message, in accordance with (1812) a determination that the first contactable user does not meet the set of prompting criteria, the electronic device displays (1814) the first message on the display device without displaying a visual indication that updated contact information is available for the first contactable user. In some embodiments, the electronic device does not display the visual indication if the user of the device has previously selected to ignore the updated contact information regarding the first contactable user. Thus, the set of prompting criteria optionally includes a prompting criterion that is met when the device has not received a request to ignore updated contact information regarding the first contactable user.
In some embodiments, the received message includes a primary source identifier (e.g., a unique identifier associated with a communication protocol or application used to transmit the communication, such as an email address, phone number, account name) that is used to identify the source of the message (e.g., for that particular communication). For example, in traditional SMS, the primary source identifier may be the telephone number of the sending device. In some embodiments, the sending user may configure his device to have the primary source identifier be his email address for instant messaging technology, so an instant message received from the device will include the sending user's email address as the source of the message (e.g., in the "from" field). In contrast, contact information is information other than a primary source identifier that is used to identify a user (e.g., a first contactable user) that is not associated with an electronic device to a user associated with the electronic device, regardless of whether the contact information is a unique identifier (e.g., the user's first and/or last name, a set of acronyms for the user, a photograph of the user, and/or a virtual avatar created or selected by the contactable user). In some embodiments, after receiving the contact information, the receiving device associates the contact information with the primary source identifier. For example, the receiving device associates a name and graphical representation received as part of the contact information with a primary source identifier (e.g., the primary source identifier of the message in which the contact information was received).
In some embodiments, the received contact information includes a modified graphical representation of the first contactable user (and the received contact information optionally includes a modified name of the first contactable user). In some embodiments, the first contactable user corresponds to an existing entry in a set of contactable users associated with a user of the electronic device. In some embodiments, the electronic device receives user input updating an existing entry using the modified graphical representation (and optionally the modified name) of the first contactable user. In some embodiments, in response to receiving user input to update an existing entry using the modified graphical representation (and optionally the modified name) of the first contactable user, the electronic device updates the existing entry in the set of contactable users associated with the user of the electronic device using the modified graphical representation (and optionally the modified name) of the first contactable user (e.g., by replacing a previous graphical representation of the contactable user). In some embodiments, in response to receiving user input to update an existing entry using a modified graphical representation (and optionally a modified name) of a first contactable user, the electronic device displays a selectable affordance that, if selected, enables (e.g., displays a prompt for) the electronic device to automatically (e.g., without requiring additional user input/authorization) update the graphical representation of the first contactable user (e.g., for subsequently received modifications to the graphical representation of the first contactable user) in the future (and optionally does not prompt approval to automatically update the name of the contactable user).
The electronic device prompts the user to approve the automatic update of the graphical representation of the first contactable user, thereby providing the user with an option to eliminate the need to provide user input at the electronic device to update the graphical representation of the first contactable user in the future. Reducing the number of inputs enhances the operability of the device and makes the user-device interface more efficient (e.g., by reducing user errors in operating/interacting with the device, by reducing negative misidentifications of authentication), which in addition reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently. Further, the device optionally does not enable the user to approve automatic updates to the first contactable user's name, to avoid potential security issues, such as somebody impersonating another person.
In some embodiments, the contact information includes a name of the first contactable user or a graphical representation of the first contactable user. In some embodiments, the electronic device updates the set of contactable users associated with the user of the electronic device with the modified name of the first contactable user or the modified graphical representation of the contactable user (e.g., in response to user input requesting an update). In some embodiments, contact information, including a modified name or a modified graphical representation, in the set of contactable users associated with a user of the electronic device may be available to a plurality of applications of the electronic device (e.g., a phone application, an email application, an instant messaging application, a mapping application, a first party application provided by a manufacturer of the electronic device).
In some embodiments, the contact information of the first contactable user includes information corresponding to an avatar (e.g., a simulated three-dimensional avatar). In some embodiments, the information corresponding to the avatar includes gesture information identifying a gesture of the avatar (e.g., from a plurality of different gestures). The user interface for initiating the process for selecting an avatar to use as a representation is described in more detail above, such as with respect to fig. 9A-9 AG.
Note that the details of the process described above with respect to method 1800 (e.g., fig. 18) also apply in a similar manner to the method described above. For example, methods 700, 800, 1000, 1200, 1300, 1500, and 1700 optionally include one or more characteristics of the various methods described above with reference to method 1800. For the sake of brevity, these details are not repeated in the following.
The present disclosure also includes the following exemplary items.
1. A method, comprising:
at an electronic device having a display device and an input device:
receiving, via one or more input devices, a request to display a sticker user interface; and
in response to receiving the request to display the sticker user interface, displaying, via the display device, a sticker user interface including a representation of a plurality of sets of stickers based on a user-created avatar, including:
In accordance with a determination that the user has created a first set of two or more user-created avatars including a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and
in accordance with a determination that the user has created a second set of two or more user-created avatars that include a third avatar that is not included in the first set of two or more user-created avatars, displaying a representation of a second plurality of sets of stickers that is different from the representation of the first plurality of sets of stickers, wherein the representation of the second plurality of sets of stickers includes a representation of a set of stickers based on the third avatar that is not included in the representation of the first plurality of sets of stickers.
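As a rough, non-limiting sketch of the determination recited in item 1, the following Swift snippet derives one sticker-set representation per user-created avatar, so two different sets of avatars yield different sets of representations. The types Avatar and StickerSetRepresentation and the function representations(forUserCreatedAvatars:) are assumed names for illustration, not an actual API.

struct Avatar: Hashable {
    let id: String
}

struct StickerSetRepresentation: Equatable {
    let basedOn: Avatar   // each set of stickers is based on one user-created avatar
}

// One set of stickers, and therefore one representation, per user-created avatar.
func representations(forUserCreatedAvatars avatars: [Avatar]) -> [StickerSetRepresentation] {
    avatars.map { StickerSetRepresentation(basedOn: $0) }
}

// A first set {A, B} yields two representations; a second set that also contains C
// yields a representation (based on C) that is absent from the first.
let firstPlurality = representations(forUserCreatedAvatars: [Avatar(id: "A"), Avatar(id: "B")])
let secondPlurality = representations(forUserCreatedAvatars: [Avatar(id: "A"), Avatar(id: "B"), Avatar(id: "C")])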
2. The method of item 1, further comprising:
after displaying the sticker user interface, receiving a request to redisplay the sticker user interface; and
in response to receiving the request to redisplay the sticker user interface, displaying, via the display device, the sticker user interface, comprising:
In accordance with a determination that the user has created a fourth avatar that is not included in the first or second sets of two or more user-created avatars, displaying a representation of a third plurality of sets of stickers, wherein the representation of the third plurality of sets of stickers includes a representation of a set of stickers based on the fourth avatar that is not included in either the representation of the first plurality of sets of stickers or the representation of the second plurality of sets of stickers.
3. The method of any of items 1-2, wherein:
the representation of the set of stickers based on the first avatar has an appearance of one of the stickers in the set of stickers based on the first avatar;
the representation of the set of stickers based on the second avatar has an appearance of one of the stickers in the set of stickers based on the second avatar; and
the representation of the set of stickers based on the third avatar has the appearance of one of the stickers in the set of stickers based on the third avatar.
4. The method of any of items 1-3, further comprising:
detecting selection of the representation of the set of stickers based on the first avatar; and
In response to detecting selection of the representation of the set of stickers based on the first avatar, displaying a plurality of stickers of the set of stickers based on the first avatar simultaneously with the selected representation, the plurality of stickers having an appearance based on the first avatar.
5. The method of item 4, wherein the plurality of stickers of the set of stickers based on the first avatar includes a first sticker having a first pose and an appearance based on the first avatar and a second sticker having a second pose different from the first pose and an appearance based on the first avatar, the method further comprising:
while displaying the plurality of stickers in the set of stickers based on the first avatar, detecting selection of the representation of the set of stickers based on the second avatar; and
in response to detecting selection of the representation of the set of stickers based on the second avatar:
ceasing to display the plurality of stickers of the set of stickers based on the first avatar; and
Displaying the plurality of stickers of the set of stickers based on the second avatar, wherein the set of stickers based on the second avatar includes a third sticker having the first pose and an appearance based on the second avatar and a fourth sticker having the second pose and an appearance based on the second avatar.
6. The method of item 4, wherein one or more of the plurality of stickers is animated.
7. The method of any of items 1-6, wherein the representations of sets of stickers based on user-created avatars are displayed in a first area of the user interface, the first area further including one or more representations of sets of stickers based on avatars that are not user-created avatars.
8. The method of item 7, wherein the first region further comprises a create user interface object that, when selected, displays a user interface for creating a user-created avatar.
9. The method of item 7, further comprising:
upon detecting generation of a new user-created avatar, displaying a representation of a set of stickers based on the new user-created avatar in the first area.
10. The method of item 7, wherein displaying the sticker user interface further comprises:
in accordance with a determination that the request to display the sticker user interface is a first received request to display the sticker user interface, displaying the sticker user interface having the first region; and
in accordance with a determination that the request to display the sticker user interface is a subsequently received request to display the sticker user interface, displaying the sticker user interface without the first area.
11. The method of item 10, further comprising:
receiving a first input on the sticker user interface while displaying the sticker user interface without the first region; and
in response to detecting the first input, in accordance with a determination that the first input satisfies a first set of criteria, displaying the first region.
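A minimal sketch of the behavior recited in items 10 and 11, using assumed names (StickerUIState and its members): the first region is included only the first time the sticker user interface is requested, and is redisplayed when a later input satisfies the first set of criteria.

struct StickerUIState {
    private(set) var hasDisplayedBefore = false
    private(set) var showsFirstRegion = false

    // Handle a request to display the sticker user interface.
    mutating func handleDisplayRequest() {
        // First received request: include the first region; subsequent requests: omit it.
        showsFirstRegion = !hasDisplayedBefore
        hasDisplayedBefore = true
    }

    // Handle an input received while the interface is displayed without the first region.
    mutating func handleInput(meetsFirstSetOfCriteria: Bool) {
        if meetsFirstSetOfCriteria { showsFirstRegion = true }
    }
}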
12. The method of any of items 1-11, wherein the set of stickers based on the first avatar has a first set of sticker poses and the set of stickers based on the second avatar has the first set of sticker poses.
13. The method of item 12, wherein:
displaying the sticker user interface further comprises displaying a representation of a set of stickers based on a first predefined avatar; and
The set of stickers based on the first predefined avatar has a second set of sticker poses different from the first set of sticker poses.
14. The method of item 13, wherein:
the set of stickers based on the first predefined avatar includes stickers having a first sticker pose,
displaying the sticker user interface further includes displaying a representation of a set of stickers based on a second predefined avatar,
the set of stickers based on the second predefined avatar includes stickers having the first sticker pose, and
the sticker having the first sticker pose for the first predefined avatar includes a graphical element corresponding to the first predefined avatar, the graphical element not being included in the sticker having the first sticker pose for the second predefined avatar.
15. The method of item 13, wherein the first set of sticker poses includes at least one sticker pose not included in the second set of sticker poses.
16. The method of any of items 1-15, wherein displaying the sticker user interface further comprises displaying an editing user interface object that, when selected, displays an editing interface for editing the respective user-created avatar.
17. The method of item 16, wherein displaying the sticker user interface further comprises:
displaying a plurality of stickers of a set of stickers based on the respective user-created avatar, wherein the plurality of stickers have an appearance based on a first appearance of the respective user-created avatar;
detecting a series of inputs corresponding to a request to edit the respective user-created avatar from the first appearance to a second appearance;
detecting a request to display the plurality of stickers of the set of stickers based on the respective user-created avatar; and
in response to detecting the request to display the plurality of stickers of the set of stickers based on the respective user-created avatar, displaying the plurality of stickers of the set of stickers based on the respective user-created avatar, wherein the set of stickers has an updated appearance based on the second appearance of the respective user-created avatar.
18. The method of item 16, wherein editing the respective user-created avatar using the editing interface changes an appearance of the respective user-created avatar in the sticker user interface and in user interfaces other than the sticker user interface.
19. The method of any of items 1-18, wherein displaying the sticker user interface further comprises displaying a keyboard display area including a plurality of emoticons and the representations of the plurality of sets of stickers based on a user-created avatar, the method further comprising:
detecting selection of one of the representations of the plurality of sets of stickers based on a user-created avatar; and
in response to detecting the selection of the one of the representations of the plurality of sets of stickers based on the user-created avatar, displaying a plurality of stickers of a set of stickers based on the user-created avatar in the keyboard display area.
20. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for performing the method of any of items 1-19.
21. An electronic device, comprising:
a display device;
an input device;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 1-19.
22. An electronic device, comprising:
a display device;
an input device, and
means for performing the method of any of items 1-19.
23. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for:
receiving, via one or more input devices, a request to display a sticker user interface; and
in response to receiving the request to display the sticker user interface, displaying, via the display device, a sticker user interface including a representation of a plurality of sets of stickers based on a user-created avatar, including:
in accordance with a determination that the user has created a first set of two or more user-created avatars including a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and
In accordance with a determination that the user has created a second set of two or more user-created avatars that include a third avatar that is not included in the first set of two or more user-created avatars, displaying a representation of a second plurality of sets of stickers that is different from the representation of the first plurality of sets of stickers, wherein the representation of the second plurality of sets of stickers includes a representation of a set of stickers based on the third avatar that is not included in the representation of the first plurality of sets of stickers.
24. An electronic device, comprising:
a display device;
an input device;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
receiving, via one or more input devices, a request to display a sticker user interface; and
in response to receiving the request to display the sticker user interface, displaying, via the display device, a sticker user interface including a representation of a plurality of sets of stickers based on a user-created avatar, including:
in accordance with a determination that the user has created a first set of two or more user-created avatars including a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and
In accordance with a determination that the user has created a second set of two or more user-created avatars that include a third avatar that is not included in the first set of two or more user-created avatars, displaying a representation of a second plurality of sets of stickers that is different from the representation of the first plurality of sets of stickers, wherein the representation of the second plurality of sets of stickers includes a representation of a set of stickers based on the third avatar that is not included in the representation of the first plurality of sets of stickers.
25. An electronic device, comprising:
a display device;
an input device;
means for receiving, via one or more input devices, a request to display a sticker user interface; and
means for, in response to receiving the request to display the sticker user interface, displaying, via the display device, a sticker user interface including a representation of a plurality of sets of stickers based on a user-created avatar, comprising:
in accordance with a determination that the user has created a first set of two or more user-created avatars including a first avatar and a second avatar, displaying representations of a first plurality of sets of stickers, wherein the representations of the first plurality of sets of stickers include a representation of a set of stickers based on the first avatar and a representation of a set of stickers based on the second avatar; and
In accordance with a determination that the user has created a second set of two or more user-created avatars that include a third avatar that is not included in the first set of two or more user-created avatars, displaying a representation of a second plurality of sets of stickers that is different from the representation of the first plurality of sets of stickers, wherein the representation of the second plurality of sets of stickers includes a representation of a set of stickers based on the third avatar that is not included in the representation of the first plurality of sets of stickers.
26. A method, comprising:
at an electronic device having a display device and one or more input devices:
displaying, via the display device, a contactable user-editing user interface, the contactable user-editing user interface comprising:
one or more presentation options for the contactable user, including an avatar presentation option;
detecting, via the one or more input devices, a selection of the avatar representation option;
in response to detecting selection of the avatar representation option, initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface;
receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of the process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and
In response to selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
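Purely as an illustrative sketch of the flow recited in item 26, the following Swift enum models the steps from selecting the avatar representation option to displaying the posing user interface. The option and step names are assumptions; the disclosure does not prescribe any particular implementation.

enum RepresentationOption {
    case avatar          // the avatar representation option
    case monogram        // e.g., an alphabetical-combination option
    case mediaItem       // e.g., a photo option
}

enum ContactEditingStep {
    case choosingRepresentation
    case selectingAvatar                                  // process for selecting an avatar
    case posing(avatarID: String, poses: [String])        // posing UI with selectable poses
}

// Selecting the avatar representation option initiates the avatar-selection process.
func step(afterSelecting option: RepresentationOption) -> ContactEditingStep {
    switch option {
    case .avatar:
        return .selectingAvatar
    case .monogram, .mediaItem:
        return .choosingRepresentation   // other flows omitted from this sketch
    }
}

// Selecting a simulated three-dimensional avatar leads to the posing user interface.
func step(afterSelectingAvatar avatarID: String) -> ContactEditingStep {
    .posing(avatarID: avatarID, poses: ["first predefined pose", "second predefined pose"])
}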
27. The method of item 26, wherein the one or more controls include a first gesture user interface object corresponding to a first predefined gesture and a second gesture user interface object corresponding to a second predefined gesture different from the first predefined gesture.
28. The method of item 26, wherein:
the one or more input devices comprise a camera; and is
The one or more controls include a capture user interface object that, when selected, selects a gesture for the simulated three-dimensional avatar, the gesture based on a facial gesture detected in a field of view of the camera when the capture user interface object is selected.
29. The method of any of items 26 to 28, further comprising:
after selecting the gesture of the simulated three-dimensional avatar from the plurality of different gestures, setting the simulated three-dimensional avatar having the selected gesture as the representation of the contactable user.
30. The method of any of items 26-29, wherein displaying the posing user interface comprises:
in accordance with a determination that a first avatar is selected as the simulated three-dimensional avatar, displaying at least one representation of the first avatar in the posing user interface; and
in accordance with a determination that a second avatar is selected as the simulated three-dimensional avatar, displaying at least one representation of the second avatar in the posing user interface.
31. The method of any of items 26 to 30, further comprising:
prior to displaying the contactable user-editing user interface, detecting a series of inputs corresponding to a request to create a first user-created avatar;
receiving a request to display the contactable user editing user interface; and
in response to receiving the request to display the contactable user editing user interface,
displaying the contactable user-editing user interface including the first user-created avatar.
32. The method of item 31, wherein the sequence of one or more inputs corresponding to selection of the simulated three-dimensional avatar includes an input corresponding to selection of the first user-created avatar from a set of user-created avatars.
33. The method of any of items 26-32, wherein the sequence of one or more inputs corresponding to selection of the simulated three-dimensional avatar includes a set of inputs corresponding to creation of a new avatar.
34. The method of any of items 26 to 33, wherein the contactable user editing user interface further comprises a first representation of the contactable user.
35. The method of any of items 26 to 34, wherein the one or more presentation options include a non-avatar option, the method further comprising:
detecting, via the one or more input devices, a selection of the non-avatar option; and
in response to detecting selection of the non-avatar option, initiating a process for selecting a representation option other than an avatar to use as a representation of the contactable user in the contactable user interface.
36. The method of any of items 26-35, wherein the one or more presentation options include a plurality of options selected based on information for the contactable user.
37. The method of item 36, wherein the plurality of options selected based on information for the contactable users includes a representation of the contactable users that was most recently used.
38. The method of item 36, wherein the plurality of options selected based on the information for the contactable user includes media items available at the electronic device that are identified as being associated with the contactable user.
39. The method of item 36, wherein the information for the contactable user comprises information from a messaging communication session with the contactable user.
40. The method of any of items 26-39, wherein the one or more presentation options include an alphabetical combination presentation option.
41. The method of any of items 26 to 40, wherein the one or more presentation options include a media item option.
42. The method of item 41, further comprising:
upon detecting selection of the media item option, displaying, via the display device, a plurality of filter options for applying a filter effect to media items associated with the selected media item option.
43. The method of any of items 26 to 42, further comprising:
after selecting the gesture that simulates the three-dimensional avatar from the plurality of different gestures, displaying, via the display device, a background option that, when selected, changes an appearance of a background area of the representation of the contactable user.
44. The method of any of items 26 to 43, wherein displaying the posing user interface including the one or more controls comprises:
in accordance with a determination that the one or more input devices include a depth camera sensor, displaying, via the display device, the simulated three-dimensional avatar having a dynamic appearance, wherein the simulated three-dimensional avatar changes in response to changes in a facial pose detected in a field of view of the depth camera sensor; and
in accordance with a determination that the one or more input devices do not include a depth camera sensor, displaying, via the display device, a third gesture user interface object corresponding to a third predefined gesture and a fourth gesture user interface object corresponding to a fourth predefined gesture different from the third predefined gesture.
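A hedged sketch of the branch in item 44, with hypothetical names: if a depth camera sensor is available, the posing user interface shows a dynamic avatar that follows the detected facial pose; otherwise it falls back to predefined pose options.

enum PosingControls {
    case dynamicAvatar                    // avatar changes with the facial pose from the depth camera
    case predefinedPoses([String])        // fallback when no depth camera sensor is present
}

func posingControls(hasDepthCameraSensor: Bool) -> PosingControls {
    if hasDepthCameraSensor {
        return .dynamicAvatar
    } else {
        return .predefinedPoses(["third predefined pose", "fourth predefined pose"])
    }
}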
45. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for performing the method of any of items 26-44.
46. An electronic device, comprising:
a display device;
one or more input devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 26-44.
47. An electronic device, comprising:
a display device;
one or more input devices; and
means for performing the method of any of items 26-44.
48. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for:
displaying, via the display device, a contactable user editing user interface, the contactable user editing user interface comprising:
one or more presentation options for the contactable user, including an avatar presentation option;
detecting, via the one or more input devices, a selection of the avatar representation option;
In response to detecting selection of the avatar representation option, initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface;
receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of the process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and
in response to selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
49. An electronic device, comprising:
a display device;
one or more input devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
displaying, via the display device, a contactable user-editing user interface, the contactable user-editing user interface comprising:
One or more presentation options for the contactable user, including an avatar presentation option;
detecting, via the one or more input devices, a selection of the avatar representation option;
in response to detecting selection of the avatar representation option, initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface;
receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of the process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and
in response to selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
50. An electronic device, comprising:
a display device;
one or more input devices;
means for displaying, via the display device, a contactable user-editing user interface comprising:
One or more presentation options for the contactable user, including an avatar presentation option;
means for detecting selection of the avatar representation option via the one or more input devices;
means for initiating a process for selecting an avatar to use as a representation of the contactable user in the contactable user interface in response to detecting selection of the avatar representation option;
means for receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a selection of a simulated three-dimensional avatar as part of the process for selecting the avatar to use as a representation of the contactable user in the contactable user interface; and
apparatus for: in response to selection of the simulated three-dimensional avatar, displaying, via the display device, a posing user interface including one or more controls for selecting a pose of the simulated three-dimensional avatar from a plurality of different poses.
51. A method, comprising:
at an electronic device having a display device and an input device:
displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
An avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern;
a set of color options for the first feature; and a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern;
while the first feature is displayed with the first color pattern generated with the first set of colors including the first color in the first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color;
in response to detecting the selection:
changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing a portion of the second color pattern option from a respective color to the second color; and
Maintaining a display of the avatar including the first feature, the first feature having the first color pattern;
Detecting selection of a respective one of the color pattern options having a changed appearance; and
in response to detecting selection of the respective color pattern option and while selecting the second color for the set of color options:
changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, and the second color is applied to a portion of the respective color pattern option.
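The color handling recited in item 51 can be sketched roughly as follows (all names hypothetical): choosing a color recolors the relevant portion of each color pattern option, while the avatar keeps its current pattern; the avatar's feature only changes once one of the recolored pattern options is selected.

struct ColorPatternOption {
    var portionColor: String     // the portion that corresponds to the set of color options
}

struct AvatarFeatureEditingState {
    var appliedPattern: ColorPatternOption          // what the avatar feature currently shows
    var patternOptions: [ColorPatternOption]        // the displayed pattern options

    // Selecting a color option updates the pattern options' appearance only.
    mutating func select(color: String) {
        for index in patternOptions.indices {
            patternOptions[index].portionColor = color
        }
        // The avatar feature keeps the first color pattern at this point.
    }

    // Selecting a pattern option is what changes the avatar feature's appearance.
    mutating func selectPatternOption(at index: Int) {
        appliedPattern = patternOptions[index]
    }
}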
52. The method of item 51, wherein:
the plurality of color pattern options further comprises a third color pattern option different from the second color pattern option; and
Changing a portion of the second color pattern option from the respective color to the second color includes changing a portion of the third color pattern option from a third color to the second color.
53. The method of any of items 51-52, wherein maintaining the display of the avatar including the first feature having the first color pattern includes changing a respective one of the colors of the first set of colors of the first color pattern to the second color.
54. The method of any of items 51-53, wherein the plurality of color pattern options includes a first color pattern option corresponding to the first color pattern.
55. The method of any of items 51-54, wherein the plurality of color pattern options includes an option that, when selected, causes the first feature to cease being displayed.
56. The method of item 55, further comprising:
detecting selection of the option to cease displaying the first feature; and
in response to detecting selection of the option to cease displaying the first feature:
ceasing to display the first feature; and
Displaying one or more avatar characteristics that are hidden when the first characteristics are displayed.
57. The method of item 56, wherein the avatar includes a fourth feature, the method further comprising:
while displaying the first feature, displaying the fourth feature having a first appearance that is based on the first feature; and
displaying the fourth feature having a second appearance that is not based on the first feature after ceasing to display the first feature.
58. The method of any of items 51-57, wherein changing an appearance of the first feature of the avatar to have the appearance generated based on the respective color pattern option, and the second color being applied to portions of the respective color pattern option comprises:
In accordance with a determination that the respective color pattern option is the second color pattern option, displaying the first feature of the avatar having a second color pattern corresponding to the second color pattern option; and
in accordance with a determination that the respective color pattern option is a fourth color pattern option that is different from the second color pattern option, displaying the first feature of the avatar having a fourth color pattern corresponding to the fourth color pattern option.
59. The method of any of items 51-58, wherein the first feature comprises a first display texture that is different from a second display texture of a skin feature of the avatar.
60. The method of any of items 51-59, further comprising, in response to detecting selection of the color option of the set of color options that corresponds to the second color:
displaying a color adjustment control for the selected color option;
detecting an input corresponding to the color adjustment control; and
in response to detecting the input corresponding to the color adjustment control, modifying one or more properties of the second color.
61. The method of any of items 51-60, wherein the plurality of color pattern options includes a fifth color pattern option having an area that is not responsive to selection of the color option.
62. The method of any of items 51-61, wherein the avatar includes a third feature displayed above the first feature, wherein the first feature is an item selected from the group consisting of: an avatar glasses feature, an avatar hair feature, an avatar facial hair feature, and an avatar skin wrinkle feature.
63. The method of any of items 51-62, wherein the avatar includes a fifth feature displayed concurrently with the first feature, wherein the fifth feature is separate from the first feature and does not change in response to a change in the first feature.
64. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for performing the method of any of items 51-63.
65. An electronic device, comprising:
a display device;
an input device;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 51-63.
66. An electronic device, comprising:
a display device;
an input device; and
means for performing the method of any of items 51-63.
67. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for:
displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern;
a set of color options for the first feature; and
a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern;
while the first feature is displayed with the first color pattern generated with the first set of colors including the first color in the first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color;
in response to detecting the selection:
changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing portions of the second color pattern option from the respective color to the second color; and
Maintaining a display of the avatar including the first feature, the first feature having the first color pattern;
detecting selection of a respective one of the color pattern options having a changed appearance; and
in response to detecting the selection of the respective color pattern option and while selecting the second color for the set of color options:
changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, and the second color is applied to portions of the respective color pattern option.
68. An electronic device, comprising:
a display device;
an input device;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
Displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern;
a set of color options for the first feature; and a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern;
while the first feature is displayed with the first color pattern generated with the first set of colors including the first color in the first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color;
in response to detecting the selection:
changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing a portion of the second color pattern option from a respective color to the second color; and
Maintaining a display of the avatar including the first feature, the first feature having the first color pattern;
detecting selection of a respective one of the color pattern options having a changed appearance; and
in response to detecting the selection of the respective color pattern option and while selecting the second color for the set of color options:
changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, and the second color is applied to portions of the respective color pattern option.
69. An electronic device, comprising:
a display device;
an input device;
means for displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
an avatar comprising a first feature, the first feature having a first color pattern generated with a first set of colors comprising a first color in a first region of the first color pattern;
a set of color options for the first feature; and
a plurality of color pattern options for the first feature comprising a second color pattern option different from the first color pattern;
means for, while the first feature is displayed with the first color pattern generated with the first set of colors including the first color in the first area of the first color pattern, detecting, via the input device, a selection of a color option of the set of color options that corresponds to a second color;
apparatus for: in response to detecting the selection:
changing an appearance of one or more of the plurality of color pattern options having a first portion corresponding to the set of color options, wherein changing the appearance includes changing a portion of the second color pattern option from a respective color to the second color; and
Maintaining a display of the avatar including the first feature, the first feature having the first color pattern;
means for detecting selection of a respective one of the color pattern options having a changed appearance; and
apparatus for: in response to detecting the selection of the respective color pattern option and while selecting the second color for the set of color options:
Changing an appearance of the first feature of the avatar to have an appearance generated based on the respective color pattern option, and the second color is applied to portions of the respective color pattern option.
70. A method, comprising:
at an electronic device having a display device and an input device:
displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
an avatar having a respective avatar feature, the respective avatar feature having a first pose; and an avatar option selection area comprising a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of an avatar feature and having an appearance based on the avatar;
detecting, via the input device, a request to display an option for editing the respective avatar characteristic; and
in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for characteristics of the respective avatar feature, including concurrently displaying:
a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and
A representation of a second option for the respective avatar feature, wherein the respective avatar feature has a third pose different from the second pose.
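As an illustrative sketch of item 70 (names assumed, not an actual API): options that edit different portions of the same avatar feature are previewed with different poses, chosen so that the portion being edited is visible.

struct FeatureOptionPreview {
    let optionName: String
    let previewPose: String      // the pose used for this option's preview
}

// Example previews for editing an avatar mouth, in the spirit of items 76-77:
// tongue-related options are previewed with the tongue extended, while teeth
// options are previewed with the lips positioned to expose the teeth.
func previews(forEditingFeature feature: String) -> [FeatureOptionPreview] {
    switch feature {
    case "mouth":
        return [
            FeatureOptionPreview(optionName: "tongue piercing", previewPose: "tongue extended"),
            FeatureOptionPreview(optionName: "teeth", previewPose: "lips parted to expose teeth"),
        ]
    default:
        return [FeatureOptionPreview(optionName: "default", previewPose: "neutral")]
    }
}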
71. The method of item 70, wherein updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for characteristics of the respective avatar feature further comprises:
displaying a plurality of representations of alternatives to the first option for the respective avatar feature, wherein the respective avatar feature has the second gesture in each of the plurality of representations of alternatives to the first option; and
displaying a plurality of representations of alternatives to the second option for the respective avatar feature, wherein the respective avatar feature has the third gesture different from the second gesture in each of the plurality of representations of alternatives to the second option.
72. The method of item 71, wherein the plurality of representations for the alternatives of the first option and the plurality of representations for the alternatives of the second option each have an appearance based on an appearance of the avatar.
73. The method of any of items 70-72, wherein:
the first option corresponds to an option for editing a first portion of the respective avatar characteristic;
the second pose increases a degree of visibility of the first portion of the respective avatar feature;
the second option corresponds to an option for editing a second portion of the respective avatar feature, the second portion being different from the first portion; and
the third gesture increases a degree of visibility of the second portion of the respective avatar feature.
74. The method of item 73, wherein:
when the respective avatar feature has the first pose, the first portion has a first degree of visibility, and the degree of visibility of the first portion in the second pose is greater than the first degree of visibility of the first portion in the first pose; and
When the respective avatar feature has the first pose, the second portion has a second degree of visibility, and the degree of visibility of the second portion in the third pose is greater than the second degree of visibility of the second portion in the first pose.
75. The method of item 73, wherein:
When the respective avatar feature has the third pose, the first portion has a third degree of visibility, and the degree of visibility of the first portion in the second pose is greater than the third degree of visibility of the first portion in the third pose; and
When the respective avatar feature has the second pose, the second portion has a fourth degree of visibility, and the degree of visibility of the second portion in the third pose is greater than the fourth degree of visibility of the second portion in the second pose.
76. The method of any of items 70-75, wherein:
the respective avatar feature is an avatar mouth;
the first option is a tongue piercing option for an avatar tongue; and
the second pose is a pose in which the avatar mouth is displayed with the avatar tongue extended from the avatar mouth.
77. The method of any of items 70-76, wherein:
the respective avatar feature is an avatar mouth;
the second option is an avatar teeth option; and
the third pose is a pose in which the avatar mouth is displayed with the avatar lips positioned to expose the avatar teeth.
78. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for performing the method of any of items 70-77.
79. An electronic device, comprising:
a display device;
an input device;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 70-77.
80. An electronic device, comprising:
a display device;
an input device; and
means for performing the method of any of items 70-77.
81. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and an input device, the one or more programs including instructions for:
displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
an avatar having a respective avatar feature, the respective avatar feature having a first pose; and
an avatar option selection area comprising a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of an avatar feature and having an appearance based on the avatar;
detecting, via the input device, a request to display an option for editing the respective avatar characteristic; and
in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for characteristics of the respective avatar feature, including concurrently displaying:
a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and
a representation of a second option for the respective avatar feature, wherein the respective avatar feature has a third pose different from the second pose.
82. An electronic device, comprising:
a display device;
an input device;
one or more processors; and
Memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
an avatar having a respective avatar feature, the respective avatar feature having a first pose; and an avatar option selection area comprising a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of an avatar feature and having an appearance based on the avatar;
detecting, via the input device, a request to display an option for editing the respective avatar characteristic; and
in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for characteristics of the respective avatar feature, including concurrently displaying:
a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and
a representation of a second option for the respective avatar feature, wherein the respective avatar feature has a third pose different from the second pose.
83. An electronic device, comprising:
a display device;
an input device;
means for displaying, via the display device, an avatar editing user interface, the avatar editing user interface comprising:
an avatar having a respective avatar feature, the respective avatar feature having a first pose; and
an avatar option selection area comprising a plurality of avatar feature options corresponding to a set of candidate values for a characteristic of an avatar feature and having an appearance based on the avatar;
means for detecting, via the input device, a request to display an option for editing the respective avatar feature; and
means for, in response to detecting the request, updating the avatar option selection area to display avatar feature options corresponding to a set of candidate values for characteristics of the respective avatar feature, including concurrently displaying:
a representation of a first option for the respective avatar feature, wherein the respective avatar feature has a second pose; and
a representation of a second option for the respective avatar feature, wherein the respective avatar feature has a third pose different from the second pose.
84. A method, comprising:
at an electronic device having a display device and one or more cameras:
displaying, via the display device, a virtual avatar having one or more avatar features that change appearance in response to a change in a facial pose detected in a field of view of the one or more cameras, the one or more avatar features including a first avatar feature having a first appearance that is modified in response to a change in the facial pose detected in the field of view of the one or more cameras;
detecting movement of one or more facial features of a face when the face including one or more detected facial features is detected in the field of view of the one or more cameras;
in response to detecting movement of the one or more facial features:
in accordance with a determination that the detected movement of the one or more facial features is such that a first pose criterion is satisfied, modifying the virtual avatar to display the first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in the facial pose in the field of view of the one or more cameras;
In accordance with a determination that the detected movement of the one or more facial features causes a second pose criterion, different from the first pose criterion, to be satisfied, modifying the virtual avatar to display the first avatar feature having a third appearance, different from the first appearance and the second appearance, the third appearance being modified in response to a change in the facial pose detected in the field of view of the one or more cameras; and
In accordance with a determination that the detected movement of the one or more facial features satisfies a criterion for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to a change in the facial pose detected in the field of view of the one or more cameras.
85. The method of item 84, wherein:
the detected movement of the one or more facial features includes movement of a first facial feature;
when the movement of the first facial feature is within a first range of possible first facial feature values based on a predetermined range of motion of the first facial feature, the detected movement of the one or more facial features satisfies the criteria for maintaining display of the first avatar feature having the first appearance;
when the movement of the first facial feature is within a second range of possible first facial feature values that is different from the first range of possible first facial feature values, the detected movement of the one or more facial features is such that the first pose criterion is satisfied;
modifying the first appearance of the first avatar feature in response to the detected change in the facial pose in the field of view of the one or more cameras comprises: modifying the first appearance of the first avatar characteristic within a first range of appearance values corresponding to the first range of possible first facial feature values; and
Modifying the virtual avatar to display the first avatar characteristic having the second appearance includes displaying the first avatar characteristic having a second appearance value within a second range of appearance values, the second range of appearance values being different from the first range of appearance values and corresponding to the second range of possible first facial feature values.
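Items 84 and 85 can be sketched, very loosely, as a mapping from a tracked facial-feature value to one of several appearance states, each driven within its own range of appearance values. The thresholds below are arbitrary illustrations; the items only require distinct ranges of possible facial feature values.

enum AvatarFeatureAppearance {
    case first(appearanceValue: Double)    // maintained and modified within a first range
    case second(appearanceValue: Double)   // shown when the first pose criterion is satisfied
    case third(appearanceValue: Double)    // shown when the second pose criterion is satisfied
}

func appearance(forFacialFeatureValue value: Double) -> AvatarFeatureAppearance {
    switch value {
    case ..<0.4:
        // Criteria for maintaining display of the first appearance.
        return .first(appearanceValue: value)
    case 0.4..<0.7:
        // First pose criterion satisfied: a second, distinct range of appearance values.
        return .second(appearanceValue: 1.0 + value)
    default:
        // Second pose criterion satisfied: a third, distinct range of appearance values.
        return .third(appearanceValue: 2.0 + value)
    }
}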
86. The method of item 85, wherein:
when the movement of the first facial feature is within a third range of possible first facial feature values that is different from the first range of possible first facial feature values and the second range of possible first facial feature values, the detected movement of the one or more facial features is such that the second pose criterion is satisfied; and
Modifying the virtual avatar to display the first avatar characteristic having the third appearance includes displaying the first avatar characteristic having a third appearance value within a third range of appearance values, the third range of appearance values being different from the first range of appearance values and the second range of appearance values and corresponding to the third range of possible first facial feature values.
87. The method of any of items 84-86, wherein:
the electronic device is configured to transmit a first predefined emoticon and a second predefined emoticon;
the second appearance of the first avatar feature corresponds to an appearance of the first predefined emoticon; and
The third appearance of the first avatar characteristic corresponds to an appearance of the second predefined emoticon.
88. The method of any of items 84-87, further comprising:
while the first avatar feature is displayed as having the second appearance, detecting a change in the facial pose in the field of view of the one or more cameras;
in response to detecting the change in the facial pose in the field of view of the one or more cameras:
In accordance with a determination that the detected change in the facial pose in the field of view of the one or more cameras comprises a pose in which a second facial feature moves outside a first pose range of the second facial feature, modifying the first avatar feature to have the first appearance; and
in accordance with a determination that the detected change in the facial pose in the field of view of the one or more cameras comprises a pose in which the second facial feature moves to within the first pose range of the second facial feature, maintaining display of the first avatar feature having the second appearance.
89. The method of item 88, wherein the one or more avatar features further comprise a second avatar feature having a fourth appearance that is modified in response to a change in the facial pose detected in the field of view of the one or more cameras, the method further comprising:
further in response to detecting the change in the facial pose in the field of view of the one or more cameras:
in accordance with a determination that the detected movement of the one or more facial features causes third pose criteria to be satisfied, modifying the virtual avatar to display the second avatar feature having a fifth appearance different from the fourth appearance, the fifth appearance modified in response to a detected change in the facial pose in the field of view of the one or more cameras; and
In accordance with a determination that the detected movement of the one or more facial features satisfies criteria for maintaining display of the second avatar feature having the fourth appearance, modifying the virtual avatar to display the second avatar feature by modifying the fourth appearance of the second avatar feature in response to a change in the facial pose detected in the field of view of the one or more cameras;
while the second avatar feature is displayed as having the fifth appearance, detecting a second change in the facial pose in the field of view of the one or more cameras; and
in response to detecting the second change in the facial pose in the field of view of the one or more cameras:
in accordance with a determination that the detected change in the facial pose in the field of view of the one or more cameras comprises a pose in which a third facial feature moves outside of a second pose range of the third facial feature, modifying the second avatar feature to have the fourth appearance, the second pose range being different from the first pose range of the second facial feature; and
in accordance with a determination that the detected change in the facial pose in the field of view of the one or more cameras comprises a pose in which the third facial feature moves to within the second pose range of the third facial feature, maintaining display of the second avatar feature having the fifth appearance.
90. The method of clause 89, wherein:
the first avatar feature is an avatar mouth;
the second avatar characteristic is one or more avatar eyes; and
the second pose range is greater than the first pose range.
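Items 88-90 describe a feature that keeps its modified appearance until a tracked facial feature leaves a pose range, with the avatar eyes using a larger range than the avatar mouth. The sketch below illustrates that behavior with invented pose ranges and type names; it is not the patented implementation.

```swift
import Foundation

enum AvatarAppearance { case first, second }

// A feature that switches to its second appearance when pose criteria are
// satisfied and only reverts once the tracked pose leaves its pose range.
struct StickyFeature {
    let poseRange: ClosedRange<Double>   // tracked facial feature pose range
    var appearance: AvatarAppearance = .first

    mutating func update(criteriaSatisfied: Bool, trackedPose: Double) {
        switch appearance {
        case .first where criteriaSatisfied:
            appearance = .second                       // pose criteria satisfied
        case .second where !poseRange.contains(trackedPose):
            appearance = .first                        // left the pose range
        default:
            break                                      // maintain current look
        }
    }
}

var mouth = StickyFeature(poseRange: -10.0...10.0)     // degrees, hypothetical
var eyes  = StickyFeature(poseRange: -25.0...25.0)     // larger range, per item 90

mouth.update(criteriaSatisfied: true, trackedPose: 0)
eyes.update(criteriaSatisfied: true, trackedPose: 0)
mouth.update(criteriaSatisfied: false, trackedPose: 15)   // reverts to .first
eyes.update(criteriaSatisfied: false, trackedPose: 15)    // stays .second
print(mouth.appearance, eyes.appearance)
```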
91. The method of any of items 84-90, further comprising:
displaying a three-dimensional effect at a first location on the virtual avatar when the virtual avatar is displayed with a first orientation;
detecting a change in the orientation of the face in the field of view of the one or more cameras; and
in response to detecting the change in the face orientation, modifying the virtual avatar based on the detected change in the face orientation, comprising:
changing the orientation of one or more features of the avatar by a respective amount, the respective amount determined based on a magnitude of the detected change in the orientation of the face when changing the orientation of the three-dimensional effect by less than the respective amount.
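Item 91 describes the avatar features rotating by an amount based on the detected change in face orientation while the three-dimensional effect rotates by less than that amount. A small illustrative sketch follows; the 0.5 attenuation factor and the function name are assumptions, not values from the specification.

```swift
import Foundation

// The avatar features follow the detected face rotation; the 3D effect
// (e.g. a highlight) rotates by a smaller amount, giving a parallax-like look.
func rotateAvatar(featureAngle: inout Double, effectAngle: inout Double,
                  detectedFaceDelta: Double) {
    let featureDelta = detectedFaceDelta            // amount based on detected magnitude
    let effectDelta = detectedFaceDelta * 0.5       // less than the feature amount
    featureAngle += featureDelta
    effectAngle += effectDelta
}

var featureAngle = 0.0, effectAngle = 0.0
rotateAvatar(featureAngle: &featureAngle, effectAngle: &effectAngle, detectedFaceDelta: 20)
print(featureAngle, effectAngle)   // 20.0 10.0
```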
92. The method of any of items 84-91, wherein modifying the virtual avatar to display the first avatar feature having the second appearance includes displaying a third avatar feature, wherein the third avatar feature is not displayed until the movement of the one or more facial features is detected.
93. The method of item 92, wherein displaying the third avatar characteristic includes displaying the third avatar characteristic as it gradually appears on the virtual avatar.
94. The method of item 92, further comprising:
while displaying the third avatar feature, detecting movement of the one or more facial features; and
in response to detecting the movement of the one or more facial features:
in accordance with a determination that the detected movement of the one or more facial features is such that the first pose criteria is no longer satisfied, ceasing to display the third avatar feature by tapering the third avatar feature from the virtual avatar.
95. The method of item 92, wherein displaying the third avatar characteristic includes maintaining display of the third avatar characteristic for at least a predetermined period of time.
96. The method of any of items 84-95, wherein modifying the virtual avatar to display the first avatar feature having the second appearance includes displaying a first animation of the first avatar feature having the first appearance that gradually diminishes, and displaying a second animation of the first avatar feature having the second appearance that gradually increases, wherein the second animation is displayed concurrently with at least a portion of the first animation.
97. The method of item 96, wherein the movement of the one or more facial features includes movement of a fourth facial feature, and the first avatar feature is a representation of a facial feature that is different from the fourth facial feature.
98. The method of any of items 84-97, wherein the one or more avatar features further include a fourth avatar feature having a sixth appearance that is modified in response to a detected change in the facial pose in the field of view of the one or more cameras, the method further comprising:
further in response to detecting movement of the one or more facial features:
in accordance with the determination that the detected movement of the one or more facial features causes the first pose criteria to be satisfied, modify the virtual avatar to display the fourth avatar feature having a seventh appearance different from the sixth appearance, the seventh appearance modified in response to a change in the facial pose detected in the field of view of the one or more cameras;
in accordance with the determination that the detected movement of the one or more facial features causes the second pose criteria to be satisfied, modify the virtual avatar to display the fourth avatar feature having an eighth appearance different from the sixth appearance and the seventh appearance, the eighth appearance modified in response to a change in the facial pose detected in the field of view of the one or more cameras; and
in accordance with a determination that the detected movement of the one or more facial features satisfies criteria for maintaining display of the fourth avatar feature having the sixth appearance, modifying the virtual avatar to display the fourth avatar feature by modifying the sixth appearance of the fourth avatar feature in response to a change in the facial pose detected in the field of view of the one or more cameras.
99. The method of any of items 84-98, wherein:
the first avatar characteristic includes a first state and a second state, and
the one or more avatar features further include a fifth avatar feature modified in response to a change in the facial pose detected in the field of view of the one or more cameras,
the fifth avatar characteristic includes a third state and a fourth state, the method further comprising:
further in response to detecting the movement of the one or more facial features:
in accordance with a determination that a first set of criteria is satisfied, displaying the first avatar characteristic having the first state and displaying the fifth avatar characteristic having the third state;
in accordance with a determination that a second set of criteria is satisfied, displaying the first avatar characteristic having the second state and displaying the fifth avatar characteristic having the third state;
In accordance with a determination that a third set of criteria is satisfied, displaying the first avatar characteristic having the first state and displaying the fifth avatar characteristic having the fourth state; and
in accordance with a determination that a fourth set of criteria is satisfied, displaying the first avatar characteristic having the second state and displaying the fifth avatar characteristic having the fourth state.
100. The method of item 99, wherein:
the first avatar characteristic is one or more avatar eyes;
the first state is a state in which the one or more avatar eyes have a rounded eye appearance; and
the second state is a state in which the one or more avatar eyes have a squinting appearance.
101. The method of item 99, wherein:
the first avatar feature is an avatar mouth;
the first state is a state in which the avatar mouth has a first expression; and
the second state is a state in which the avatar mouth has a second expression different from the first expression.
102. The method of item 99, wherein:
the first avatar characteristic includes a set of avatar eyebrows;
the first state is a state in which the set of avatar eyebrows is displayed; and
the second state is a state in which the set of avatar eyebrows is not displayed.
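Items 99-102 enumerate four combinations of two feature states. One way to read this is as two independent decisions, one per feature, sketched below with assumed thresholds and feature names.

```swift
import Foundation

// Illustrative sketch of items 99-102: each feature's state is chosen from
// its own criteria, so the four combinations fall out of two decisions.
struct FacePose {
    var browRaise: Double    // 0...1, assumed normalized value
    var eyeOpenness: Double  // 0...1, assumed normalized value
}

enum EyeState { case round, squint }
enum BrowState { case shown, hidden }

func eyeState(for pose: FacePose) -> EyeState {
    pose.eyeOpenness < 0.4 ? .squint : .round       // assumed threshold
}

func browState(for pose: FacePose) -> BrowState {
    pose.browRaise > 0.6 ? .shown : .hidden         // assumed threshold
}

let pose = FacePose(browRaise: 0.8, eyeOpenness: 0.3)
print(eyeState(for: pose), browState(for: pose))    // squint shown
```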
103. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with one or more cameras and a display device, the one or more programs including instructions for performing the method of any of items 84-102.
104. An electronic device, comprising:
a display device;
one or more cameras;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 84 to 102.
105. An electronic device, comprising:
a display device;
one or more cameras; and
means for performing the method of any of items 84-102.
106. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for:
displaying, via the display device, a virtual avatar having one or more avatar features that change appearance in response to a change in a facial pose detected in a field of view of the one or more cameras, the one or more avatar features including a first avatar feature having a first appearance that is modified in response to a change in the facial pose detected in the field of view of the one or more cameras;
detecting movement of one or more facial features of a face when the face including one or more detected facial features is detected in the field of view of the one or more cameras;
in response to detecting movement of the one or more facial features:
in accordance with a determination that the detected movement of the one or more facial features is such that a first pose criterion is satisfied, modifying the virtual avatar to display the first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in the facial pose in the field of view of the one or more cameras;
in accordance with a determination that the detected movement of the one or more facial features causes a second pose criterion, different from the first pose criterion, to be satisfied, modifying the virtual avatar to display the first avatar feature having a third appearance, different from the first appearance and the second appearance, the third appearance being modified in response to a change in the facial pose detected in the field of view of the one or more cameras; and
In accordance with a determination that the detected movement of the one or more facial features satisfies a criterion for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to a detected change in the facial pose in the field of view of the one or more cameras.
107. An electronic device, comprising:
a display device;
one or more cameras;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
Displaying, via the display device, a virtual avatar having one or more avatar features that change appearance in response to a change in a facial pose detected in a field of view of the one or more cameras, the one or more avatar features including a first avatar feature having a first appearance that is modified in response to a change in the facial pose detected in the field of view of the one or more cameras;
detecting movement of one or more facial features of a face when the face including one or more detected facial features is detected in the field of view of the one or more cameras;
in response to detecting movement of the one or more facial features:
in accordance with a determination that the detected movement of the one or more facial features is such that a first pose criterion is satisfied, modifying the virtual avatar to display the first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in the facial pose in the field of view of the one or more cameras;
in accordance with a determination that the detected movement of the one or more facial features causes a second pose criterion, different from the first pose criterion, to be satisfied, modifying the virtual avatar to display the first avatar feature having a third appearance, different from the first appearance and the second appearance, the third appearance being modified in response to a change in the facial pose detected in the field of view of the one or more cameras; and is
In accordance with a determination that the detected movement of the one or more facial features satisfies a criterion for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to a change in the facial pose detected in the field of view of the one or more cameras.
108. An electronic device, comprising:
a display device;
one or more cameras;
means for displaying, via the display device, a virtual avatar having one or more avatar features that change appearance in response to a change in a facial pose detected in a field of view of the one or more cameras, the one or more avatar features including a first avatar feature having a first appearance that is modified in response to a change in the facial pose detected in the field of view of the one or more cameras;
means for detecting movement of one or more facial features of a face when the face including one or more detected facial features is detected in the field of view of the one or more cameras;
means for, in response to detecting movement of the one or more facial features:
in accordance with a determination that the detected movement of the one or more facial features is such that a first pose criterion is satisfied, modifying the virtual avatar to display the first avatar feature having a second appearance different from the first appearance, the second appearance being modified in response to a detected change in the facial pose in the field of view of the one or more cameras;
in accordance with a determination that the detected movement of the one or more facial features causes a second pose criterion, different from the first pose criterion, to be satisfied, modifying the virtual avatar to display the first avatar feature having a third appearance, different from the first appearance and the second appearance, the third appearance being modified in response to a change in the facial pose detected in the field of view of the one or more cameras; and
In accordance with a determination that the detected movement of the one or more facial features satisfies a criterion for maintaining display of the first avatar feature having the first appearance, modifying the virtual avatar to display the first avatar feature by modifying the first appearance of the first avatar feature in response to a detected change in the facial pose in the field of view of the one or more cameras.
109. A method, comprising:
at an electronic device having a display device and one or more input devices:
displaying a content creation user interface via the display device;
while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises:
in response to receiving the request, displaying, via the display device, the first display region including a first subset of graphical objects, wherein the graphical objects have an appearance based on a set of avatars available at the electronic device, including:
in accordance with a determination that the set of avatars includes a first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the first type of avatar; and in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects in the first subset that have an appearance based on avatars of a second type different from the first type without displaying one of the graphical objects in the first subset that have an appearance based on avatars of the first type.
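The branch in item 109 can be illustrated with the following Swift sketch; the type names and the fallback behavior for predefined avatars are assumptions chosen for brevity.

```swift
import Foundation

enum AvatarKind { case userCreated, predefined }   // "first type" / "second type"

struct Avatar { let name: String; let kind: AvatarKind }
struct StickerObject { let basedOn: Avatar }

// If the device has a user-created avatar, base the displayed subset on it;
// otherwise fall back to graphical objects based on predefined avatars only.
func firstSubset(from avatars: [Avatar]) -> [StickerObject] {
    if let custom = avatars.first(where: { $0.kind == .userCreated }) {
        return [StickerObject(basedOn: custom)]
    }
    return avatars.filter { $0.kind == .predefined }
                  .map(StickerObject.init)
}

let available = [Avatar(name: "Robot", kind: .predefined),
                 Avatar(name: "Me", kind: .userCreated)]
print(firstSubset(from: available).map { $0.basedOn.name })   // ["Me"]
```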
110. The method of item 109, wherein the first type of avatar is a user-created avatar, and the second type of avatar is a predefined avatar.
111. The method of any of items 109-110, wherein the first type of avatar is based on a human character and the second type of avatar is based on a non-human character.
112. The method of any of items 109-111, further comprising:
after displaying the graphical objects in the first subset having an appearance based on a second type of avatar different from the first type without displaying one of the graphical objects in the first subset having an appearance based on the avatar of the first type, receiving a series of inputs corresponding to a request to create a first avatar of the first type;
in response to receiving the series of inputs, creating the first avatar of the first type and adding the first avatar to the set of avatars;
after creating the first avatar of the first type, receiving a request to redisplay the first display area; and
in response to receiving the request to redisplay the first display area, displaying the first display area with a first subset of the graphical objects including a first graphical object having an appearance based on the first avatar of the first type.
113. The method of any of items 109-112, further comprising:
after displaying the graphical objects in the first subset having an appearance based on a second type of avatar different from the first type without displaying one of the graphical objects in the first subset having an appearance based on the avatar of the first type, receiving a series of inputs corresponding to use of a graphical object corresponding to a second avatar of the first type;
after receiving the series of inputs corresponding to use of the graphical object corresponding to the second avatar of the first type, receiving a request to redisplay the first display area; and
in response to receiving the request to redisplay the first display area, displaying the first display area with a first subset of the graphical objects including the graphical object corresponding to the second avatar of the first type.
114. The method of any of items 109-113, wherein displaying the first display region further comprises displaying a sticker user interface object, the method further comprising:
Receiving an input directed to the sticker user interface object; and
in response to receiving the input directed to the sticker user interface object:
stopping displaying the first display area; and
displaying a sticker user interface including a second plurality of graphical objects corresponding to predefined content for insertion into the content creation user interface.
115. The method of item 114, wherein displaying the sticker user interface object includes displaying the sticker user interface object having a first appearance that includes a plurality of representations of the first type of avatar.
116. The method of item 115, further comprising:
after displaying the sticker user interface object having the first appearance, receiving a series of inputs corresponding to a request to create a third avatar of the first type;
receiving a request to redisplay the first display area; and
in response to receiving the request to redisplay the first display area, displaying the sticker user interface object having a second appearance, the second appearance including a representation of the third avatar of the first type.
117. The method of item 115, wherein displaying the sticker user interface object having the first appearance includes displaying an animated sequence of representations of the first type of avatar and the second type of avatar.
118. The method of any of items 114-117, wherein displaying the sticker user interface further comprises:
in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying a create user interface object that, when selected, displays a create user interface for creating avatars of the first type.
119. The method of any of items 109-118, wherein displaying the first display region further comprises displaying a plurality of emoticons.
120. The method of any of items 109-119, further comprising:
while displaying a first instance of a respective one of the graphical objects having an appearance based on the set of avatars available at the electronic device, receiving a first type of input directed to the first instance of the respective one of the graphical objects; and
in response to receiving the input of the first type, displaying a second instance of the respective one of the graphical objects.
121. The method of item 120, further comprising:
while displaying the second instance of the respective one of the graphical objects, receiving a second input directed to the second instance of the respective one of the graphical objects, wherein the second input comprises a stationary first portion followed by a moving second portion comprising the second input; and
in response to receiving the second input:
in accordance with a determination that the second input satisfies a first criterion, sending a sticker corresponding to the respective one of the graphical objects to a recipient user; and
in accordance with a determination that the second input does not satisfy the first criteria, forgoing sending the sticker corresponding to the respective one of the graphical objects to the recipient user.
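Item 121 describes a second input with a stationary first portion followed by a moving second portion, after which the sticker is either sent or not depending on a first criterion. The sketch below uses an invented hold-then-drag model and an assumed criterion (a long enough hold ending over the conversation area); neither is drawn from the specification.

```swift
import Foundation

struct DragInput {
    let holdDuration: TimeInterval      // stationary first portion
    let endedOverConversation: Bool     // where the moving second portion finished
}

// Send the sticker only when the assumed first criterion is satisfied;
// otherwise forgo sending it.
func handleStickerDrag(_ input: DragInput, send: (String) -> Void) {
    if input.holdDuration >= 0.5 && input.endedOverConversation {
        send("sticker")
    }
}

handleStickerDrag(DragInput(holdDuration: 0.7, endedOverConversation: true)) {
    print("sent:", $0)
}
```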
122. The method of item 120, wherein displaying the second instance of the respective one of the graphical objects further comprises displaying a send user interface object, the method further comprising:
receiving an input directed to the sending user interface object; and
in response to receiving the input directed to the sending user interface object, sending a sticker corresponding to the respective one of the graphical objects to a recipient user.
123. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for performing the method of any of items 108-122.
124. An electronic device, comprising:
a display device;
one or more input devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 108 to 122.
125. An electronic device, comprising:
a display device;
one or more input devices; and
means for performing the method of any of items 108 to 122.
126. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more input devices, the one or more programs including instructions for:
Displaying a content creation user interface via the display device;
while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises:
in response to receiving the request, displaying, via the display device, the first display region including a first subset of graphical objects, wherein the graphical objects have an appearance based on a set of avatars available at the electronic device, including:
in accordance with a determination that the set of avatars includes a first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the avatar of the first type; and in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects in the first subset that have an appearance based on an avatar of a second type different from the first type without displaying one of the graphical objects in the first subset that have an appearance based on the avatar of the first type.
127. An electronic device, comprising:
a display device;
one or more input devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
displaying a content creation user interface via the display device;
while displaying the content-creation user interface, receiving, via the one or more input devices, a request to display a first display area comprising a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises:
in response to receiving the request, displaying, via the display device, the first display region including a first subset of graphical objects, wherein the graphical objects have an appearance based on a set of avatars available at the electronic device, including:
in accordance with a determination that the set of avatars includes a first type of avatar, displaying one of the graphical objects in the first subset having an appearance based on the avatar of the first type; and in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects of the first subset that have an appearance based on an avatar of a second type different from the first type without displaying one of the graphical objects of the first subset that have an appearance based on the avatar of the first type.
128. An electronic device, comprising:
a display device;
one or more input devices;
means for displaying a content creation user interface via the display device;
means for receiving, via the one or more input devices, a request to display a first display area while displaying the content-creation user interface, the first display area including a plurality of graphical objects corresponding to predefined content for insertion into the content-creation user interface, wherein displaying the first display area comprises:
in response to receiving the request, displaying, via the display device, the first display region including a first subset of graphical objects, wherein the graphical objects have an appearance based on a set of avatars available at the electronic device, including:
means for, in accordance with a determination that the set of avatars includes an avatar of the first type, displaying one of the graphical objects in the first subset having an appearance based on the avatar of the first type; and
means for, in accordance with a determination that the set of avatars does not include any avatars of the first type, displaying the graphical objects of the first subset that have appearances based on avatars of a second type different from the first type without displaying one of the graphical objects of the first subset that have appearances based on the avatars of the first type.
129. A method, comprising:
at an electronic device having one or more communication devices, wherein a user is associated with the electronic device:
receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and
in response to receiving a request to transmit the first message:
in accordance with a determination that the first contactable user satisfies a set of sharing criteria, wherein the set of sharing criteria includes a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient:
transmitting, to the first contactable user via the one or more communication devices:
the first message, and
contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
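Item 129's core decision can be sketched as follows; the field names are hypothetical and only the first sharing criterion (approved recipient) is modeled.

```swift
import Foundation

struct ContactableUser { let identifier: String; let isApprovedRecipient: Bool }
struct OutgoingMessage { let text: String; var attachedContactInfo: Bool = false }

// The message always goes out; the sender's contact information is attached
// only when the recipient satisfies the sharing criteria.
func prepareMessage(_ text: String, for recipient: ContactableUser) -> OutgoingMessage {
    var message = OutgoingMessage(text: text)
    if recipient.isApprovedRecipient {          // first sharing criterion
        message.attachedContactInfo = true      // include the sender's contact info
    }
    return message
}

let friend = ContactableUser(identifier: "+1-555-0100", isApprovedRecipient: true)
print(prepareMessage("Hello", for: friend).attachedContactInfo)   // true
```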
130. The method of item 129, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users:
Receiving a user input to update the contact information of the user associated with the electronic device; and
updating the contact information of the user associated with the electronic device in response to receiving the user input updating the contact information of the user associated with the electronic device without transmitting the contact information of the user associated with the electronic device to the first contactable user in response to the user input updating the contact information.
131. The method of any of items 129-130, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user.
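The second sharing criterion of item 131 compares when the contact information was last updated with when it was last sent to a given recipient. A minimal sketch, assuming per-recipient timestamps, follows.

```swift
import Foundation

struct SharingState {
    var contactInfoUpdatedAt: Date
    var lastSentAt: [String: Date] = [:]   // keyed by recipient identifier

    // Contact info is re-shared only if it changed since the last transmission.
    func needsResend(to recipient: String) -> Bool {
        guard let sent = lastSentAt[recipient] else { return true }
        return contactInfoUpdatedAt > sent
    }
}

var state = SharingState(contactInfoUpdatedAt: Date())
state.lastSentAt["bob"] = Date(timeIntervalSinceNow: -3600)   // sent an hour ago
print(state.needsResend(to: "bob"))   // true: info changed since the last send
```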
132. The method of any of items 129-131, further comprising:
in response to receiving the request to transmit the first message:
in accordance with the determination that the first contactable user does not satisfy the set of sharing criteria, concurrently displaying:
the first message, and
an indication that the contact information is not transmitted to the first contactable user.
133. The method of any of items 129-132, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users:
providing a plurality of predetermined options for identifying whether a respective contactable user corresponds to an approved recipient, the plurality of predetermined options including one or more of:
a first recipient option, the first recipient option representing: a contactable user in a set of contactable users associated with the user of the electronic device corresponds to an approved recipient, and a contactable user not in the set of contactable users associated with the user of the electronic device does not correspond to an approved recipient,
a second recipient option indicating that all contactable users correspond to approved recipients, an
A third recipient option indicating that no contactable user corresponds to an approved recipient.
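The three predetermined options of item 133 map naturally onto a three-case enumeration; the sketch below uses invented option names and a plain string set standing in for the user's contacts.

```swift
import Foundation

enum SharingAudience {
    case contactsOnly   // only users in the device owner's contacts are approved
    case everyone       // all contactable users correspond to approved recipients
    case noOne          // no contactable user corresponds to an approved recipient
}

func isApprovedRecipient(_ id: String, audience: SharingAudience,
                         contacts: Set<String>) -> Bool {
    switch audience {
    case .contactsOnly: return contacts.contains(id)
    case .everyone:     return true
    case .noOne:        return false
    }
}

print(isApprovedRecipient("alice", audience: .contactsOnly, contacts: ["alice", "bob"]))
```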
134. The method of any of items 129 to 133, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users:
Receiving a set of one or more inputs selecting a graphical representation of the user associated with the electronic device, including an input selecting a graphical object; and
in response to receiving user input selecting the graphical representation:
updating the contact information of the user associated with the electronic device to include the selected graphical representation without transmitting the contact information of the user associated with the electronic device.
135. The method of item 134, further comprising:
providing the contact information including the selected graphical representation to a plurality of applications of the electronic device.
136. The method of any of items 129-135, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users:
accessing a name of the user associated with the electronic device from a set of contactable users associated with the user of the electronic device;
displaying the name of the user in an editable format;
receiving user input editing the name of the user associated with the electronic device;
in response to receiving user input editing the name:
Updating the contact information of the user associated with the electronic device to include the selected name without transmitting the contact information of the user associated with the electronic device; and
providing the contact information including the selected name to a plurality of applications of the electronic device.
137. The method of any of items 129 to 136, further comprising:
subsequent to receiving the request to transmit the first message to the set of contactable users, receiving a second request to transmit a second message to a second set of one or more contactable users, wherein the second set of one or more contactable users includes the first contactable user; and
in response to receiving the second request to transmit the second message:
in accordance with a determination that the first contactable user satisfies the set of sharing criteria, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user:
transmitting, to the first contactable user via the one or more communication devices:
the second message, and
contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the second message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
138. The method of any of items 129-137, wherein the set of contactable users includes a second contactable user, the method further comprising:
in response to receiving a request to transmit the first message:
in accordance with a determination that the second contactable user satisfies the set of sharing criteria, the set of sharing criteria includes the first sharing criteria that are satisfied when the second contactable user corresponds to an approved recipient:
transmitting, to the second contactable user via the one or more communication devices:
the first message, and
contact information of the user associated with the electronic device; and
in accordance with a determination that the second contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
139. The method of any of items 129-138, wherein the contact information includes information corresponding to an avatar.
140. The method of any of items 129 to 139, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users, concurrently displaying:
the first message, and
an affordance that, when selected, causes the device to display a user interface that includes one or more options for configuring whether the first contactable user corresponds to an approved recipient.
141. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with one or more communication devices, the one or more programs including instructions for performing the method of any of items 129-140.
142. An electronic device, comprising:
one or more communication devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 129 to 140.
143. An electronic device, comprising:
one or more communication devices; and
means for performing the method of any of items 129 to 140.
144. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with one or more communication devices, wherein a user is associated with the electronic device and the one or more programs include instructions for:
receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and
in response to receiving a request to transmit the first message:
in accordance with a determination that the first contactable user satisfies a set of sharing criteria, wherein the set of sharing criteria includes a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient:
transmitting, to the first contactable user via the one or more communication devices:
the first message, and
contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
145. An electronic device, comprising:
one or more communication devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, wherein a user is associated with the electronic device and the one or more programs include instructions for:
receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and
in response to receiving a request to transmit the first message:
in accordance with a determination that the first contactable user satisfies a set of sharing criteria, wherein the set of sharing criteria includes a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient:
transmitting, to the first contactable user via the one or more communication devices:
The first message, and
contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
146. An electronic device, comprising:
one or more communication devices, wherein a user is associated with the electronic device;
means for receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and
means for, in response to receiving a request to transmit the first message:
in accordance with a determination that the first contactable user satisfies a set of sharing criteria, wherein the set of sharing criteria includes a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient:
transmitting, to the first contactable user via the one or more communication devices:
the first message, and
contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
147. A method, comprising:
at an electronic device having a display device and having one or more communication devices:
receiving a first message via the one or more communication devices;
subsequent to receiving the first message, receiving a request to display the first message; and
in response to receiving the request to display the first message:
in accordance with a determination that a first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criteria that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device:
the first message, and
a visual indication that updated contact information is available for the first contactable user; and
in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying on the display device:
the first message without displaying the visual indication that updated contact information is available for the first contactable user.
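Item 147's prompting criteria can be sketched as a lookup keyed by the message sender; the types and the single prompting criterion modeled here are assumptions for illustration.

```swift
import Foundation

struct IncomingMessage { let sender: String; let text: String }

struct ContactPrompts {
    var updatedInfoReceivedFor: Set<String> = []

    // Show the visual indication only when updated contact information has
    // been received for the sender of the displayed message.
    func shouldShowIndication(for message: IncomingMessage) -> Bool {
        updatedInfoReceivedFor.contains(message.sender)   // first prompting criterion
    }
}

let prompts = ContactPrompts(updatedInfoReceivedFor: ["alice"])
let message = IncomingMessage(sender: "alice", text: "Hi!")
print(prompts.shouldShowIndication(for: message))   // true: show the indication with the message
```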
148. The method of item 147, wherein the set of prompting criteria includes a second prompting criteria that is met when the first message is part of a messaging conversation in which the first contactable user is a participant.
149. The method of any of items 147-148, further comprising:
detecting activation of the visual indication that updated contact information is available for the first contactable user; and
in response to detecting activation of the visual indication that updated contact information is available for the first contactable user:
in accordance with a determination that the first contactable user does not correspond to an existing entry in a set of contactable users associated with the user of the electronic device, displaying a selectable option for creating a new entry for the first contactable user in the set of contactable users associated with the user of the electronic device using the contact information.
150. The method of any of items 147-149, further comprising:
detecting activation of the visual indication that updated contact information is available for the first contactable user; and
in response to detecting activation of the visual indication that updated contact information is available for the first contactable user:
in accordance with a determination that the first contactable user corresponds to an existing entry in the set of contactable users associated with the user of the electronic device and that the received contact information includes a modified graphical representation of the first contactable user and a modified name of the contactable user, displaying a plurality of selectable options including two or more of:
a selectable option for updating the existing entry in the set of contactable users associated with the user of the electronic device with the modified graphical representation of the first contactable user and the modified name of the contactable user,
a selectable option for updating the existing entry in the set of contactable users associated with the user of the electronic device with the modified graphical representation of the first contactable user without updating the existing entry with the modified name of the contactable user, and a selectable option for updating the existing entry in the set of contactable users associated with the user of the electronic device with the modified name of the first contactable user without updating the existing entry with the modified graphical representation of the contactable user.
151. The method of any of items 147 to 150, further comprising:
detecting activation of the visual indication that updated contact information is available for the first contactable user; and
in response to detecting activation of the visual indication that updated contact information is available for the first contactable user:
in accordance with a determination that the first contactable user corresponds to an existing entry in the set of contactable users associated with the user of the electronic device and that the received contact information includes a modified graphical representation of the first contactable user and does not include a modified name of the contactable user, updating the existing entry with the modified graphical representation of the contactable user.
152. The method of any of items 147 to 151, further comprising:
detecting activation of the visual indication that updated contact information is available for the first contactable user; and
in response to detecting activation of the visual indication that updated contact information is available for the first contactable user:
in accordance with a determination that the first contactable user corresponds to an existing entry in the set of contactable users associated with the user of the electronic device and that the received contact information includes a modified name of the first contactable user and does not include a modified graphical representation of the contactable user, updating the existing entry with the modified name of the contactable user.
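Items 150-152 distinguish between updates where both the name and the graphical representation changed (the user is offered a choice) and updates where only one changed (the existing entry is updated directly). The following sketch, with invented types, captures that branching.

```swift
import Foundation

struct ReceivedContactInfo { let newName: String?; let newPhoto: Data? }

enum UpdateAction {
    case offerChoices        // both changed: present the selectable options (item 150)
    case applyPhoto(Data)    // only the photo changed (item 151)
    case applyName(String)   // only the name changed (item 152)
    case nothing
}

func action(for info: ReceivedContactInfo) -> UpdateAction {
    switch (info.newName, info.newPhoto) {
    case (.some, .some):             return .offerChoices
    case let (nil, .some(photo)):    return .applyPhoto(photo)
    case let (.some(name), nil):     return .applyName(name)
    default:                         return .nothing
    }
}

print(action(for: ReceivedContactInfo(newName: "Alice B.", newPhoto: nil)))
```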
153. The method of any of items 147-152, wherein the received contact information includes a modified graphical representation of the first contactable user, and wherein the first contactable user corresponds to an existing entry in a set of contactable users associated with the user of the electronic device, the method further comprising:
receiving user input to update the existing entry using the modified graphical representation of the first contactable user; and
in response to receiving the user input to update the existing entry using the modified graphical representation of the first contactable user:
updating the existing entry in the set of contactable users associated with the user of the electronic device using the modified graphical representation of the first contactable user; and
displaying a selectable affordance that, when selected, enables the electronic device to automatically update the graphical representation of the first contactable user in the future.
154. The method of any of items 147-153, wherein the contact information comprises a name of the first contactable user or a graphical representation of the first contactable user, the method further comprising:
updating the set of contactable users associated with the user of the electronic device with a modified name of the first contactable user or a modified graphical representation of the contactable user; and
wherein the contact information in the set of contactable users associated with the user of the electronic device that includes the modified name or the modified graphical representation is available to a plurality of applications of the electronic device.
155. The method of any of items 147-154, wherein the electronic device is associated with a user, the method further comprising:
in response to receiving the request to display the first message:
in accordance with a determination that the first contactable user does not satisfy a set of sharing criteria, wherein the set of sharing criteria includes a first sharing criteria that is satisfied when the first contactable user corresponds to an approved recipient:
concurrently displaying an indication that updated contact information of the user of the electronic device is available for transmission to the first contactable user and the first message.
156. The method of any of items 147-155, wherein the first message is received in a conversation that includes the first contactable user and a second contactable user, and wherein the electronic device has received updated contact information for the first contactable user and updated contact information for the second contactable user, the method further comprising:
detecting activation of the visual indication that updated contact information is available for the first contactable user; and
in response to detecting activation of the visual indication that updated contact information is available for the first contactable user, displaying:
a second visual indication that the updated contact information is available for the first contactable user, wherein the second visual indication comprises a visual representation of at least a portion of the received updated contact information for the first contactable user; and a third visual indication that the updated contact information is available for the second contactable user, wherein the third visual indication comprises a visual representation of at least a portion of the received updated contact information for the second contactable user.
157. The method of any of items 147-156, wherein the contact information of the first contactable user includes information corresponding to an avatar.
158. The method of any of items 147-157, wherein the visual indication that updated contact information is available for the first contactable user is displayed concurrently with a visual representation of at least a portion of a messaging conversation including a plurality of messages, the plurality of messages including a second message transmitted to the first contactable user and a third message received from the first contactable user.
159. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more communication devices, the one or more programs including instructions for performing the method of any of items 147-158.
160. An electronic device, comprising:
a display device;
one or more communication devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of items 147 to 158.
161. An electronic device, comprising:
a display device;
one or more communication devices; and
means for performing the method of any of items 147 to 158.
162. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more communication devices, the one or more programs including instructions for:
Receiving a first message via the one or more communication devices;
subsequent to receiving the first message, receiving a request to display the first message;
and
in response to receiving the request to display the first message:
in accordance with a determination that a first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criteria that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device:
the first message, and
a visual indication that updated contact information is available for the first contactable user; and
in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying on the display device:
the first message without displaying the visual indication that updated contact information is available for the first contactable user.
163. An electronic device, comprising:
a display device;
one or more communication devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
Receiving a first message via the one or more communication devices;
subsequent to receiving the first message, receiving a request to display the first message; and
in response to receiving the request to display the first message:
in accordance with a determination that a first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criteria that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device:
the first message, and
a visual indication that updated contact information is available for the first contactable user; and
in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying on the display device:
the first message without displaying the visual indication that updated contact information is available for the first contactable user.
164. An electronic device, comprising:
a display device;
one or more communication devices;
means for receiving a first message via the one or more communication devices;
means for receiving a request to display the first message subsequent to receiving the first message; and
means for, in response to receiving the request to display the first message:
in accordance with a determination that a first contactable user satisfies a set of prompting criteria, wherein the set of prompting criteria includes a first prompting criteria that is satisfied when updated contact information corresponding to the first contactable user has been received, concurrently displaying on the display device:
the first message, and
a visual indication that updated contact information is available for the first contactable user; and
in accordance with a determination that the first contactable user does not meet the set of prompting criteria, displaying on the display device:
the first message without displaying the visual indication that updated contact information is available for the first contactable user.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the technology and its practical applications. Those skilled in the art are thus well able to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
Although the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such changes and modifications are to be considered as included within the scope of the disclosure and examples as defined by the following claims.
As described above, one aspect of the present technology is to collect and use data from a variety of sources to display and use an avatar. The present disclosure contemplates that, in some instances, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, phone numbers, email addresses, twitter IDs, home addresses, data or records relating to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be useful to benefit the user. For example, the personal information data may be used to present a recommended image for contact representation. In addition, the present disclosure also contemplates other uses for which personal information data is beneficial to a user. For example, health and fitness data may be used to provide insight into the overall health condition of a user, or may be used as positive feedback for individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will comply with established privacy policies and/or privacy practices. In particular, such entities should enforce and adhere to the use of privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining privacy and security of personal information data. Such policies should be easily accessible to users and should be updated as data is collected and/or used. Personal information from the user should be collected for legitimate and legitimate uses by the entity and not shared or sold outside of these legitimate uses. Furthermore, such acquisition/sharing should be performed after receiving user informed consent. Furthermore, such entities should consider taking any necessary steps to defend and secure access to such personal information data, and to ensure that others who have access to the personal information data comply with their privacy policies and procedures. In addition, such entities may subject themselves to third party evaluations to prove compliance with widely accepted privacy policies and practices. In addition, policies and practices should be adjusted to the particular type of personal information data collected and/or accessed, and to applicable laws and standards including specific considerations of jurisdiction. For example, in the united states, the collection or acquisition of certain health data may be governed by federal and/or state laws, such as the health insurance association and accountability act (HIPAA); while other countries may have health data subject to other regulations and policies and should be treated accordingly. Therefore, different privacy practices should be maintained for different personal data types in each country.
Regardless of the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed, and then reminded again just before the personal information data is accessed by the application.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes the risk of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
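As an informal illustration of the de-identification techniques described above, the following Swift sketch shows one way specific identifiers could be dropped and location data coarsened to the city level before storage. The types and function names here are hypothetical assumptions for this sketch and are not taken from any disclosed embodiment.

```swift
import Foundation

// Hypothetical record of user data prior to de-identification.
struct UserRecord {
    var name: String
    var dateOfBirth: Date?
    var streetAddress: String?
    var city: String
}

// De-identified form: specific identifiers removed, location kept only at city level.
struct DeidentifiedRecord {
    var city: String
}

// Removes specific identifiers (name, date of birth, street-level address)
// and reduces location specificity to the city level.
func deidentify(_ record: UserRecord) -> DeidentifiedRecord {
    return DeidentifiedRecord(city: record.city)
}
```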
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need to access such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content may be recommended to a user by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with the user, other non-personal information available, or publicly available information.

Claims (33)

1. An electronic device, comprising:
one or more communication devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, wherein a user is associated with the electronic device and the one or more programs include instructions for:
receiving a set of one or more inputs selecting a graphical representation of a user associated with the electronic device, the set of one or more inputs including an input selecting a graphical object;
in response to receiving input selecting the graphical representation:
updating contact information of the user associated with the electronic device to include the selected graphical representation without transmitting the contact information of the user associated with the electronic device to a set of contactable users;
after updating the contact information of the user associated with the electronic device to include the selected graphical representation, receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and
in response to receiving a request to transmit the first message:
in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient:
transmitting, to the first contactable user via the one or more communication devices:
the first message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
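Outside the claim language itself, the conditional flow recited in claim 1 can be sketched informally. The Swift code below is a hypothetical illustration only (names such as ContactableUser, isApprovedRecipient, and OutgoingPayload are invented for this sketch and are not part of the claimed device): the message is always transmitted, while the user's contact information is attached only when the recipient satisfies the sharing criteria.

```swift
import Foundation

// Hypothetical types used only for this sketch.
struct ContactInfo {
    var name: String
    var avatarData: Data?
}

struct ContactableUser {
    var identifier: String
    var isApprovedRecipient: Bool
}

struct OutgoingPayload {
    var message: String
    var senderContactInfo: ContactInfo?  // present only when the sharing criteria are met
}

// Builds the payload for one recipient: the message is always included, and the
// sender's contact information is attached only if the recipient satisfies the
// sharing criteria (here, simply being an approved recipient).
func payload(for recipient: ContactableUser,
             message: String,
             senderInfo: ContactInfo) -> OutgoingPayload {
    if recipient.isApprovedRecipient {
        return OutgoingPayload(message: message, senderContactInfo: senderInfo)
    } else {
        return OutgoingPayload(message: message, senderContactInfo: nil)
    }
}
```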
2. The electronic device of claim 1, the one or more programs further comprising instructions to:
prior to receiving a request to transmit the first message to the set of contactable users:
receiving a user input to update the contact information of the user associated with the electronic device; and
in response to receiving the user input to update the contact information of the user associated with the electronic device, updating the contact information of the user associated with the electronic device without transmitting the contact information of the user associated with the electronic device to the first contactable user in response to the user input to update the contact information.
3. The electronic device of claim 1, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user.
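A minimal sketch of the second sharing criterion of claim 3 follows, assuming a per-recipient record of when the contact information was last transmitted. The type and property names below are hypothetical and introduced only for illustration.

```swift
import Foundation

// Hypothetical per-recipient bookkeeping for the second sharing criterion.
struct SharingState {
    var contactInfoLastUpdated: Date         // when the user last changed their contact info
    var lastTransmittedToRecipient: Date?    // when it was last sent to this recipient, if ever
}

// The criterion is met when the contact information has been updated since it was
// most recently transmitted to this recipient (or has never been transmitted at all).
func secondSharingCriterionIsMet(_ state: SharingState) -> Bool {
    guard let lastSent = state.lastTransmittedToRecipient else {
        return true
    }
    return state.contactInfoLastUpdated > lastSent
}
```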
4. The electronic device of claim 1, the one or more programs further comprising instructions to:
in response to receiving a request to transmit the first message:
in accordance with the determination that the first contactable user does not satisfy the set of sharing criteria, concurrently displaying:
the first message, and
an indication that the contact information is not transmitted to the first contactable user.
5. The electronic device of claim 1, the one or more programs further comprising instructions to:
prior to receiving a request to transmit the first message to the set of contactable users:
providing a plurality of predetermined options for identifying whether a respective contactable user corresponds to an approved recipient, the plurality of predetermined options including one or more of:
a first recipient option indicating that a contactable user in a set of contactable users associated with the user of the electronic device corresponds to an approved recipient, and that a contactable user not in the set of contactable users associated with the user of the electronic device does not correspond to an approved recipient,
a second recipient option indicating that all contactable users correspond to approved recipients, and
a third recipient option indicating that no contactable user corresponds to an approved recipient.
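The three predetermined recipient options of claim 5 map naturally onto a small enumeration. The sketch below is illustrative only; the enum cases and function name are assumptions, not part of the claims.

```swift
// Hypothetical enumeration of the three predetermined recipient options.
enum SharingAudience {
    case contactsOnly   // only users in the owner's set of contacts are approved recipients
    case everyone       // all contactable users are approved recipients
    case noOne          // no contactable user is an approved recipient
}

// Decides whether a recipient counts as an approved recipient under the selected option.
func isApprovedRecipient(_ recipientID: String,
                         audience: SharingAudience,
                         contacts: Set<String>) -> Bool {
    switch audience {
    case .contactsOnly:
        return contacts.contains(recipientID)
    case .everyone:
        return true
    case .noOne:
        return false
    }
}
```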
6. The electronic device of claim 1, the one or more programs further comprising instructions to:
providing the contact information including the selected graphical representation to a plurality of applications of the electronic device.
7. The electronic device of claim 1, the one or more programs further comprising instructions to:
prior to receiving a request to transmit the first message to the set of contactable users:
accessing a name of the user associated with the electronic device from a set of contactable users associated with the user of the electronic device;
displaying the name of the user in an editable format;
receiving user input editing the name of the user associated with the electronic device; and
in response to receiving user input editing the name:
updating the contact information of the user associated with the electronic device to include the selected name without transmitting the contact information of the user associated with the electronic device; and
providing the contact information including the selected name to a plurality of applications of the electronic device.
8. The electronic device of claim 1, the one or more programs further comprising instructions to:
subsequent to receiving the request to transmit the first message to the set of contactable users, receiving a second request to transmit a second message to a second set of one or more contactable users, wherein the second set of one or more contactable users includes the first contactable user; and
in response to receiving the second request to transmit the second message:
in accordance with a determination that the first contactable user satisfies the set of sharing criteria, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user:
transmitting, to the first contactable user via the one or more communication devices:
the second message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the second message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
9. The electronic device of claim 1, wherein the set of contactable users includes a second contactable user, the one or more programs further including instructions for:
in response to receiving a request to transmit the first message:
in accordance with a determination that the second contactable user satisfies the set of sharing criteria, wherein the set of sharing criteria includes the first sharing criterion that is satisfied when the second contactable user corresponds to an approved recipient:
transmitting, to the second contactable user via the one or more communication devices:
the first message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the second contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the second contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
10. The electronic device of claim 1, wherein the contact information includes information corresponding to an avatar.
11. The electronic device of claim 1, the one or more programs further comprising instructions to:
concurrently displaying, prior to receiving a request to transmit the first message to the set of contactable users:
the first message, and
an affordance that, when selected, causes the device to display a user interface that includes one or more options for configuring whether the first contactable user corresponds to an approved recipient.
12. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with one or more communication devices, wherein a user is associated with the electronic device and the one or more programs include instructions for:
receiving a set of one or more inputs selecting a graphical representation of a user associated with the electronic device, the set of one or more inputs including an input selecting a graphical object;
in response to receiving input selecting the graphical representation:
updating contact information of the user associated with the electronic device to include the selected graphical representation without transmitting the contact information of the user associated with the electronic device to a set of contactable users;
after updating the contact information of the user associated with the electronic device to include the selected graphical representation, receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and
in response to receiving a request to transmit the first message:
in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient:
transmitting, to the first contactable user via the one or more communication devices:
the first message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
13. The non-transitory computer readable storage medium of claim 12, the one or more programs further comprising instructions for:
prior to receiving a request to transmit the first message to the set of contactable users:
receiving a user input to update the contact information of the user associated with the electronic device; and
in response to receiving the user input to update the contact information of the user associated with the electronic device, updating the contact information of the user associated with the electronic device without transmitting the contact information of the user associated with the electronic device to the first contactable user in response to the user input to update the contact information.
14. The non-transitory computer-readable storage medium of claim 12, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user.
15. The non-transitory computer readable storage medium of claim 12, the one or more programs further comprising instructions for:
in response to receiving a request to transmit the first message:
in accordance with the determination that the first contactable user does not satisfy the set of sharing criteria, concurrently displaying:
the first message, and
an indication that the contact information is not transmitted to the first contactable user.
16. The non-transitory computer readable storage medium of claim 12, the one or more programs further comprising instructions for:
prior to receiving a request to transmit the first message to the set of contactable users:
providing a plurality of predetermined options for identifying whether a respective contactable user corresponds to an approved recipient, the plurality of predetermined options including one or more of:
a first recipient option indicating that a contactable user in a set of contactable users associated with the user of the electronic device corresponds to an approved recipient, and that a contactable user not in the set of contactable users associated with the user of the electronic device does not correspond to an approved recipient,
a second recipient option indicating that all contactable users correspond to approved recipients, and
a third recipient option indicating that no contactable user corresponds to an approved recipient.
17. The non-transitory computer readable storage medium of claim 12, the one or more programs further comprising instructions for:
providing the contact information including the selected graphical representation to a plurality of applications of the electronic device.
18. The non-transitory computer readable storage medium of claim 12, the one or more programs further comprising instructions for:
prior to receiving a request to transmit the first message to the set of contactable users:
accessing a name of the user associated with the electronic device from a set of contactable users associated with the user of the electronic device;
displaying the name of the user in an editable format;
receiving user input editing the name of the user associated with the electronic device; and
in response to receiving user input editing the name:
updating the contact information of the user associated with the electronic device to include the selected name without transmitting the contact information of the user associated with the electronic device; and
providing the contact information including the selected name to a plurality of applications of the electronic device.
19. The non-transitory computer readable storage medium of claim 12, the one or more programs further comprising instructions for:
subsequent to receiving the request to transmit the first message to the set of contactable users, receiving a second request to transmit a second message to a second set of one or more contactable users, wherein the second set of one or more contactable users includes the first contactable user; and
in response to receiving the second request to transmit the second message:
in accordance with a determination that the first contactable user satisfies the set of sharing criteria, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user:
transmitting, to the first contactable user via the one or more communication devices:
the second message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the second message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
20. The non-transitory computer readable storage medium of claim 12, wherein the set of contactable users includes a second contactable user, the one or more programs further including instructions for:
in response to receiving a request to transmit the first message:
in accordance with a determination that the second contactable user satisfies the set of sharing criteria, wherein the set of sharing criteria includes the first sharing criterion that is satisfied when the second contactable user corresponds to an approved recipient:
transmitting, to the second contactable user via the one or more communication devices:
the first message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the second contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the second contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
21. The non-transitory computer-readable storage medium of claim 12, wherein the contact information comprises information corresponding to an avatar.
22. The non-transitory computer readable storage medium of claim 12, the one or more programs further comprising instructions for:
concurrently displaying, prior to receiving a request to transmit the first message to the set of contactable users:
the first message, and
an affordance that, when selected, causes the device to display a user interface that includes one or more options for configuring whether the first contactable user corresponds to an approved recipient.
23. A method, comprising:
at an electronic device having one or more communication devices, wherein a user is associated with the electronic device:
receiving a set of one or more inputs selecting a graphical representation of a user associated with the electronic device, the set of one or more inputs including an input selecting a graphical object;
in response to receiving input selecting the graphical representation:
updating contact information of the user associated with the electronic device to include the selected graphical representation without transmitting the contact information of the user associated with the electronic device to a set of contactable users;
after updating the contact information of the user associated with the electronic device to include the selected graphical representation, receiving a request to transmit a first message to a set of contactable users, the set of contactable users including a first contactable user; and
in response to receiving a request to transmit the first message:
in accordance with a determination that the first contactable user satisfies a set of sharing criteria, the set of sharing criteria including a first sharing criterion that is satisfied when the first contactable user corresponds to an approved recipient:
transmitting, to the first contactable user via the one or more communication devices:
the first message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
24. The method of claim 23, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users:
receiving a user input to update the contact information of the user associated with the electronic device; and
in response to receiving the user input to update the contact information of the user associated with the electronic device, updating the contact information of the user associated with the electronic device without transmitting the contact information of the user associated with the electronic device to the first contactable user in response to the user input to update the contact information.
25. The method of claim 23, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user.
26. The method of claim 23, further comprising:
in response to receiving a request to transmit the first message:
in accordance with the determination that the first contactable user does not satisfy the set of sharing criteria, concurrently displaying:
the first message, and
an indication that the contact information is not transmitted to the first contactable user.
27. The method of claim 23, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users:
providing a plurality of predetermined options for identifying whether a respective contactable user corresponds to an approved recipient, the plurality of predetermined options including one or more of:
a first recipient option indicating that a contactable user in a set of contactable users associated with the user of the electronic device corresponds to an approved recipient, and that a contactable user not in the set of contactable users associated with the user of the electronic device does not correspond to an approved recipient,
a second recipient option indicating that all contactable users correspond to approved recipients, and
a third recipient option indicating that no contactable user corresponds to an approved recipient.
28. The method of claim 23, further comprising:
providing the contact information including the selected graphical representation to a plurality of applications of the electronic device.
29. The method of claim 23, further comprising:
prior to receiving a request to transmit the first message to the set of contactable users:
accessing a name of the user associated with the electronic device from a set of contactable users associated with the user of the electronic device;
displaying the name of the user in an editable format;
receiving user input editing the name of the user associated with the electronic device; and
in response to receiving user input editing the name:
updating the contact information of the user associated with the electronic device to include the selected name without transmitting the contact information of the user associated with the electronic device; and
providing the contact information including the selected name to a plurality of applications of the electronic device.
30. The method of claim 23, further comprising:
subsequent to receiving the request to transmit the first message to the set of contactable users, receiving a second request to transmit a second message to a second set of one or more contactable users, wherein the second set of one or more contactable users includes the first contactable user; and
in response to receiving the second request to transmit the second message:
in accordance with a determination that the first contactable user satisfies the set of sharing criteria, wherein the set of sharing criteria includes a second sharing criterion that is satisfied when the contact information has been updated since the contact information was most recently transmitted to the first contactable user:
transmitting, to the first contactable user via the one or more communication devices:
the second message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the first contactable user does not satisfy the set of sharing criteria:
transmitting the second message to the first contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
31. The method of claim 23, wherein the set of contactable users includes a second contactable user, the method further comprising:
in response to receiving a request to transmit the first message:
in accordance with a determination that the second contactable user satisfies the set of sharing criteria, wherein the set of sharing criteria includes the first sharing criterion that is satisfied when the second contactable user corresponds to an approved recipient:
transmitting, to the second contactable user via the one or more communication devices:
the first message, and
the contact information of the user associated with the electronic device; and
in accordance with a determination that the second contactable user does not satisfy the set of sharing criteria:
transmitting the first message to the second contactable user via the one or more communication devices without transmitting the contact information of the user associated with the electronic device.
32. The method of claim 23, wherein the contact information comprises information corresponding to an avatar.
33. The method of claim 23, further comprising:
concurrently displaying, prior to receiving a request to transmit the first message to the set of contactable users:
the first message, and
an affordance that, when selected, causes the device to display a user interface that includes one or more options for configuring whether the first contactable user corresponds to an approved recipient.
CN202010776600.2A 2019-05-06 2020-03-31 Integration of head portraits with multiple applications Active CN111897614B (en)

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US201962843967P 2019-05-06 2019-05-06
US62/843,967 2019-05-06
US201962855891P 2019-05-31 2019-05-31
US62/855,891 2019-05-31
DKPA201970531A DK201970531A1 (en) 2019-05-06 2019-08-27 Avatar integration with multiple applications
DKPA201970531 2019-08-27
DKPA201970530A DK201970530A1 (en) 2019-05-06 2019-08-27 Avatar integration with multiple applications
DKPA201970530 2019-08-27
US16/582,570 2019-09-25
US16/582,570 US10659405B1 (en) 2019-05-06 2019-09-25 Avatar integration with multiple applications
US16/582,500 US20200358725A1 (en) 2019-05-06 2019-09-25 Avatar integration with multiple applications
US16/582,500 2019-09-25
US16/583,706 2019-09-26
US16/583,706 US20200358726A1 (en) 2019-05-06 2019-09-26 Avatar integration with multiple applications
CN202080001137.2A CN112204519A (en) 2019-05-06 2020-03-31 Integration of head portraits with multiple applications

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202080001137.2A Division CN112204519A (en) 2019-05-06 2020-03-31 Integration of head portraits with multiple applications

Publications (2)

Publication Number Publication Date
CN111897614A CN111897614A (en) 2020-11-06
CN111897614B (en) 2021-07-06

Family

ID=73249375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010776600.2A Active CN111897614B (en) 2019-05-06 2020-03-31 Integration of head portraits with multiple applications

Country Status (1)

Country Link
CN (1) CN111897614B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784039B (en) * 2021-08-03 2023-07-11 北京达佳互联信息技术有限公司 Head portrait processing method, head portrait processing device, electronic equipment and computer readable storage medium
CN114245158B (en) * 2021-12-03 2022-09-02 广州方硅信息技术有限公司 Live broadcast room head portrait special effect display method and device, equipment, medium and product thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1581901A (en) * 2003-08-01 2005-02-16 微软公司 Unified contact list
CN104836879A (en) * 2014-02-12 2015-08-12 腾讯科技(深圳)有限公司 Address list updating method, server and system
CN106101358A (en) * 2016-05-27 2016-11-09 珠海市魅族科技有限公司 A kind of method of contact person information updating and smart machine
CN107171934A (en) * 2017-05-05 2017-09-15 沈思远 Information processing method, instant communication client and the system of immediate communication tool
CN107613085A (en) * 2017-10-31 2018-01-19 武汉诚迈科技有限公司 Automatic mobile phone address book update method, server and user terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102347912B (en) * 2010-08-02 2014-11-05 腾讯科技(深圳)有限公司 Method and system for obtaining dynamic update in instant messaging software
US9891933B2 (en) * 2015-06-24 2018-02-13 International Business Machines Corporation Automated testing of GUI mirroring

Also Published As

Publication number Publication date
CN111897614A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
AU2020269590B2 (en) Avatar integration with multiple applications
US11380077B2 (en) Avatar creation user interface
JP7249392B2 (en) Avatar creation user interface
AU2023200867B2 (en) Avatar integration with multiple applications
CN113330488A (en) Virtual avatar animation based on facial feature movement
KR20200132995A (en) Creative camera
CN111897614B (en) Integration of head portraits with multiple applications
AU2024201007A1 (en) Avatar navigation, library, editing and creation user interface
AU2020101715A4 (en) Avatar creation user interface
EP3567457B1 (en) Avatar creation user interface
US20230343053A1 (en) Avatar creation user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant