CN113535306A - Avatar creation user interface - Google Patents


Info

Publication number
CN113535306A
Authority
CN
China
Prior art keywords
avatar
color
colors
option
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110820692.4A
Other languages
Chinese (zh)
Other versions
CN113535306B (en)
Inventor
M·万欧斯
J·瑞克瓦德
A·C·戴伊
A·古兹曼
N·V·斯卡普尔
C·威尔森
A·贝扎蒂
C·J·罗姆尼
G·耶基斯
G·P·A·巴利尔
J·D·加德纳
L·K·福塞尔
R·加西亚三世
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201870377A (DK179874B1)
Application filed by Apple Inc
Priority to CN202110820692.4A
Publication of CN113535306A
Application granted
Publication of CN113535306B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T3/04
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD], for prompting the player, e.g. by displaying a game menu
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD], using indicators, e.g. showing the condition of a game character on screen
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F13/63 Generating or modifying game content before or while executing the game program by the player, e.g. authoring using a level editor
    • A63F13/655 Generating or modifying game content before or while executing the game program automatically by game devices or servers from real world data, e.g. by importing photos of the player
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0233 Character input methods
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/0485 Scrolling or panning
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F9/451 Execution arrangements for user interfaces
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G06T3/40 Scaling the whole image or part thereof
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/174 Facial expression recognition
    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality, with interactive means for internal management of messages, for image or video messaging
    • A63F2300/5553 Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history; user representation in the game field, e.g. avatar
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06T2200/24 Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
    • G06T2207/30201 Face
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • G06T2219/2016 Rotation, translation, scaling
    • G06T2219/2021 Shape modification
    • G06T2219/2024 Style variation

Abstract

The invention provides an avatar creation user interface. The present disclosure relates generally to creating and editing avatars and to navigating avatar selection interfaces. In some examples, an avatar feature user interface includes a plurality of feature options that can be customized to create an avatar. In some examples, different types of avatars can be managed for different applications. In some examples, an interface for navigating among avatar types for an application is provided.

Description

Avatar creation user interface
This application is a divisional application of Chinese invention patent application No. 201811142889.1, filed on September 28, 2018, and entitled "Avatar Creation User Interface".
Cross Reference to Related Applications
This application is related to U.S. Provisional Application No. 62/668,200, entitled "Avatar Creation User Interface," filed on May 7, 2018. The contents of that application are hereby incorporated by reference in their entirety.
Technical Field
The present disclosure relates generally to computer user interfaces, and more particularly to techniques for creating and editing avatars.
Background
Avatars are used to represent the users of electronic devices. An avatar may represent the actual appearance of the user, or it may represent an idealized or entirely fictional representation of the user. The avatar can then be associated with the user such that the appearance of the avatar prompts others to connect or associate it with the user. Avatars can be created and edited for such uses, including use in multimedia communications.
Disclosure of Invention
However, some techniques for creating and editing avatars using electronic devices are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for creating and editing avatars. Such methods and interfaces optionally complement or replace other methods for creating and editing avatars. Such methods and interfaces reduce the cognitive burden placed on the user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges.
A method is described. The method is performed at an electronic device having a display and one or more input devices. The method comprises the following steps: displaying, via a display device, an avatar navigation user interface, the avatar navigation user interface including an avatar; while displaying the avatar navigation user interface, detecting, via one or more input devices, a gesture for the avatar navigation user interface; and in response to detecting the gesture: in accordance with a determination that the gesture is along the first direction, displaying a first type of avatar in an avatar navigation user interface; and in accordance with a determination that the gesture is along a second direction opposite the first direction, displaying a second type of avatar in the avatar navigation user interface that is different from the first type.
A non-transitory computer-readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus and one or more input devices. The one or more programs include programs for: displaying, via a display device, an avatar navigation user interface, the avatar navigation user interface including an avatar; while displaying the avatar navigation user interface, detecting, via one or more input devices, a gesture for the avatar navigation user interface; and in response to detecting the gesture: in accordance with a determination that the gesture is along the first direction, displaying a first type of avatar in an avatar navigation user interface; and in accordance with a determination that the gesture is along a second direction opposite the first direction, displaying a second type of avatar in the avatar navigation user interface that is different from the first type.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus and one or more input devices. The one or more programs include programs for: displaying, via a display device, an avatar navigation user interface, the avatar navigation user interface including an avatar; while displaying the avatar navigation user interface, detecting, via one or more input devices, a gesture for the avatar navigation user interface; and in response to detecting the gesture: in accordance with a determination that the gesture is along the first direction, displaying a first type of avatar in an avatar navigation user interface; and in accordance with a determination that the gesture is along a second direction opposite the first direction, displaying a second type of avatar in the avatar navigation user interface that is different from the first type.
An electronic device is described. The electronic device includes: a display device; one or more input devices; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via a display device, an avatar navigation user interface, the avatar navigation user interface including an avatar; while displaying the avatar navigation user interface, detecting, via one or more input devices, a gesture for the avatar navigation user interface; and in response to detecting the gesture: in accordance with a determination that the gesture is along the first direction, displaying a first type of avatar in an avatar navigation user interface; and in accordance with a determination that the gesture is along a second direction opposite the first direction, displaying a second type of avatar in the avatar navigation user interface that is different from the first type.
An electronic device is described. The electronic device includes: a display device; one or more input devices; means for displaying, via a display device, an avatar navigation user interface, the avatar navigation user interface including an avatar; means for detecting, via one or more input devices, a gesture with respect to an avatar navigation user interface while the avatar navigation user interface is displayed; and means for, in response to detecting the gesture: in accordance with a determination that the gesture is along the first direction, displaying a first type of avatar in an avatar navigation user interface; and in accordance with a determination that the gesture is along a second direction opposite the first direction, displaying a second type of avatar in the avatar navigation user interface that is different from the first type.
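By way of a purely illustrative, non-limiting sketch of the behavior summarized above, the following Swift snippet models how the direction of a detected gesture could determine which type of avatar the navigation interface displays. The type and member names (AvatarType, SwipeDirection, AvatarNavigationUI, handleSwipe) and the mapping of directions to avatar types are hypothetical and are not taken from the disclosure.
    // Hypothetical sketch: choosing which avatar type to show based on swipe direction.
    enum AvatarType {
        case emojiStyle      // a first type of avatar (e.g., predefined characters)
        case userCreated     // a second, different type (e.g., customizable avatars)
    }
    enum SwipeDirection {
        case first           // e.g., a swipe in a first direction
        case second          // the opposite direction
    }
    struct AvatarNavigationUI {
        private(set) var displayedType: AvatarType = .emojiStyle
        // In response to detecting the gesture, display one avatar type or the
        // other depending on the direction of the gesture.
        mutating func handleSwipe(_ direction: SwipeDirection) {
            switch direction {
            case .first:
                displayedType = .emojiStyle
            case .second:
                displayedType = .userCreated
            }
        }
    }
    var ui = AvatarNavigationUI()
    ui.handleSwipe(.second)
    print(ui.displayedType)   // userCreated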
A method is described. The method is performed at an electronic device having a display device. The method comprises the following steps: displaying, via a display device, an avatar editing user interface, including concurrently displaying: an avatar having a plurality of avatar characteristics; a first option selection area for the respective avatar feature, including a first set of feature options corresponding to a set of candidate values for a first characteristic of the respective avatar feature; and a second option selection area for the respective avatar feature, including a second set of feature options corresponding to a set of candidate values for a second characteristic of the respective avatar feature, wherein the second characteristic of the respective avatar feature is different from the first characteristic of the respective avatar feature; and in response to detecting selection of one of the feature options in the first set of feature options, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance.
A non-transitory computer-readable storage medium is described. A non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including concurrently displaying: an avatar having a plurality of avatar characteristics; a first option selection area for the respective avatar feature, including a first set of feature options corresponding to a set of candidate values for a first characteristic of the respective avatar feature; and a second option selection area for the respective avatar feature, including a second set of feature options corresponding to a set of candidate values for a second characteristic of the respective avatar feature, wherein the second characteristic of the respective avatar feature is different from the first characteristic of the respective avatar feature; and in response to detecting selection of one of the feature options in the first set of feature options, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance.
A transitory computer-readable storage medium is described. A transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including concurrently displaying: an avatar having a plurality of avatar characteristics; a first option selection area for the respective avatar feature, including a first set of feature options corresponding to a set of candidate values for a first characteristic of the respective avatar feature; and a second option selection area for the respective avatar feature, including a second set of feature options corresponding to a set of candidate values for a second characteristic of the respective avatar feature, wherein the second characteristic of the respective avatar feature is different from the first characteristic of the respective avatar feature; and in response to detecting selection of one of the feature options in the first set of feature options, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance.
An electronic device is described. The electronic device includes: a display device; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including concurrently displaying: an avatar having a plurality of avatar characteristics; a first option selection area for the respective avatar feature, including a first set of feature options corresponding to a set of candidate values for a first characteristic of the respective avatar feature; and a second option selection area for the respective avatar feature, including a second set of feature options corresponding to a set of candidate values for a second characteristic of the respective avatar feature, wherein the second characteristic of the respective avatar feature is different from the first characteristic of the respective avatar feature; and in response to detecting selection of one of the feature options in the first set of feature options, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance.
An electronic device is described. The electronic device includes: a display device; apparatus for displaying an avatar editing user interface via a display device, comprising concurrently displaying: an avatar having a plurality of avatar characteristics; a first option selection area for the respective avatar feature, including a first set of feature options corresponding to a set of candidate values for a first characteristic of the respective avatar feature; and a second option selection area for the respective avatar feature, including a second set of feature options corresponding to a set of candidate values for a second characteristic of the respective avatar feature, wherein the second characteristic of the respective avatar feature is different from the first characteristic of the respective avatar feature; and means for changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance in response to detecting selection of one of the first set of feature options.
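The option dependency described above, in which selecting a feature option in a first selection area changes how at least one option in a second area is drawn, can be sketched purely for illustration as follows. The names FeatureOption and AvatarEditor, and the use of a color string to represent an option's rendered appearance, are assumptions made for this example.
    // Hypothetical sketch: selecting an option in a first selection area
    // (e.g., a hair-color swatch) updates how options in a second area
    // (e.g., hairstyle thumbnails) are rendered.
    struct FeatureOption {
        let name: String
        var appearanceColor: String   // the color used to render this option's thumbnail
    }
    struct AvatarEditor {
        var firstOptions: [FeatureOption]    // first characteristic, e.g., hair colors
        var secondOptions: [FeatureOption]   // second characteristic, e.g., hairstyle previews
        // When an option from the first set is selected, re-render the options
        // in the second set with an appearance that reflects the selection.
        mutating func select(firstOptionAt index: Int) {
            let chosen = firstOptions[index]
            for i in secondOptions.indices {
                secondOptions[i].appearanceColor = chosen.appearanceColor
            }
        }
    }
    var editor = AvatarEditor(
        firstOptions: [FeatureOption(name: "Brown", appearanceColor: "brown"),
                       FeatureOption(name: "Red", appearanceColor: "red")],
        secondOptions: [FeatureOption(name: "Short", appearanceColor: "brown"),
                        FeatureOption(name: "Long", appearanceColor: "brown")]
    )
    editor.select(firstOptionAt: 1)
    print(editor.secondOptions.map(\.appearanceColor))  // ["red", "red"]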
A method is described. The method is performed at an electronic device having a display device. The method comprises the following steps: displaying, via a display device: a user interface object comprising a respective feature having a first set of one or more colors; and a plurality of color options for the respective feature; detecting selection of a color option of the plurality of color options corresponding to a second color; in response to detecting the selection: changing the color of the respective feature to the second color; and displaying a first color adjustment control for the color option, the color option corresponding to a second set of one or more colors; detecting an input corresponding to the first color adjustment control when the respective feature of the user interface object has the second set of one or more colors; and in response to detecting the input corresponding to the first color adjustment control, modifying the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors based on the second color.
A non-transitory computer-readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device: a user interface object comprising a respective feature having a first set of one or more colors; and a plurality of color options for the respective feature; detecting selection of a color option of the plurality of color options corresponding to a second color; in response to detecting the selection: changing the color of the respective feature to the second color; and displaying a first color adjustment control for the color option, the color option corresponding to a second set of one or more colors; detecting an input corresponding to the first color adjustment control when the respective feature of the user interface object has the second set of one or more colors; and in response to detecting the input corresponding to the first color adjustment control, modifying the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors based on the second color.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device: a user interface object comprising a respective feature having a first set of one or more colors; and a plurality of color options for the respective feature; detecting selection of a color option of the plurality of color options corresponding to a second color; in response to detecting the selection: changing the color of the respective feature to the second color; and displaying a first color adjustment control for the color option, the color option corresponding to a second set of one or more colors; detecting an input corresponding to the first color adjustment control when the respective feature of the user interface object has the second set of one or more colors; and in response to detecting the input corresponding to the first color adjustment control, modifying the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors based on the second color.
An electronic device is described. The electronic device includes: a display device; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via a display device: a user interface object comprising a respective feature having a first set of one or more colors; and a plurality of color options for the respective feature; detecting selection of a color option of the plurality of color options corresponding to a second color; in response to detecting the selection: changing the color of the respective feature to the second color; and displaying a first color adjustment control for the color option, the color option corresponding to a second set of one or more colors; detecting an input corresponding to the first color adjustment control when the respective feature of the user interface object has the second set of one or more colors; and in response to detecting the input corresponding to the first color adjustment control, modifying the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors based on the second color.
An electronic device is described. The electronic device includes: a display device; means for displaying, via a display device: a user interface object comprising a respective feature having a first set of one or more colors; and a plurality of color options for the respective feature; means for detecting selection of a color option of the plurality of color options corresponding to a second color; in response to detecting the selection: means for changing the color of the respective feature to the second color; and means for displaying a first color adjustment control for the color option, the color option corresponding to a second set of one or more colors; means for detecting an input corresponding to the first color adjustment control when the respective feature of the user interface object has the second set of one or more colors; and, in response to detecting the input corresponding to the first color adjustment control, means for modifying the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors based on the second color.
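Purely as an illustrative sketch of the color-selection flow described above, the following snippet shows a base color being set by a selected color option and then modified, rather than replaced, by a color-adjustment control. The RGB model, the 0.5x-1.5x scaling rule, and the names ColorPickerModel, select, and adjust are assumptions for this example only.
    // Hypothetical sketch: a color option sets a base color for a feature, and a
    // color-adjustment slider then produces a modified version of that base color.
    struct RGB {
        var r: Double, g: Double, b: Double
    }
    struct ColorPickerModel {
        var featureColor: RGB
        var baseColor: RGB
        // Selecting a color option replaces the feature color and becomes the
        // base color that the adjustment control modifies.
        mutating func select(option: RGB) {
            baseColor = option
            featureColor = option
        }
        // Moving the slider (0.0 ... 1.0) darkens or lightens the base color,
        // yielding a modified version of the selected color rather than a new hue.
        mutating func adjust(slider value: Double) {
            let scale = 0.5 + value            // 0.5x (darker) to 1.5x (lighter)
            featureColor = RGB(r: min(baseColor.r * scale, 1.0),
                               g: min(baseColor.g * scale, 1.0),
                               b: min(baseColor.b * scale, 1.0))
        }
    }
    var picker = ColorPickerModel(featureColor: RGB(r: 0.2, g: 0.2, b: 0.2),
                                  baseColor: RGB(r: 0.2, g: 0.2, b: 0.2))
    picker.select(option: RGB(r: 0.8, g: 0.3, b: 0.1))   // pick a second color
    picker.adjust(slider: 0.25)                           // darken it slightly
    print(picker.featureColor)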
A method is described. The method is performed at an electronic device having a display device. The method comprises the following steps: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, said avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, said set of one or more colors being based on and different from said first set of one or more colors; and a plurality of color options corresponding to the first avatar characteristic; detecting selection of a respective color option of a plurality of color options; and in response to detecting selection of a respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to the second set of one or more colors, updating the avatar appearance, including: changing the first avatar characteristic to a second set of one or more colors; and changing the second avatar characteristic to a set of one or more colors that is based on and different from the second set of one or more colors.
A non-transitory computer-readable storage medium is described. A non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, said avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, said set of one or more colors being based on and different from said first set of one or more colors; and a plurality of color options corresponding to the first avatar characteristic; detecting selection of a respective color option of a plurality of color options; and in response to detecting selection of a respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to the second set of one or more colors, updating the avatar appearance, including: changing the first avatar characteristic to a second set of one or more colors; and changing the second avatar characteristic to a set of one or more colors that is based on and different from the second set of one or more colors.
A transitory computer-readable storage medium is described. A transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, said avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, said set of one or more colors being based on and different from said first set of one or more colors; and a plurality of color options corresponding to the first avatar characteristic; detecting selection of a respective color option of a plurality of color options; and in response to detecting selection of a respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to the second set of one or more colors, updating the avatar appearance, including: changing the first avatar characteristic to a second set of one or more colors; and changing the second avatar characteristic to a set of one or more colors that is based on and different from the second set of one or more colors.
An electronic device is described. The electronic device includes: a display device; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, said avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, said set of one or more colors being based on and different from said first set of one or more colors; and a plurality of color options corresponding to the first avatar characteristic; detecting selection of a respective color option of a plurality of color options; and in response to detecting selection of a respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to the second set of one or more colors, updating the avatar appearance, including: changing the first avatar characteristic to a second set of one or more colors; and changing the second avatar characteristic to a set of one or more colors that is based on and different from the second set of one or more colors.
An electronic device is described. The electronic device includes: a display device; and means for displaying an avatar editing user interface via the display means, including displaying: an avatar having a plurality of avatar characteristics, said avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, said set of one or more colors being based on and different from said first set of one or more colors; and a plurality of color options corresponding to the first avatar characteristic; means for detecting selection of a respective color option of a plurality of color options; and in response to detecting selection of a respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to the second set of one or more colors, means for updating the appearance of the avatar, comprising: means for changing the first avatar characteristic to a second set of one or more colors; and means for changing the second avatar characteristic to a set of one or more colors, the set of one or more colors being based on and different from the second set of one or more colors.
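The color propagation described above, in which a color chosen for one avatar feature also drives a related feature to a color that is based on, but different from, the chosen color, can be sketched as follows. Deriving the dependent color by darkening, and the names Color, AvatarColors, and selectHairColor, are illustrative assumptions rather than details from the disclosure.
    // Hypothetical sketch: choosing a hair color also updates a related feature
    // (e.g., eyebrows) to a color derived from, but not identical to, the choice.
    struct Color {
        var r: Double, g: Double, b: Double
        // A derived color: the same hue, darkened, so it is "based on and
        // different from" the source color.
        func darkened(by factor: Double = 0.7) -> Color {
            Color(r: r * factor, g: g * factor, b: b * factor)
        }
    }
    struct AvatarColors {
        var hair: Color        // first avatar feature
        var eyebrows: Color    // second avatar feature, derived from hair
        mutating func selectHairColor(_ option: Color) {
            hair = option
            eyebrows = option.darkened()   // keep the dependent feature in sync
        }
    }
    var colors = AvatarColors(hair: Color(r: 0.3, g: 0.2, b: 0.1),
                              eyebrows: Color(r: 0.21, g: 0.14, b: 0.07))
    colors.selectHairColor(Color(r: 0.9, g: 0.1, b: 0.1))
    print(colors.eyebrows)   // a darker shade of the selected red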
A method is described. The method is performed at an electronic device having a display device. The method comprises the following steps: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, the avatar characteristics including avatar hair having a selected avatar hair style; and a plurality of avatar accessory options; detecting selection of a respective accessory option; and in response to detecting selection of a respective one of the plurality of avatar accessory options, changing an appearance of the avatar to include a representation of the respective accessory option, including, in accordance with a determination that the respective accessory option is a first accessory option: displaying a representation of the first accessory option located on the avatar; and modifying the geometry of a first portion of the avatar hair based on the location of the representation of the first accessory option on the avatar while maintaining the selected avatar hair style.
A non-transitory computer-readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, the avatar characteristics including avatar hair having a selected avatar hair style; and a plurality of avatar accessory options; detecting selection of a respective accessory option; and in response to detecting selection of a respective one of the plurality of avatar accessory options, changing an appearance of the avatar to include a representation of the respective accessory option, including, in accordance with a determination that the respective accessory option is a first accessory option: displaying a representation of the first accessory option located on the avatar; and modifying the geometry of a first portion of the avatar hair based on the location of the representation of the first accessory option on the avatar while maintaining the selected avatar hair style.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, the avatar characteristics including avatar hair having a selected avatar hair style; and a plurality of avatar accessory options; detecting selection of a respective accessory option; and in response to detecting selection of a respective one of the plurality of avatar accessory options, changing an appearance of the avatar to include a representation of the respective accessory option, including, in accordance with a determination that the respective accessory option is a first accessory option: displaying a representation of the first accessory option located on the avatar; and modifying the geometry of a first portion of the avatar hair based on the location of the representation of the first accessory option on the avatar while maintaining the selected avatar hair style.
An electronic device is described. The electronic device includes: a display device; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via a display device, an avatar editing user interface, including displaying: an avatar having a plurality of avatar characteristics, the avatar characteristics including avatar hair having a selected avatar hair style; and a plurality of avatar accessory options; detecting selection of a respective accessory option; and in response to detecting selection of a respective one of the plurality of avatar accessory options, changing an appearance of the avatar to include a representation of the respective accessory option, including, in accordance with a determination that the respective accessory option is a first accessory option: displaying a representation of the first accessory option located on the avatar; and modifying the geometry of a first portion of the avatar hair based on the location of the representation of the first accessory option on the avatar while maintaining the selected avatar hair style.
An electronic device is described. The electronic device includes: a display device; and means for displaying an avatar editing user interface via the display means, including displaying: an avatar having a plurality of avatar characteristics, the avatar characteristics including avatar hair having a selected avatar hair style; and a plurality of head portrait accessory options; means for detecting selection of a respective accessory option; and in response to detecting selection of a respective one of the plurality of avatar accessory options, means for changing an appearance of the avatar to include a representation of the respective accessory option, including in accordance with a determination that the respective accessory option is the first accessory option: means for displaying a representation of a first accessory option located on the avatar; and means for modifying the geometry of the first portion of the avatar hair while maintaining the selected avatar hair style based on the location of the representation of the first accessory option on the avatar.
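As a purely illustrative sketch, the accessory-selection behavior described above might look like the following in Swift. The accessory names, the crownVolume property, and the 0.6 scale factor are hypothetical assumptions, not details of the described embodiments.

// Hypothetical types sketching the accessory behavior described above.
enum AvatarAccessory { case none, headband, hat, glasses }

struct HairGeometry {
    var style: String          // the selected avatar hair style, e.g. "long-wavy"
    var crownVolume: Double    // geometry of the hair region under a head-worn accessory
}

struct AccessorizedAvatar {
    var hair: HairGeometry
    var accessory: AvatarAccessory
}

// Selecting a head-worn accessory shows it on the avatar and compresses the
// portion of the hair it sits on, without changing the selected hair style.
func select(_ accessory: AvatarAccessory, on avatar: inout AccessorizedAvatar) {
    avatar.accessory = accessory
    switch accessory {
    case .headband, .hat:
        avatar.hair.crownVolume *= 0.6   // flatten only the hair under the accessory
    case .glasses, .none:
        break                            // no hair region is affected
    }
    // avatar.hair.style is intentionally left untouched.
}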
A method is described. The method is performed at an electronic device having one or more cameras and a display device. The method comprises: displaying, via the display device, a virtual avatar having a plurality of avatar characteristics, the virtual avatar changing appearance in response to detecting a change in facial pose in a field of view of the one or more cameras; detecting movement of a first facial feature while a face is detected in the field of view of the one or more cameras, the face including a plurality of detected facial features, other than the user's tongue, that include the first facial feature; and in response to detecting movement of the first facial feature: in accordance with a determination that the user's tongue meets respective criteria, displaying the avatar tongue and modifying a position of the avatar tongue based on the movement of the first facial feature, wherein the respective criteria include a requirement that the user's tongue be visible in order for the respective criteria to be met; and in accordance with a determination that the user's tongue does not meet the respective criteria, forgoing displaying the avatar tongue.
A non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a virtual avatar having a plurality of avatar characteristics, the virtual avatar changing appearance in response to detecting a change in facial pose in a field of view of the one or more cameras; detecting movement of a first facial feature while a face is detected in the field of view of the one or more cameras, the face including a plurality of detected facial features, other than the user's tongue, that include the first facial feature; and in response to detecting movement of the first facial feature: in accordance with a determination that the user's tongue meets respective criteria, displaying the avatar tongue and modifying a position of the avatar tongue based on the movement of the first facial feature, wherein the respective criteria include a requirement that the user's tongue be visible in order for the respective criteria to be met; and in accordance with a determination that the user's tongue does not meet the respective criteria, forgoing displaying the avatar tongue.
A transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display device and one or more cameras, the one or more programs including instructions for: displaying, via the display device, a virtual avatar having a plurality of avatar characteristics, the virtual avatar changing appearance in response to detecting a change in facial pose in a field of view of the one or more cameras; detecting movement of a first facial feature while a face is detected in the field of view of the one or more cameras, the face including a plurality of detected facial features, other than the user's tongue, that include the first facial feature; and in response to detecting movement of the first facial feature: in accordance with a determination that the user's tongue meets respective criteria, displaying the avatar tongue and modifying a position of the avatar tongue based on the movement of the first facial feature, wherein the respective criteria include a requirement that the user's tongue be visible in order for the respective criteria to be met; and in accordance with a determination that the user's tongue does not meet the respective criteria, forgoing displaying the avatar tongue.
An electronic device is described. The electronic device includes: a display device; one or more cameras; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: displaying, via the display device, a virtual avatar having a plurality of avatar characteristics, the virtual avatar changing appearance in response to detecting a change in facial pose in a field of view of the one or more cameras; detecting movement of a first facial feature while a face is detected in the field of view of the one or more cameras, the face including a plurality of detected facial features, other than the user's tongue, that include the first facial feature; and in response to detecting movement of the first facial feature: in accordance with a determination that the user's tongue meets respective criteria, displaying the avatar tongue and modifying a position of the avatar tongue based on the movement of the first facial feature, wherein the respective criteria include a requirement that the user's tongue be visible in order for the respective criteria to be met; and in accordance with a determination that the user's tongue does not meet the respective criteria, forgoing displaying the avatar tongue.
An electronic device is described. The electronic device includes: a display device; one or more cameras; means for displaying, via the display device, a virtual avatar having a plurality of avatar characteristics, the virtual avatar changing appearance in response to detecting a change in facial pose in a field of view of the one or more cameras; means for detecting movement of a first facial feature while a face is detected in the field of view of the one or more cameras, the face including a plurality of detected facial features, other than the user's tongue, that include the first facial feature; and means for, in response to detecting movement of the first facial feature: in accordance with a determination that the user's tongue meets respective criteria, displaying the avatar tongue and modifying a position of the avatar tongue based on the movement of the first facial feature, wherein the respective criteria include a requirement that the user's tongue be visible in order for the respective criteria to be met; and in accordance with a determination that the user's tongue does not meet the respective criteria, forgoing displaying the avatar tongue.
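A minimal Swift sketch of the tongue-display criteria described above is shown below. The FaceTrackingFrame fields and the use of jaw openness as the "first facial feature" are illustrative assumptions only.

// Hypothetical face-tracking snapshot; field names are illustrative only.
struct FaceTrackingFrame {
    var isTongueVisible: Bool     // the visibility requirement of the respective criteria
    var jawOpenAmount: Double     // movement of a first facial feature, in the range 0...1
}

struct TongueState {
    var isDisplayed: Bool
    var verticalOffset: Double
}

// Display the avatar tongue only while the user's tongue is visible; while it
// is displayed, drive its position from the movement of another facial
// feature (here the jaw). Otherwise, forgo displaying it.
func updateTongueState(for frame: FaceTrackingFrame) -> TongueState {
    guard frame.isTongueVisible else {
        return TongueState(isDisplayed: false, verticalOffset: 0)
    }
    return TongueState(isDisplayed: true,
                       verticalOffset: frame.jawOpenAmount * 20.0)
}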
Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Accordingly, devices are provided with faster, more efficient methods and interfaces for creating and editing avatars, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace other methods for creating and editing an avatar.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, wherein like reference numerals designate corresponding parts throughout the figures.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments.
FIG. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Figure 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device according to some embodiments.
FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with some embodiments.
Fig. 5A illustrates a personal electronic device, according to some embodiments.
Fig. 5B is a block diagram illustrating a personal electronic device, according to some embodiments.
Fig. 6A-6AN illustrate exemplary user interfaces for navigating between avatars in an application.
FIG. 7 is a flow chart illustrating a method for navigating between avatars in an application.
Fig. 8A to 8CF show exemplary user interfaces for displaying an avatar editing user interface.
FIG. 9 is a flow diagram illustrating a method for displaying an avatar editing user interface.
Fig. 10A and 10B are flowcharts illustrating a method for displaying an avatar editing user interface.
Fig. 11A and 11B are flowcharts illustrating a method for displaying an avatar editing user interface.
Fig. 12A and 12B are flowcharts illustrating a method for displaying an avatar editing user interface.
Fig. 13A-13O illustrate exemplary user interfaces for modifying an avatar in an avatar navigation user interface.
Fig. 14A and 14B are flow diagrams illustrating a method for modifying an avatar in an avatar navigation user interface.
Detailed Description
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure, but is instead provided as a description of exemplary embodiments.
Electronic devices need to provide efficient methods and interfaces for creating and editing avatars. For example, while programs for creating and editing avatars already exist, they are cumbersome and inefficient compared to techniques that allow users to create and edit realistic and virtual avatars as needed. Such techniques can reduce the cognitive burden on a user who creates and edits avatars, thereby enhancing productivity. Moreover, such techniques can reduce processor power and battery power that would otherwise be wasted on redundant user inputs.
Fig. 1A-1B, 2, 3, 4A-4B, and 5A-5B provide a description of exemplary devices for performing techniques for creating and editing an avatar.
Fig. 6A-6AN illustrate exemplary user interfaces for navigating between avatars in an application, according to some embodiments. FIG. 7 is a flow diagram illustrating a method of navigating between avatars in an application, according to some embodiments. The user interfaces in Figs. 6A-6AN are used to illustrate the processes described below, including the process in FIG. 7.
Fig. 8A to 8CF show exemplary user interfaces for displaying an avatar editing user interface. Fig. 9, 10A, 10B, 11A, 11B, 12A, and 12B are flow diagrams illustrating methods for displaying an avatar editing user interface, according to some embodiments. The user interfaces in fig. 8A to 8CF are used to illustrate the processes described below, including the processes in fig. 9, 10A, 10B, 11A, 11B, 12A, and 12B.
Fig. 13A-13O illustrate exemplary user interfaces for modifying an avatar in an avatar navigation user interface. Fig. 14A and 14B are flow diagrams illustrating methods for modifying an avatar in an avatar navigation user interface, according to some embodiments. The user interfaces in Figs. 13A-13O are used to illustrate the processes described below, including the processes in Figs. 14A and 14B.
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch may be named a second touch and similarly a second touch may be named a first touch without departing from the scope of various described embodiments. The first touch and the second touch are both touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" is optionally interpreted to mean "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining," "in response to determining," "upon detecting [the stated condition or event]," or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and related processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices are optionally used, such as laptop or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads). It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: a mapping application, a rendering application, a word processing application, a website creation application, a disc editing application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or varied for different applications and/or within respective applications. In this way, a common physical architecture of the device (such as a touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and clear to the user.
Attention is now directed to embodiments of portable devices having touch sensitive displays. FIG. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes referred to as a "touch screen" for convenience, and is sometimes referred to or called a touch-sensitive display system. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, an input/output (I/O) subsystem 106, other input control devices 116, and an external port 124. The device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on device 100 (e.g., a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touch panel 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine the estimated contact force. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereof, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). In some implementations, the surrogate measurement of contact force or pressure is converted into an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as a property of the user input, allowing the user to access additional device functionality that is otherwise inaccessible to the user on smaller-sized devices with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls, such as knobs or buttons).
As used in this specification and claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that is to be detected by a user with the user's sense of touch. For example, where a device or component of a device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other portion of a user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of the touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuation button. In some cases, the user will feel a tactile sensation, such as a "press click" or "release click," even when the physical actuation button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement is not moving. As another example, even when there is no change in the smoothness of the touch sensitive surface, the movement of the touch sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch sensitive surface. While such interpretation of touch by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touch are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click down," "click up," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the sensory perception of a typical (or ordinary) user.
It should be understood that device 100 is merely one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
The memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripheral interface 118 may be used to couple the input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and to process data. In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also called electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the internet, also known as the World Wide Web (WWW), intranets, and/or wireless networks, such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), as well as with other devices, via wireless communication. RF circuitry 108 optionally includes well-known circuitry for detecting Near Field Communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a number of communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for email (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to the speaker 111. The speaker 111 converts the electrical signal into sound waves audible to the human ear. The audio circuit 110 also receives electrical signals converted by the microphone 113 from sound waves. The audio circuit 110 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral such as an output-only headphone or a headset having both an output (e.g., a monaural headphone or a binaural headphone) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as the touch screen 112 and other input control devices 116, to a peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/transmit electrical signals from/to other input control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, one or more input controllers 160 are optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of the speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
A quick press of the push button optionally unlocks touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. Patent Application 11/322,549, "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005 (now U.S. Patent No. 7,657,849), which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons is optionally customizable by the user. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
Touch-sensitive display 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives and/or transmits electrical signals to and/or from touch screen 112. Touch screen 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output optionally corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on tactile sensation and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In one exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California.
The touch sensitive display in some embodiments of touch screen 112 is optionally similar to a multi-touch sensitive trackpad described in the following U.S. patents: 6,323,846(Westerman et al), 6,570,557(Westerman et al) and/or 6,677,932(Westerman et al) and/or U.S. patent publication 2002/0015024a1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, while touch sensitive trackpads do not provide visual output.
In some embodiments, the touch-sensitive display of touch screen 112 is as described in the following patent applications: (1) U.S. Patent Application No. 11/381,313, "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. Patent Application No. 10/840,862, "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. Patent Application No. 10/903,964, "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. Patent Application No. 11/048,264, "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. Patent Application No. 11/038,590, "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. Patent Application No. 11/228,758, "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. Patent Application No. 11/228,700, "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. Patent Application No. 11/228,737, "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. Patent Application No. 11/367,749, "Multi-Functional Hand-Held Device," filed March 3, 2006. All of these applications are incorporated herein by reference in their entirety.
The touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of about 160 dpi. The user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, finger, or the like. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the larger contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, in addition to a touch screen, device 100 optionally includes a trackpad for activating or deactivating particular functions. In some embodiments, the trackpad is a touch-sensitive area of the device that, unlike a touchscreen, does not display visual output. The touchpad is optionally a touch-sensitive surface separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in a portable device.
The device 100 optionally further includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 optionally includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 164 receives light from the environment projected through one or more lenses and converts the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that images of the user are optionally acquired for the video conference while the user views other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 164 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that a single optical sensor 164 is used with a touch screen display for both video conferencing and still image and/or video image capture.
The device 100 optionally also includes one or more depth camera sensors 175. FIG. 1A shows a depth camera sensor coupled to a depth camera controller 169 in I/O subsystem 106. The depth camera sensor 175 receives data from the environment to create a three-dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor). In some embodiments, in conjunction with imaging module 143 (also referred to as a camera module), depth camera sensor 175 is optionally used to determine a depth map of different portions of an image captured by imaging module 143. In some embodiments, the depth camera sensor is located in the front of the device 100, such that user images with depth information are optionally acquired for the video conference while the user views other video conference participants on the touch screen display, and a self-portrait with depth map data is captured. In some embodiments, the depth camera sensor 175 is located on the back of the device, or on both the back and front of the device 100. In some implementations, the position of the depth camera sensor 175 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that the depth camera sensor 175 is used with a touch screen display for both video conferencing and still image and/or video image capture.
In some implementations, a depth map (e.g., a depth map image) includes information (e.g., values) related to the distance of objects in a scene from a viewpoint (e.g., a camera, an optical sensor, a depth camera sensor). In one embodiment of the depth map, each depth pixel defines the location in the Z-axis of the viewpoint at which its corresponding two-dimensional pixel is located. In some implementations, the depth map is composed of pixels, where each pixel is defined by a value (e.g., 0 to 255). For example, a "0" value represents a pixel located farthest from a viewpoint (e.g., camera, optical sensor, depth camera sensor) in a "three-dimensional" scene, and a "255" value represents a pixel located closest to the viewpoint in the "three-dimensional" scene. In other embodiments, the depth map represents the distance between an object in the scene and the plane of the viewpoint. In some implementations, the depth map includes information about the relative depths of various features of the object of interest in the field of view of the depth camera (e.g., relative depths of eyes, nose, mouth, ears of the user's face). In some embodiments, the depth map comprises information enabling the apparatus to determine a contour of the object of interest in the z-direction.
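The 0-to-255 depth-map convention described above can be sketched as follows in Swift; the type and method names are illustrative assumptions only.

// Minimal sketch of the depth-map convention described above: an 8-bit value
// per pixel, where 0 is farthest from the viewpoint and 255 is closest.
struct DepthMap {
    let width: Int
    let height: Int
    var pixels: [UInt8]            // row-major, width * height values

    func depth(x: Int, y: Int) -> UInt8 {
        pixels[y * width + x]
    }

    // Approximate relative distance in [0, 1], where 0.0 is closest to the
    // viewpoint and 1.0 is farthest (inverting the 0...255 convention).
    func relativeDistance(x: Int, y: Int) -> Double {
        1.0 - Double(depth(x: x, y: y)) / 255.0
    }
}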
Device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some implementations, at least one contact intensity sensor is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100 opposite touch screen display 112, which is located on the front of device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is optionally coupled to the input controller 160 in the I/O subsystem 106. The proximity sensor 166 optionally performs as described in the following U.S. patent applications: No. 11/241,839, "Proximity Detector In Handheld Device"; No. 11/240,788, "Proximity Detector In Handheld Device"; No. 11/620,702, "Using Ambient Light Sensor To Augment Proximity Sensor Output"; No. 11/586,862, "Automated Response To And Sensing Of User Activity In Portable Devices"; and No. 11/638,251, "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables the touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a haptic output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. Tactile output generator 167 optionally includes one or more electro-acoustic devices, such as speakers or other audio components, and/or an electromechanical device for converting energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component for converting an electrical signal into a tactile output on the device). Tactile output generator 167 receives haptic feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that can be felt by a user of device 100. In some embodiments, at least one tactile output generator is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in the following U.S. patent publications: U.S. Patent Publication No. 20050190059, "Acceleration-based Theft Detection System For Portable Electronic Devices," and U.S. Patent Publication No. 20060017692, "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated herein by reference in their entirety. In some embodiments, information is displayed in a portrait view or a landscape view on the touch screen display based on an analysis of data received from the one or more accelerometers. In addition to accelerometer 168, device 100 optionally includes a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information about the location and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, and an application program (or set of instructions) 136. Further, in some embodiments, memory 102 (fig. 1A) or 370 (fig. 3) stores device/global internal state 157, as shown in fig. 1A, and fig. 3. Device/global internal state 157 includes one or more of: an active application state indicating which applications (if any) are currently active; display state indicating what applications, views, or other information occupy various areas of the touch screen display 112; sensor status, including information obtained from the various sensors of the device and the input control device 116; and location information regarding the location and/or pose of the device.
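As a purely illustrative sketch, device/global internal state 157 could be represented as a value type along the lines below; the field names are hypothetical and only mirror the categories of state listed above.

// Hypothetical mirror of the state categories listed above; names are illustrative only.
struct DeviceGlobalInternalState {
    enum Orientation { case portrait, landscape }

    var activeApplications: [String]              // which applications, if any, are currently active
    var displayRegions: [String: String]          // which application, view, or other information occupies each display area
    var sensorReadings: [String: Double]          // latest values from the device's sensors and input control devices
    var location: (latitude: Double, longitude: Double)?  // location information
    var orientation: Orientation                  // pose of the device (portrait or landscape)
}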
The operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124, and also includes various software components for processing data received by the RF circuitry 108 and/or the external ports 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or coupling indirectly over a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or a physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to contact detection, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining contact intensity (e.g., force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
In some embodiments, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 100). For example, the mouse "click" threshold of the trackpad or touchscreen can be set to any one of a wide range of predefined thresholds without changing the trackpad or touchscreen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds of a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
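A minimal Swift sketch of software-adjustable intensity thresholds, consistent with the description above, is shown below; the threshold names, the numeric defaults, and the single scale factor are illustrative assumptions only.

// Sketch of software-adjustable intensity thresholds (values are assumptions).
struct IntensityThresholds {
    var lightPress: Double = 5.0
    var deepPress: Double = 12.0

    // A single system-level "intensity" setting that adjusts all thresholds at once,
    // without any change to the physical hardware.
    mutating func applySystemIntensitySetting(scale: Double) {
        lightPress *= scale
        deepPress *= scale
    }
}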
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at the location of the icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event.
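By way of illustration only, the tap/swipe distinction described above might be sketched as follows in Swift; the event model and the 10-point movement threshold are assumptions made for illustration.

// Illustrative classification of a single-finger gesture from the event
// sequence described above; the distance threshold is an assumption.
struct TouchEvent {
    enum Kind { case fingerDown, fingerMove, fingerUp }
    var kind: Kind
    var x: Double
    var y: Double
}

enum RecognizedGesture { case tap, swipe, none }

func classify(_ events: [TouchEvent], moveThreshold: Double = 10.0) -> RecognizedGesture {
    guard let down = events.first(where: { $0.kind == .fingerDown }),
          let up = events.last(where: { $0.kind == .fingerUp }) else {
        return .none
    }
    let dx = up.x - down.x
    let dy = up.y - down.y
    let distance = (dx * dx + dy * dy).squareRoot()
    // Finger-down followed by finger-up at (substantially) the same location is a
    // tap; finger-down, drag events, then finger-up elsewhere is a swipe.
    return distance < moveThreshold ? .tap : .swipe
}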
Graphics module 132 includes various known software components for presenting and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual characteristics) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for specifying a graphic to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions for use by one or more haptic output generators 167 to produce haptic outputs at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications such as contacts 137, email 140, IM 141, browser 147, and any other application that requires text input.
The GPS module 135 determines the location of the device and provides this information for various applications (e.g., to the phone 138 for location-based dialing; to the camera 143 as picture/video metadata; and to applications that provide location-based services, such as weather desktop applets, local yellow pages desktop applets, and map/navigation desktop applets).
Application 136 optionally includes the following modules (or sets of instructions), or a subset or superset thereof:
a contacts module 137 (sometimes referred to as an address book or contact list);
a phone module 138;
a video conferencing module 139;
an email client module 140;
an Instant Messaging (IM) module 141;
fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a video player module;
a music player module;
a browser module 147;
a calendar module 148;
Desktop applet module 149, optionally including one or more of: a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm desktop applet 149-4, a dictionary desktop applet 149-5, and other desktop applets acquired by the user, and a user created desktop applet 149-6;
a desktop applet creator module 150 for forming a user-created desktop applet 149-6;
a search module 151;
a video and music player module 152 that incorporates a video player module and a music player module;
a notepad module 153;
a map module 154; and/or
Online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, rendering applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is optionally used to manage an address book or contact list (e.g., stored in memory 102 or in application internal state 192 of contacts module 137 in memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating a telephone number, email address, physical address, or other information with a name; associating an image with a name; categorizing and sorting names; providing a telephone number or email address to initiate and/or facilitate communication via telephone 138, video conferencing module 139, email 140, or instant message 141; and so on.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, phone module 138 is optionally used to enter a sequence of characters corresponding to a phone number, access one or more phone numbers in contacts module 137, modify an entered phone number, dial a corresponding phone number, conduct a conversation, and disconnect or hang up when the conversation is complete. As noted above, the wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephony module 138, video conference module 139 includes executable instructions to initiate, execute, and terminate video conferences between the user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions to create, send, receive, and manage emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send an email with a still image or a video image captured by the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for: inputting a character sequence corresponding to an instant message, modifying previously input characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for a phone-based instant message or using XMPP, SIMPLE, or IMPS for an internet-based instant message), receiving the instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or MMS and/or other attachments supported in an Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions to create a workout (e.g., having time, distance, and/or calorie burning goals); communicating with fitness sensors (sports equipment); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for fitness; and displaying, storing and transmitting fitness data.
In conjunction with touch screen 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or video (including video streams) and storing them in the memory 102, modifying features of the still images or video, or deleting the still images or video from the memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for: arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide show or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the internet (including searching for, linking to, receiving and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do, etc.) according to user instructions.
In conjunction with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact/motion module 130, the graphics module 132, the text input module 134, and the browser module 147, the desktop applet modules 149 are mini-applications that are optionally downloaded and used by a user (e.g., a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm clock desktop applet 149-4, and a dictionary desktop applet 149-5) or created by the user (e.g., a user-created desktop applet 149-6). In some embodiments, a desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, a desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet creator module 150 is optionally used by a user to create a desktop applet (e.g., to turn a user-specified portion of a web page into a desktop applet).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search memory 102 for text, music, sound, images, videos, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions to create and manage notes, to-do lists, and the like according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to stores and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions for: allowing a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the link to a particular online video is sent using instant messaging module 141 instead of email client module 140. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. Patent Application No. 11/968,067, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, the contents of both of which are hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. For example, the video player module is optionally combined with the music player module into a single module (e.g., the video and music player module 152 in fig. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a trackpad. By using a touch screen and/or trackpad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
The predefined set of functions performed exclusively through the touchscreen and/or trackpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by a user, navigates device 100 from any user interface displayed on device 100 to a main, home, or root menu. In such embodiments, a touchpad is used to implement a "menu button". In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touchpad.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments. In some embodiments, the memory 102 (FIG. 1A) or 370 (FIG. 3) includes the event sorter 170 (e.g., in the operating system 126) and the corresponding application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
Event sorter 170 receives the event information and determines the application 136-1 and the application view 191 of application 136-1 to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, application 136-1 includes an application internal state 192 that indicates a current application view that is displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) are currently active, and application internal state 192 is used by event sorter 170 to determine the application view 191 to which to deliver event information.
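As a rough illustration of this routing role, the sketch below (in Swift, with hypothetical type and function names not taken from this disclosure or from any real framework) queues sub-event information and delivers each item to whichever target a routing closure selects, loosely analogous to an event sorter choosing the application view that should receive the event information.

```swift
// Hypothetical, simplified types; nothing here corresponds to a real API.
struct Point { var x: Double; var y: Double }

struct SubEvent {
    enum Phase { case began, moved, ended, cancelled }
    var location: Point
    var phase: Phase
}

protocol EventTarget: AnyObject {
    func handle(_ subEvent: SubEvent)
}

// A minimal "sorter": it queues incoming sub-events and hands each one to the
// target chosen by the routing closure (e.g., the currently active view).
final class EventSorter {
    private var queue: [SubEvent] = []
    private let route: (SubEvent) -> EventTarget?

    init(route: @escaping (SubEvent) -> EventTarget?) {
        self.route = route
    }

    func enqueue(_ subEvent: SubEvent) {
        queue.append(subEvent)
    }

    func dispatchAll() {
        while !queue.isEmpty {
            let subEvent = queue.removeFirst()
            route(subEvent)?.handle(subEvent)   // deliver to the selected target
        }
    }
}
```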
In some embodiments, the application internal state 192 includes additional information, such as one or more of: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information being displayed by the application 136-1 or information that is ready for display by the application, a state queue for enabling a user to return to a previous state or view of the application 136-1, and a repeat/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112 as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or sensors such as proximity sensor 166, accelerometer 168, and/or microphone 113 (through audio circuitry 110). Information received by peripheral interface 118 from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, peripheral interface 118 transmits the event information. In other embodiments, peripheral interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
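The "significant event" gating described above can be pictured with a small, purely illustrative predicate; the threshold and duration values below are invented for the example and are not values stated in this document.

```swift
// Hypothetical filter: raw input is forwarded only if it rises above a noise
// threshold and/or lasts longer than a minimum duration (values are examples).
struct RawInput {
    var magnitude: Double   // e.g., normalized contact intensity
    var duration: Double    // seconds
}

func isSignificant(_ input: RawInput,
                   noiseThreshold: Double = 0.1,
                   minimumDuration: Double = 0.05) -> Bool {
    input.magnitude > noiseThreshold || input.duration > minimumDuration
}
```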
In some embodiments, event classifier 170 further includes hit view determination module 172 and/or active event recognizer determination module 173.
When touch-sensitive display 112 displays more than one view, hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a programmatic level within a programmatic or view hierarchy of applications. For example, the lowest level view in which a touch is detected is optionally referred to as a hit view, and the set of events identified as correct inputs is optionally determined based at least in part on the hit view of the initial touch that initiated the touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When the application has multiple views organized in a hierarchy, the hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once a hit view is identified by hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source identified as the hit view.
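A recursive search over a view tree conveys the idea of selecting the lowest-level view that contains the initiating sub-event. This is only a sketch with hypothetical types; it assumes all view frames are expressed in one shared coordinate space.

```swift
// Hypothetical view tree; frames are assumed to share one coordinate space.
final class View {
    var frame: (x: Double, y: Double, width: Double, height: Double)
    var subviews: [View] = []

    init(frame: (x: Double, y: Double, width: Double, height: Double)) {
        self.frame = frame
    }

    func contains(_ point: (x: Double, y: Double)) -> Bool {
        point.x >= frame.x && point.x < frame.x + frame.width &&
        point.y >= frame.y && point.y < frame.y + frame.height
    }
}

// Returns the deepest view containing the point -- the "hit view".
func hitView(in root: View, at point: (x: Double, y: Double)) -> View? {
    guard root.contains(point) else { return nil }
    for subview in root.subviews.reversed() {   // front-most subview wins
        if let hit = hitView(in: subview, at: point) {
            return hit
        }
    }
    return root
}
```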
The active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of the sub-event are actively participating views, and thus determines that all actively participating views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely confined to the area associated with a particular view, the higher views in the hierarchy will remain actively participating views.
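Under the alternative policy in which every view containing the physical location of the sub-event participates, the set of actively involved views might be collected as follows (reusing the hypothetical View type from the previous sketch):

```swift
// Collects every view whose frame contains the sub-event's location; under the
// policy described above, these are the actively involved views that should
// all receive the sub-event sequence. (Reuses the View type sketched earlier.)
func activelyInvolvedViews(in root: View,
                           at point: (x: Double, y: Double)) -> [View] {
    guard root.contains(point) else { return [] }
    return [root] + root.subviews.flatMap { activelyInvolvedViews(in: $0, at: point) }
}
```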
The event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments that include active event recognizer determination module 173, event dispatcher module 174 delivers event information to event recognizers determined by active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue, which is retrieved by the respective event receiver 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, application 136-1 includes event classifier 170. In another embodiment, the event classifier 170 is a stand-alone module or is part of another module stored in the memory 102 (such as the contact/motion module 130).
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, where each application view includes instructions for handling touch events occurring within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit or a higher level object from which the application 136-1 inherits methods and other properties. In some embodiments, the respective event handlers 190 include one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177 or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more corresponding event handlers 190. Additionally, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
The corresponding event recognizer 180 receives event information (e.g., event data 179) from the event classifier 170 and recognizes events from the event information. The event recognizer 180 includes an event receiver 182 and an event comparator 184. In some embodiments, event recognizer 180 also includes metadata 183 and at least a subset of event delivery instructions 188 (which optionally include sub-event delivery instructions).
The event receiver 182 receives event information from the event sorter 170. The event information includes information about a sub-event such as a touch or touch movement. According to the sub-event, the event information further includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information optionally also includes the velocity and direction of the sub-event. In some embodiments, the event comprises rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), such as event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in event 187 include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double tap on a displayed object. For example, the double tap includes a first touch (touch start) on the displayed object for a predetermined length of time, a first lift-off (touch end) for a predetermined length of time, a second touch (touch start) on the displayed object for a predetermined length of time, and a second lift-off (touch end) for a predetermined length of time. In another example, the definition of event 2 (187-2) is a drag on the displayed object. For example, the drag includes a touch (or contact) on the displayed object for a predetermined length of time, movement of the touch across the touch-sensitive display 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
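To make the double-tap and drag definitions concrete, the toy classifier below inspects a recorded sub-event sequence; the names, the 0.3-second tap gap, and the overall structure are illustrative assumptions rather than the event definitions 186 themselves.

```swift
// Hypothetical sub-event record and classifier.
enum TouchPhase { case began, moved, ended }

struct TouchSubEvent {
    var phase: TouchPhase
    var timestamp: Double   // seconds
}

enum RecognizedEvent { case doubleTap, drag, none }

func classify(_ sequence: [TouchSubEvent], maxTapGap: Double = 0.3) -> RecognizedEvent {
    let phases = sequence.map { $0.phase }

    // Drag: touch began, moved at least once on the surface, then lifted off.
    if phases.first == .began, phases.contains(.moved), phases.last == .ended {
        return .drag
    }

    // Double tap: two touch/lift-off pairs with no movement, the second touch
    // starting soon enough after the first lift-off.
    if phases == [.began, .ended, .began, .ended],
       sequence[2].timestamp - sequence[1].timestamp <= maxTapGap {
        return .doubleTap
    }

    return .none
}
```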
In some embodiments, event definition 187 includes definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a corresponding event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of a respective event 187 further includes a delayed action that delays the delivery of the event information until it has been determined whether the sequence of sub-events does or does not correspond to the event type of the event recognizer.
When the respective event recognizer 180 determines that the series of sub-events does not match any event in the event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which subsequent sub-events of the touch-based gesture are ignored. In this case, other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 having configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively participating event recognizers. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are recognized, the respective event recognizer 180 activates an event handler 190 associated with the event. In some embodiments, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating an event handler 190 is distinct from sending (and deferring the sending of) sub-events to the corresponding hit view. In some embodiments, the event recognizer 180 issues a flag associated with the recognized event, and the event handler 190 associated with the flag retrieves the flag and performs a predefined process.
In some embodiments, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the sequence of sub-events or to actively participating views. Event handlers associated with the sequence of sub-events or with actively participating views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, the data updater 176 updates a phone number used in the contacts module 137 or stores a video file used in the video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user interface object or updates the location of a user interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends the display information to graphics module 132 for display on the touch-sensitive display.
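A minimal sketch of this division of labor, with hypothetical names, might separate the three updater roles as follows:

```swift
// Hypothetical updaters mirroring the data / object / GUI split described above.
final class DataUpdater {
    private(set) var phoneNumbers: [String: String] = [:]   // contact name -> number
    func update(number: String, for name: String) {
        phoneNumbers[name] = number
    }
}

final class ObjectUpdater {
    private(set) var positions: [String: (x: Double, y: Double)] = [:]  // object id -> position
    func move(objectID: String, to position: (x: Double, y: Double)) {
        positions[objectID] = position
    }
}

final class GUIUpdater {
    func redraw() {
        // In a real system this would prepare display information for the graphics layer.
        print("redrawing user interface")
    }
}

// An event handler updates application state via the updaters, then the GUI.
final class ContactEditedHandler {
    let data: DataUpdater
    let gui: GUIUpdater

    init(data: DataUpdater, gui: GUIUpdater) {
        self.data = data
        self.gui = gui
    }

    func handle(name: String, newNumber: String) {
        data.update(number: newNumber, for: name)
        gui.redraw()
    }
}
```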
In some embodiments, one or more event handlers 190 include, or have access to, data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the above discussion of event processing with respect to user touches on a touch-sensitive display also applies to other forms of user input utilizing an input device to operate multifunction device 100, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally in conjunction with single or multiple keyboard presses or holds; contact movements on the touchpad, such as tapping, dragging, scrolling, etc.; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof, are optionally used as inputs corresponding to sub-events defining the event to be recognized.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within the User Interface (UI) 200. In this embodiment, as well as other embodiments described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figures) or with one or more styluses 203 (not drawn to scale in the figures). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 100. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that swipes over the application icon optionally does not select the corresponding application.
Device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In some embodiments, device 100 includes touch screen 112, menu buttons 204, push button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and docking/charging external port 124. Button 206 is optionally used to turn the device on/off by depressing the button and holding it in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing it before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In an alternative embodiment, device 100 also accepts voice input through microphone 113 for activating or deactivating certain functions. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on touch screen 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
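The press-and-hold versus press-and-release distinction for button 206 can be sketched as a simple duration check; the 2-second interval below is an assumed value for illustration only, not one given in this document.

```swift
// Hypothetical mapping from hold duration to button behavior.
enum ButtonAction { case togglePower, lockDevice }

func action(forPressHeld duration: Double,
            predefinedInterval: Double = 2.0) -> ButtonAction {
    duration >= predefinedInterval ? .togglePower : .lockDevice
}

// Example: action(forPressHeld: 0.4) == .lockDevice
```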
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop, desktop, tablet, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 optionally includes circuitry (sometimes called a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output (I/O) interface 330 with a display 340, typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 (e.g., similar to one or more tactile output generators 167 described above with reference to fig. 1A) for generating tactile outputs on device 300, sensors 359 (e.g., optical sensors, acceleration sensors, proximity sensors, touch-sensitive sensors, and/or contact intensity sensors (similar to one or more contact intensity sensors 165 described above with reference to fig. 1A)). Memory 370 includes high speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 370 optionally includes one or more storage devices located remotely from one or more CPUs 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to or a subset of the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (fig. 1A). Further, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above modules corresponds to a set of instructions for performing a function described above. The modules or programs (e.g., sets of instructions) described above need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces optionally implemented on, for example, portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of a menu of applications on portable multifunction device 100 according to some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
one or more signal strength indicators 402 for one or more wireless communications (such as cellular signals and Wi-Fi signals);
Time 404;
a Bluetooth indicator 405;
a battery status indicator 406;
tray 408 with icons of common applications, such as:
an icon 416 of the phone module 138 labeled "phone", optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of the email client module 140 labeled "mail", optionally including an indicator 410 of the number of unread emails;
icon 420 of browser module 147, labeled "browser"; and
an icon 422 labeled "iPod" of video and music player module 152 (also referred to as iPod (trademark of Apple inc.) module 152); and
icons for other applications, such as:
icon 424 of IM module 141 labeled "message";
icon 426 of calendar module 148 labeled "calendar";
icon 428 of image management module 144 labeled "photo";
icon 430 of camera module 143 labeled "camera";
icon 432 of online video module 155 labeled "online video";
an icon 434 of the stock market desktop applet 149-2 labeled "stock market";
icon 436 of map module 154 labeled "map";
icon 438 labeled "weather" for weather desktop applet 149-1;
icon 440 of alarm clock desktop applet 149-4 labeled "clock";
icon 442 labeled "fitness support" for fitness support module 142;
icon 444 of notepad module 153 labeled "notepad"; and
an icon 446 labeled "settings" for a settings application or module, which provides access to settings for device 100 and its various applications 136.
Note that the icon labels shown in fig. 4A are merely exemplary. For example, icon 422 of video and music player module 152 is labeled "music" or "music player". Other labels are optionally used for the various application icons. In some embodiments, the label of a respective application icon includes the name of the application corresponding to that application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 of fig. 3) separate from a display 450 (e.g., touchscreen display 112). Device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of sensors 359) to detect the intensity of contacts on touch-sensitive surface 451, and/or one or more tactile output generators 357 to generate tactile outputs for a user of device 300.
Although some of the examples below will be given with reference to input on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to a primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). Thus, when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separated from the display (450 in FIG. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462, and their movements) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
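The mapping from a contact location on a separate touch-sensitive surface to the corresponding display location can be sketched as a per-axis scaling; the types and names below are hypothetical.

```swift
// Hypothetical mapping of a contact location on a separate touch-sensitive
// surface to the corresponding location on the display, by scaling along each
// primary axis.
struct Size { var width: Double; var height: Double }
struct Location { var x: Double; var y: Double }

func displayLocation(for touch: Location, surface: Size, display: Size) -> Location {
    Location(x: touch.x / surface.width * display.width,
             y: touch.y / surface.height * display.height)
}

// Example: a contact at (200, 150) on a 400 x 300 surface corresponds to
// (960, 540) on a 1920 x 1080 display.
```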
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contact, single-finger tap gesture, finger swipe gesture), it should be understood that in some embodiments one or more of these finger inputs are replaced by inputs from another input device (e.g., mouse-based inputs or stylus inputs). For example, a swipe gesture is optionally replaced by a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click while the cursor is over the location of the tap gesture (e.g., instead of detecting a contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or mouse and finger contacts are optionally used simultaneously.
Fig. 5A illustrates an exemplary personal electronic device 500. The device 500 includes a body 502. In some embodiments, device 500 may include some or all of the features described with respect to devices 100 and 300 (e.g., fig. 1A-4B). In some embodiments, the device 500 has a touch-sensitive display screen 504, hereinafter referred to as a touch screen 504. Instead of or in addition to the touch screen 504, the device 500 has a display and a touch-sensitive surface. As with devices 100 and 300, in some embodiments, touch screen 504 (or the touch-sensitive surface) optionally includes one or more intensity sensors for detecting the intensity of an applied contact (e.g., touch). One or more intensity sensors of the touch screen 504 (or touch-sensitive surface) may provide output data representing the intensity of a touch. The user interface of device 500 may respond to touches based on their intensity, meaning that touches of different intensities may invoke different user interface operations on device 500.
Exemplary techniques for detecting and processing touch intensity are found, for example, in the following related patent applications: International Patent Application Serial No. PCT/US2013/040061, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," filed May 8, 2013, published as WIPO Patent Publication No. WO/2013/169849; and International Patent Application Serial No. PCT/US2013/069483, entitled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," filed November 11, 2013, published as WIPO Patent Publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 500 has one or more input mechanisms 506 and 508. The input mechanisms 506 and 508 (if included) may be in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow for attachment of the device 500 to, for example, a hat, glasses, earrings, necklace, shirt, jacket, bracelet, watchband, pants, belt, shoe, purse, backpack, and the like. These attachment mechanisms allow the user to wear the device 500.
Fig. 5B illustrates an exemplary personal electronic device 500. In some embodiments, the apparatus 500 may include some or all of the components described with reference to fig. 1A, 1B, and 3. The device 500 has a bus 512 that operatively couples an I/O portion 514 with one or more computer processors 516 and a memory 518. The I/O portion 514 may be connected to the display 504, which may have a touch sensitive member 522 and optionally an intensity sensor 524 (e.g., a contact intensity sensor). Further, I/O portion 514 may interface with communication unit 530 for receiving application programs and operating system data using Wi-Fi, Bluetooth, Near Field Communication (NFC), cellular, and/or other wireless communication techniques. Device 500 may include input mechanisms 506 and/or 508. For example, the input mechanism 506 is optionally a rotatable input device or a depressible input device and a rotatable input device. In some examples, the input mechanism 508 is optionally a button.
In some examples, the input mechanism 508 is optionally a microphone. Personal electronic device 500 optionally includes various sensors, such as a GPS sensor 532, an accelerometer 534, an orientation sensor 540 (e.g., a compass), a gyroscope 536, a motion sensor 538, and/or combinations thereof, all of which may be operatively connected to I/O portion 514.
The memory 518 of the personal electronic device 500 may include one or more non-transitory computer-readable storage media for storing computer-executable instructions that, when executed by the one or more computer processors 516, may, for example, cause the computer processors to perform the techniques described below, including processes 700, 900, 1000, 1100, 1200, 1400 (fig. 7, 9, 10A, 10B, 11A, 11B, 12A, 12B, 14A, and 14B). A computer readable storage medium may be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer readable storage medium may include, but is not limited to, magnetic storage devices, optical storage devices, and/or semiconductor storage devices. Examples of such storage devices include magnetic disks, optical disks based on CD, DVD, or blu-ray technology, and persistent solid state memory such as flash memory, solid state drives, and the like. The personal electronic device 500 is not limited to the components and configuration of fig. 5B, but may include other components or additional components in a variety of configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object that is optionally displayed on a display screen of device 100, 300, and/or 500 (fig. 1A, 3, and 5A-5B). For example, images (e.g., icons), buttons, and text (e.g., hyperlinks) optionally each constitute an affordance.
As used herein, the term "focus selector" refers to an input element that is used to indicate the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by moving the focus from one button to another using tab or arrow keys); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by the user to deliver the user's intended interaction with the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (as opposed to other user interface elements shown on the device display).
As used in the specification and in the claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting contact, before detecting contact liftoff, before or after detecting contact start movement, before or after detecting contact end, before or after detecting an increase in intensity of contact, and/or before or after detecting a decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensity of the contact, a mean value of the intensity of the contact, an average value of the intensity of the contact, a value at the top 10% of the intensity of the contact, a half-maximum value of the intensity of the contact, a 90% maximum value of the intensity of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact whose characteristic intensity does not exceed the first threshold results in a first operation, a contact whose characteristic intensity exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact whose characteristic intensity exceeds the second threshold results in a third operation. In some embodiments, a comparison between the feature strengths and one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform the respective operation or to forgo performing the respective operation) rather than to determine whether to perform the first operation or the second operation.
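As an illustrative sketch of one way such a characteristic intensity and threshold comparison could be computed (the choice of aggregation and the 0.3/0.7 threshold values are assumptions, not values from this document):

```swift
// Hypothetical characteristic-intensity computation and two-threshold test.
func characteristicIntensity(of samples: [Double], useMean: Bool = true) -> Double {
    guard !samples.isEmpty else { return 0 }
    if useMean {
        return samples.reduce(0, +) / Double(samples.count)   // mean of the samples
    } else {
        return samples.max() ?? 0                             // maximum sampled intensity
    }
}

enum ResultingOperation { case first, second, third }

func operation(forCharacteristicIntensity intensity: Double,
               firstThreshold: Double = 0.3,
               secondThreshold: Double = 0.7) -> ResultingOperation {
    if intensity > secondThreshold { return .third }
    if intensity > firstThreshold { return .second }
    return .first
}
```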
In some implementations, a portion of the gesture is recognized for determining the feature intensity. For example, the touch-sensitive surface optionally receives a continuous swipe contact that transitions from a starting location and reaches an ending location where the contact intensity increases. In this example, the characteristic intensity of the contact at the end location is optionally based on only a portion of the continuous swipe contact, rather than the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is optionally applied to the intensity of the swipe contact before determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: a non-weighted moving average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some cases, these smoothing algorithms eliminate narrow spikes or dips in the intensity of the swipe contact for the purpose of determining the feature intensity.
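A minimal unweighted moving-average smoother, one of the options mentioned above, might look like the following; the window size of 3 is an arbitrary illustrative choice.

```swift
// Unweighted moving average over sampled intensities.
func smoothedIntensities(_ samples: [Double], window: Int = 3) -> [Double] {
    guard window > 0, !samples.isEmpty else { return samples }
    return samples.indices.map { index in
        let start = max(0, index - window + 1)
        let slice = samples[start...index]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

// Example: smoothedIntensities([0.1, 0.9, 0.2, 0.8]) dampens the spikes to
// approximately [0.1, 0.5, 0.4, 0.633].
```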
Contact intensity on the touch-sensitive surface is optionally characterized relative to one or more intensity thresholds, such as a contact detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity that: at which intensity the device will perform the operations typically associated with clicking a button of a physical mouse or touchpad.
In some embodiments, the deep press intensity threshold corresponds to an intensity that: at which intensity the device will perform a different operation than that typically associated with clicking a button of a physical mouse or trackpad.
In some embodiments, when a contact is detected whose characteristic intensity is below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold, below which the contact is no longer detected), the device will move the focus selector in accordance with movement of the contact on the touch-sensitive surface without performing operations associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface drawings.
Increasing the contact characteristic intensity from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a "light press" input. Increasing the contact characteristic intensity from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a "deep press" input. Increasing the contact characteristic intensity from an intensity below the contact detection intensity threshold to an intensity between the contact detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting a contact on the touch surface. The decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold to an intensity below the contact detection intensity threshold is sometimes referred to as detecting lift-off of the contact from the touch surface. In some embodiments, the contact detection intensity threshold is zero. In some embodiments, the contact detection intensity threshold is greater than zero.
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting an increase in intensity of the respective contact above a press input intensity threshold (e.g., a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the press input threshold (e.g., an "up stroke" of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in contact intensity from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in contact intensity to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., depending on the circumstances, the increase in contact intensity or the decrease in contact intensity).
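A small, hypothetical hysteresis gate illustrates the idea: a press is registered when intensity reaches the press threshold and is released only after intensity falls below a lower hysteresis threshold, which suppresses "jitter" around the press threshold.

```swift
// Hypothetical hysteresis gate; the names and values are illustrative only.
struct PressDetector {
    let pressThreshold: Double
    let hysteresisThreshold: Double   // e.g., 75% of the press threshold
    private(set) var isPressed = false

    init(pressThreshold: Double, hysteresisThreshold: Double) {
        self.pressThreshold = pressThreshold
        self.hysteresisThreshold = hysteresisThreshold
    }

    // Returns whether a press is currently considered active.
    mutating func update(intensity: Double) -> Bool {
        if !isPressed && intensity >= pressThreshold {
            isPressed = true          // "down stroke"
        } else if isPressed && intensity < hysteresisThreshold {
            isPressed = false         // "up stroke"
        }
        return isPressed
    }
}

// Example: var detector = PressDetector(pressThreshold: 0.6, hysteresisThreshold: 0.45)
```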
For ease of explanation, optionally, a description of an operation performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is triggered in response to detection of any of the following: the contact intensity increases above the press input intensity threshold, the contact intensity increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the contact intensity decreases below the press input intensity threshold, and/or the contact intensity decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples in which operations are described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operations are optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold that corresponds to and is less than the press input intensity threshold.
Attention is now directed to embodiments of a user interface ("UI") and associated processes implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Figs. 6A-6AN illustrate exemplary user interfaces for navigating between avatars in an application (e.g., an instant messaging application), according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the process in FIG. 7.
Fig. 6A depicts a device 600 having a display 601, which in some cases is a touch-sensitive display. In some embodiments, device 600 further includes a camera 602 that includes at least an image sensor capable of capturing data representing a portion of a spectrum (e.g., visible, infrared, or ultraviolet light). In some embodiments, camera 602 includes multiple image sensors and/or other types of sensors. In addition to capturing data representing sensed light, in some embodiments, camera 602 can capture other types of data such as depth data. For example, in some embodiments, the camera 602 also captures depth data using speckle, time-of-flight, parallax, or focus based techniques. Image data captured by device 600 using camera 602 includes data corresponding to a portion of the spectrum of a scene within the camera's field of view. Additionally, in some embodiments, the captured image data further includes depth data for the light data. In some other embodiments, the captured image data comprises data sufficient to determine or generate depth data for the data of the portion of the spectrum. In some embodiments, the device 600 includes one or more features of the device 100, 300, or 500.
In some examples, the electronic device 600 includes a depth camera, such as an infrared camera, a thermal imaging camera, or a combination thereof. In some examples, the device further includes a light emitting device (e.g., a light projector), such as an IR floodlight, a structured light projector, or a combination thereof. Optionally, the light emitting device is used to illuminate the object during capture of images by the visible light camera and the depth camera (e.g., an IR camera), and information from the depth camera and the visible light camera is used to determine depth maps of different parts of the object captured by the visible light camera. In some implementations, a depth map (e.g., a depth map image) includes information (e.g., values) related to the distance of objects in a scene from a viewpoint (e.g., a camera). In one embodiment of the depth map, each depth pixel defines the location in the Z-axis of the viewpoint at which its corresponding two-dimensional pixel is located. In some examples, the depth map is composed of pixels, where each pixel is defined by a value (e.g., 0 to 255). For example, a "0" value represents a pixel located farthest from a viewpoint (e.g., camera) in a "three-dimensional" scene, and a "255" value represents a pixel located closest to the viewpoint in the "three-dimensional" scene. In other examples, the depth map represents a distance between an object in the scene and a plane of the viewpoint. In some implementations, the depth map includes information about the relative depths of various features of the object of interest in the field of view of the depth camera (e.g., relative depths of the eyes, nose, mouth, and ears of the user's face). In some embodiments, the depth map comprises information enabling the device to determine the contour of the object of interest in the z-direction. In some implementations, the lighting effects described herein are displayed using parallax information from two cameras (e.g., two visible light cameras) for rear-facing images, and depth information from a depth camera is used in conjunction with image data from the visible light cameras for front-facing images (e.g., self-portrait images). In some implementations, the same user interface is used when determining depth information using two visible light cameras and when determining depth information using the depth camera, thereby providing a consistent experience for the user even when distinct techniques are used to determine the information used in generating a lighting effect. In some embodiments, upon displaying the camera user interface with one of the lighting effects applied thereto, the device detects selection of the camera switching affordance and switches from the front-facing cameras (e.g., a depth camera and a visible light camera) to the rear-facing cameras (e.g., two visible light cameras spaced apart from each other) (or vice versa) while maintaining display of the user interface controls for applying the lighting effect and replacing the display of the field of view of the front-facing cameras with the field of view of the rear-facing cameras (or vice versa).
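The 0-to-255 depth convention described above (0 farthest, 255 closest) can be illustrated with a simple conversion into relative distances; this is a sketch, not an implementation used by the device.

```swift
// Hypothetical conversion of 8-bit depth values into relative distances in the
// 0...1 range, where 0.0 is closest to the viewpoint and 1.0 is farthest.
func relativeDistances(from depthMap: [[UInt8]]) -> [[Double]] {
    depthMap.map { row in
        row.map { value in 1.0 - Double(value) / 255.0 }
    }
}

// Example: a pixel value of 255 (closest) maps to 0.0; a value of 0 (farthest)
// maps to 1.0.
```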
In fig. 6A, the device 600 displays an instant messaging user interface 603 for an instant messaging application. The instant messaging user interface 603 includes a message display area 604 that includes an instant message 605 delivered to a participant in the messaging session, the participant being represented by a recipient identifier 606. The instant messaging user interface 603 also includes a message composition field 608 for displaying input (e.g., text input, multimedia input, etc.) to be sent to the participants in the message session. Instant messaging user interface 603 also includes an application taskbar affordance 610, a keyboard display area 612, and a text suggestion area 614.
In FIG. 6B, device 600 detects input 616 (e.g., a touch input on display 601) at a location corresponding to application taskbar affordance 610.
In FIG. 6C, in response to detecting the input 616, the device 600 replaces the text suggestion region 614 with an application taskbar 618 having application affordances 620 corresponding to various applications. Device 600 also replaces keyboard display area 612 with application display area 622 for displaying an application user interface corresponding to a selected one of the application affordances.
In FIG. 6D, device 600 detects input 624 selecting application affordance 620a (e.g., a touch input on display 601 at a location corresponding to application affordance 620 a).
In FIG. 6E, in response to detecting the input 624, the device 600 replaces the application taskbar 618 and the application display area 622 with an avatar splash screen 626 having an example avatar 628. The example avatar includes an example customizable avatar 628a and an example non-customizable avatar 628 b. In the embodiment shown in FIG. 6E, customizable avatar 628a is positioned above non-customizable avatar 628 b. In some embodiments, avatar splash screen 626 includes an animated display of example avatars 628 that move and change facial expressions to give the appearance of the example avatars interacting with each other (e.g., appearing to talk to each other, blink, laugh, smile, etc.). In some embodiments, device 600 displays avatar splash screen 626 only when application affordance 620a is first selected or when a customizable avatar is not created. In some embodiments, the device 600 optionally displays an avatar selection interface, such as a condensed avatar selection interface 668 (see fig. 6L and corresponding discussion below), when the avatar splash screen 626 is not displayed.
In some embodiments, the virtual avatar is a user representation (e.g., a graphical representation of the user) that can be graphically depicted. In some embodiments, the virtual avatar is non-realistic (e.g., cartoon). In some embodiments, the virtual avatar includes an avatar face having one or more avatar features (e.g., avatar facial features). In some embodiments, the avatar features correspond to (e.g., map) one or more physical features of the user's face, such that detected movement of the physical features of the user (e.g., determined based on a camera such as a depth sensing camera) affects the avatar features (e.g., affects a graphical representation of the features).
In some examples, the user can manipulate characteristics or features of the virtual avatar using camera sensors (e.g., camera module 143, optical sensor 164) and, optionally, depth sensors (e.g., depth camera sensor 175). When the physical features (such as facial features) and position (such as head position, head rotation, or head tilt) of the user change, the electronic device detects the change and modifies the displayed image of the virtual avatar to reflect the change in the physical features and position of the user. In some embodiments, changes to the user's physical characteristics and location are indicative of various expressions, emotions, contexts, moods, or other non-verbal communication. In some embodiments, the electronic device modifies the displayed image of the virtual avatar to represent these expressions, emotions, contexts, moods, or other non-verbal communication.
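As an informal illustration of the mapping just described, the following Swift sketch drives assumed avatar features directly from corresponding detected facial features; the FacePose and AvatarPose types and their fields are hypothetical simplifications (an actual mapping could be weighted or nonlinear).

```swift
// Illustrative sketch of the face-tracking mapping described above.
struct FacePose {
    var headTilt: Double       // radians
    var mouthOpenness: Double  // 0.0 (closed) ... 1.0 (fully open)
    var browRaise: Double      // -1.0 (furrowed) ... 1.0 (raised)
}

struct AvatarPose {
    var headTilt: Double
    var mouthOpenness: Double
    var browRaise: Double
}

// Each avatar feature is driven by the corresponding detected facial feature,
// so changes in the user's face are mirrored by the displayed avatar.
func avatarPose(tracking face: FacePose) -> AvatarPose {
    AvatarPose(headTilt: face.headTilt,
               mouthOpenness: face.mouthOpenness,
               browRaise: face.browRaise)
}
```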
In some embodiments, the customizable avatar is a virtual avatar that can be selected and customized by a user, for example, to achieve a desired appearance (e.g., to look like the user). Customizable avatars typically have the appearance of a human character, rather than an anthropomorphic construct of a non-human character such as an animal or other non-human object. Additionally, features of the avatar may be created or changed, if desired, using an avatar editing user interface (e.g., the avatar editing user interface discussed below with reference to fig. 8A-8 CF). In some embodiments, a customizable avatar may be created and configured to achieve a customized physical appearance, physical configuration, or modeling behavior.
In some embodiments, the non-customizable avatars are virtual avatars that can be selected by a user but are generally not fundamentally configurable, although their appearance can be changed by face tracking, as described in more detail below. Instead, non-customizable avatars are pre-configured and typically do not have feature components that are modifiable by the user. In some embodiments, the non-customizable avatar has the appearance of a non-human character, such as an anthropomorphic construct of an animal or other non-human object. The user cannot create a non-customizable avatar, or modify it to achieve significant changes in the physical appearance, physical configuration, or modeling behavior of the non-customizable avatar.
In FIG. 6F, device 600 detects input 630 (e.g., a touch gesture on display 601) on continuation affordance 632.
In fig. 6G, in response to detecting the input 630, the device 600 displays an expanded avatar selection interface 634, which provides an initial set of avatar options that may be selected for use in the instant messaging user interface 603 (e.g., sent to participants in a message session). In the embodiments discussed herein, an avatar is a representation of a virtual character that may be animated to display changes (e.g., in response to the device detecting a change in a user's face). The avatar may correspond to an avatar option, which is a static representation having the same appearance and characteristics as the avatar but is typically not animated. The avatar option is typically a selectable representation of the avatar, and when the avatar option is selected, the corresponding avatar is displayed.
The extended avatar selection interface 634 includes an avatar display area 636 and an avatar option area 638. Avatar option area 638 includes a set of selectable avatar options 640. The selected avatar option is indicated by a border 642, which is shown in fig. 6G as surrounding the initially selected monkey avatar option 640a. The selected avatar option is represented in avatar display area 636 as avatar 645 (e.g., avatar 645 is a monkey corresponding to monkey avatar option 640a). Each avatar option 640 may be selected by tapping the corresponding avatar option. Thus, in response to receiving a selection of a different one of avatar options 640, device 600 modifies displayed avatar 645 to represent the newly selected avatar option and moves border 642 to the selected avatar option. For example, if the device 600 detects selection of the unicorn avatar option 640b, the device 600 displays border 642 around the unicorn avatar option 640b and modifies avatar 645 to be displayed as a unicorn corresponding to the unicorn avatar option 640b.
The avatar display area 636 also includes a capture affordance 644 that is selectable to capture an image of the avatar 645 for sending to participants in the messaging session (see instant messaging user interface 603). In some embodiments, the captured image is a still image or a video recording depending on the type of gesture detected on the capture affordance 644. For example, if device 600 detects a tap gesture on capture affordance 644, device 600 captures a still image of avatar 645 at the time the tap gesture occurs. If device 600 detects a tap-and-hold gesture on capture affordance 644, device 600 captures a video recording of avatar 645 for the duration of the tap-and-hold gesture. In some embodiments, video recording stops when the finger is lifted off the affordance. In some embodiments, video recording continues until a subsequent input (e.g., a tap input) is detected at the location corresponding to the affordance. In some embodiments, the captured image (e.g., still image or video recording) of avatar 645 is then inserted into message composition field 608 for subsequent transmission to the participants of the message conversation. In some embodiments, the captured image of avatar 645 is sent directly to the participants of the message conversation without inserting the captured image into message composition field 608.
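A minimal sketch of the capture behavior described above, under an assumed gesture model: a tap yields a still image, while a touch-and-hold yields a video recording for the duration of the hold. The enum and function names are illustrative, not API from the disclosure.

```swift
import Foundation

// Hedged sketch of dispatching on the gesture type detected on the capture affordance.
enum CaptureGesture {
    case tap
    case touchAndHold(duration: TimeInterval)
}

enum CapturedMedia {
    case stillImage
    case videoRecording(duration: TimeInterval)
}

func handleCaptureAffordance(_ gesture: CaptureGesture) -> CapturedMedia {
    switch gesture {
    case .tap:
        return .stillImage                         // still image at the time of the tap
    case .touchAndHold(let duration):
        return .videoRecording(duration: duration) // recording for the duration of the hold
    }
}
```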
In some embodiments, device 600 tracks the movement and positioning (e.g., rotational movement and/or translational movement) of a user's face located in the field of view of a camera (e.g., camera 602), and in response updates the appearance of avatar 645 based on detected changes in the user's face (referred to herein generally as a "face tracking" function). For example, as shown in fig. 6H, device 600 updates the appearance of avatar 645 in response to detecting (e.g., using camera 602) a change to the user's face. In the example of fig. 6H, avatar 645 is shown with its head tilted and its eyes wide open, mirroring the similar expression and position of the user's face that is in the field of view of camera 602. This change to avatar 645 may be captured using capture affordance 644 and, optionally, sent to participants of the message session shown in fig. 6A. Although the avatar 645 shown in fig. 6H is a non-customizable avatar, the device 600 may modify a customizable avatar in a similar manner.
In the extended avatar selection interface shown in fig. 6G-6I, all of the avatar options 640 displayed in avatar option area 638 are non-customizable avatars pre-configured for immediate selection by the user. This is because no customizable avatar has been created. However, as shown in FIG. 6I, device 600 displays an avatar creation prompt 646 extending from avatar creation icon 648 to prompt the user to select avatar creation icon 648, which causes device 600 to initiate a process for creating a new customizable avatar that can then be added to avatar option area 638 and, optionally, used in instant messaging user interface 603. The combination of the displayed avatar creation prompt 646 and avatar creation icon 648 (having a "+" shape in FIG. 6I) informs the user to select the avatar creation icon 648 to allow the user to create a customizable avatar that can be added to the extended avatar selection interface 634 and the library interface 686 in FIG. 6U.
In some embodiments, avatar creation prompt 646 appears after a slight delay and displays an animation of various exemplary customizable avatars appearing and changing facial expressions. For example, in fig. 6I, avatar creation prompt 646 shows an exemplary customizable avatar 646a, a male avatar wearing a hat and glasses and having a smiling facial expression. In fig. 6J, avatar creation prompt 646 transitions to show an exemplary customizable avatar 646b, a female avatar with black hair parted in the middle and having a broad, smiling facial expression.
In some embodiments, the device 600 displays new customizable avatars (such as those created after selecting the avatar creation icon 648) appearing in the avatar option area 638 at the end of the set of avatar options 640, rather than between any two non-customizable avatars. For example, all newly created customizable avatars may be displayed at the back end of the set of avatars (e.g., after the unicorn avatar option 640b, but not between the unicorn avatar option 640b and the chicken avatar option 640 c) or at the front end of the set of avatars (e.g., next to the avatar creation icon 648 or between the avatar creation icon 648 and the monkey avatar option 640 a). Thus, all customizable avatars are displayed grouped together and separated (e.g., isolated or separated) from non-customizable avatars. This separation of customizable and non-customizable avatars is maintained in the various user interfaces described with respect to fig. 6A-6 AN.
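The grouping behavior described above could be maintained, for example, by always inserting a newly created customizable avatar at the boundary of the customizable grouping rather than at an arbitrary position between non-customizable avatars. The following Swift sketch assumes the customizable grouping precedes the non-customizable grouping; the types and function names are hypothetical.

```swift
// Illustrative sketch: keep customizable avatars grouped together, never
// interleaved with the non-customizable avatars.
enum AvatarKind { case customizable, nonCustomizable }

struct AvatarOption {
    let name: String
    let kind: AvatarKind
}

func insertNewCustomizable(_ new: AvatarOption, into options: [AvatarOption]) -> [AvatarOption] {
    // Insert immediately before the first non-customizable avatar, i.e., at the
    // end of the customizable grouping (assuming customizable avatars come first).
    let firstNonCustomizable = options.firstIndex { $0.kind == .nonCustomizable } ?? options.count
    var updated = options
    updated.insert(new, at: firstNonCustomizable)
    return updated
}
```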
In fig. 6J, device 600 detects input 650 (e.g., a tap gesture on display 601) at a location corresponding to avatar-creating icon 648.
In FIG. 6K, in response to detecting input 650, device 600 displays avatar-editing user interface 652 with cancel affordance 654 and complete affordance 666. The avatar editing user interface 652 is similar to the avatar editing user interface 801 shown in fig. 8A. Avatar editing user interface 652 may be used to create customizable avatars in accordance with the disclosure provided below with respect to fig. 8A-8 CF. For the sake of brevity, details regarding creating and editing the avatar are not repeated here, but may be found in the following disclosure (e.g., fig. 8A-8 CE and related disclosure).
After the user customizes the avatar in avatar editing user interface 652, the user may select completion affordance 666 to save the avatar as a new customized avatar (shown in FIG. 6L as customizable female avatar 670). In response to detecting selection of the completion affordance 666, the device 600 saves the new custom avatar and displays the instant messaging user interface 603 with the condensed avatar selection interface 668, as shown in FIG. 6L. Alternatively, the user may select the cancel affordance 654 to discard the new custom avatar and return to the extended avatar selection interface 634 shown in FIG. 6H. After the device 600 saves the new custom avatar, the new custom avatar may be viewed in the extended avatar selection interface 634 by returning to the extended avatar selection interface 634 as described below. Since the new customized avatar is a customizable avatar and not a non-customizable avatar, when the new customized avatar is viewed in the extended avatar selection interface 634, it is separated from the non-customizable avatars (e.g., displayed at the end of the set of avatar options 640 rather than between any two non-customizable avatars) and grouped with the other customizable avatars.
In FIG. 6L, the device 600 displays a condensed avatar selection interface 668 that provides a condensed view of the avatar options shown in the extended avatar selection interface 634 (e.g., the avatar options 640). The condensed avatar selection interface 668 includes a scrollable list of avatars 675 (corresponding to avatar options 640) that may be selected by the user. The device 600 displays the currently selected avatar (e.g., the female avatar 670 in FIG. 6L) at a central location in the condensed avatar selection interface 668. When the currently selected avatar is a customizable avatar (e.g., female avatar 670), device 600 also displays an options affordance 674 that can be selected to display an options menu (discussed below with reference to fig. 6W). Different avatars may be selected in the condensed avatar selection interface 668 by scrolling them to the central location, as discussed in more detail below.
In FIG. 6L, the condensed avatar selection interface 668 is displayed in the instant message user interface 603 at the location previously occupied by the text suggestion region 614 and the keyboard display region 612. The application taskbar 618 is optionally displayed under the condensed avatar selection interface 668 showing a selected application affordance 620a indicated by a border 672. By displaying the condensed avatar selection interface 668 in the instant message user interface 603, the device 600 provides the user with convenient access to select an avatar (e.g., as a sticker, avatar image, or avatar record) for sending to a message session participant.
The device 600 groups the displayed customized avatars and non-customized avatars by type and arranges the groups in series such that scrolling in one direction provides access to one type of avatar (e.g., a non-customized avatar) and scrolling in the opposite direction provides access to a different type of avatar (e.g., a customizable avatar).
Device 600 displays customizable female avatar 670 in the center of condensed avatar selection interface 668, at a border region between the customizable avatars and the non-customizable avatars (e.g., with the customizable avatars on one side of female avatar 670 and the non-customizable avatars on the other side of female avatar 670; see also fig. 6AE through 6AG). Thus, scrolling the displayed list of avatars 675 in one direction displays the non-customizable avatars, and scrolling in the opposite direction displays the customizable avatars. In some embodiments, the list of avatars 675 can be scrolled to display an avatar creation affordance 669 (similar in function to avatar creation icon 648) that is positioned at the end of the customizable avatars opposite the non-customizable avatars, such that avatar creation affordance 669 is positioned at one end of the grouping of customizable avatars and the grouping of non-customizable avatars is positioned at the other end of the grouping of customizable avatars. In such embodiments, avatar creation affordance 669 may be selected to create a new customizable avatar in a manner similar to that discussed above with respect to fig. 6I-6K.
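As a rough illustration of this ordering, the following sketch builds a single list with the avatar creation affordance at one end, followed by the grouping of customizable avatars and then the grouping of non-customizable avatars, so that scrolling in opposite directions reaches the two different types; the entry type and names are assumptions.

```swift
// Illustrative sketch of the list layout described above.
enum AvatarListEntry {
    case creationAffordance
    case customizable(name: String)
    case nonCustomizable(name: String)
}

func buildAvatarList(customizable: [String], nonCustomizable: [String]) -> [AvatarListEntry] {
    // Creation affordance at one end, then the customizable grouping, then the
    // non-customizable grouping.
    return [.creationAffordance]
        + customizable.map { AvatarListEntry.customizable(name: $0) }
        + nonCustomizable.map { AvatarListEntry.nonCustomizable(name: $0) }
}
```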
As shown in fig. 6M-6R, the device 600 modifies the avatar (e.g., the customizable female avatar 670) displayed in the condensed avatar selection interface 668 in response to detecting a facial change. For reference, fig. 6M-6R include representations of faces 673 detected in the field of view of the camera (e.g., 602). Fig. 6M-6R illustrate modification of displayed avatars (e.g., customizable avatar 670 and non-customizable avatar 671) in response to changes in detected face 673. In some embodiments, the views of the face 673 in fig. 6M-6R are shown from the perspective of a device positioned facing the face 673. Thus, the corresponding changes to the displayed avatar are shown in fig. 6M-6R as mirror images relative to the movement of the face 673.
In fig. 6M, the device 600 detects that the face 673 is tilted and is making an expression in which the lips 673-1 are turned down, the eyebrows 673-2 are furrowed, and the eyes 673-3 are slightly squinted. In response, the device 600 modifies the displayed avatar 670 to have the same facial expression and position (e.g., an expression with the head tilted and a frown).
In fig. 6M-6O, the device 600 detects a horizontal gesture 676 (e.g., a swipe or touch and drag input on the display 601) that starts from the right side of the list of avatars 675 and moves left toward the left side of the list of avatars 675. In response to detecting the horizontal gesture 676, the device 600 displays a list of avatars 675 scrolling to the left based on the magnitude (and direction) of the horizontal gesture 676, such that the customizable female avatar 670 scrolls to the left and the non-customizable monkey avatar 671 is scrolled to the center of the condensed avatar selection interface 668.
As the female avatar 670 scrolls from the center position in fig. 6M to the left-shifted position in fig. 6O, the device 600 displays an animation of the female avatar 670 transitioning from the face-tracking state in fig. 6M (the avatar 670 has a pose matching the pose of the face 673) to the static state in fig. 6O (the avatar 670 has a default pose not determined based on the pose of the face 673). While displaying the animated transition, the device 600 stops modifying the female avatar 670 based on the face 673 (although the face 673 may still optionally be tracked by the device 600). For example, the face 673 still has the frowning pose in fig. 6N and 6O, although the head is no longer tilted, and the female avatar 670 has a pose different from that of the face 673 in fig. 6N and 6O.
Fig. 6N shows an intermediate state of the animation of the avatar 670 moving from the face-tracking state in fig. 6M to the static position in fig. 6O. In fig. 6N, the device 600 does not modify the female avatar 670 based on the detected face 673, but rather shows the female avatar 670 transitioning from the frown in fig. 6M to the static smiling pose in fig. 6O. Specifically, fig. 6N shows the head of the female avatar moving to an upright position, her mouth in a position between a frown and a smile (e.g., between the detected mouth position of the face and the mouth position of the static avatar), and her eyebrows no longer furrowed.
Fig. 6N also shows the monkey avatar 671 in a slightly off-center position as the monkey avatar 671 moves from the right-shifted position in fig. 6M to the center position in fig. 6O. The monkey avatar 671 has a static smiling pose in fig. 6M to 6O.
In fig. 6O, the female avatar 670 is fully shifted to the left position in a static smiling pose, and the monkey avatar 671 is in the center position. The device 600 has not modified the monkey avatar 671 based on the detected face 673. In some embodiments, the device 600 generates haptic feedback (e.g., a haptic output) and, optionally, audio output to indicate when one of the scrolled avatars 675 is centered in the condensed avatar selection interface 668. The haptic feedback informs the user that the avatar is positioned such that releasing the horizontal gesture 676 causes the device 600 to select that avatar.
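One plausible way to decide when such feedback should fire is to snap the scroll offset to the nearest item and emit the haptic only when a new avatar settles at the center, as in the following sketch; the scroll-offset model and names are assumptions, and on iOS the playHaptic closure might wrap something like UIImpactFeedbackGenerator.

```swift
// Illustrative sketch: emit haptic feedback once per avatar that becomes centered.
struct ScrollState {
    var centeredIndex: Int?   // index of the avatar currently at the center, if any
}

func updateCenteredAvatar(contentOffset: Double,
                          itemWidth: Double,
                          state: inout ScrollState,
                          playHaptic: () -> Void) {
    let nearestIndex = Int((contentOffset / itemWidth).rounded())
    let distanceFromCenter = abs(contentOffset - Double(nearestIndex) * itemWidth)
    let isCentered = distanceFromCenter < itemWidth * 0.05   // small snap tolerance

    if isCentered, state.centeredIndex != nearestIndex {
        state.centeredIndex = nearestIndex
        playHaptic()   // releasing the gesture now would select this avatar
    } else if !isCentered {
        state.centeredIndex = nil
    }
}
```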
After the monkey avatar 671 is positioned in the center of the screen in fig. 6O, the device 600 detects the termination of the gesture 676 and resumes modifying the centered avatar (e.g., monkey avatar 671) based on the detected face 673 in fig. 6P. Thus, in fig. 6P, the monkey avatar 671 assumes the frowning pose of the face 673 (e.g., the device 600 modifies the monkey avatar 671 to transition from its static pose to the pose of the face 673).
In some embodiments, as the user scrolls through the list of avatars 675, the device 600 modifies the avatars to present the pose (e.g., position and facial expression) of the face 673 as each avatar stops at the center position of the condensed avatar selection interface 668. Thus, the user may maintain a particular facial expression and the device 600 will modify the central avatar to match the facial expression. As the user holds the facial expression and swipes to a different avatar, the device 600 displays an animation of the currently selected avatar transitioning from the held facial expression of the user's face to a static default pose as the next avatar is scrolled to a center position. The device 600 then displays the next avatar transitioning from its static pose to the facial expression held by the user. In some embodiments, the device 600 does not begin modifying the avatar at the center of the condensed avatar selection interface 668 until after the avatar pauses at the center position (in response to a detected face, or with an animated transition from a tracked face to a static pose). Thus, when the user quickly scrolls through the list of avatars 675 (e.g., where the avatars are scrolled without stopping on a particular avatar), the device 600 does not display or modify the avatars based on the detected facial animation as the avatars scroll.
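The pause-before-tracking behavior described here could be approximated with a short timer that only enables face-driven modification once the centered avatar has rested at the center position, as in the sketch below; the 0.3-second interval and the class name are illustrative assumptions.

```swift
import Foundation

// Hedged sketch: face tracking for the centered avatar begins only after the
// avatar pauses at the center, so avatars are not animated during fast scrolling.
final class CenterAvatarTracker {
    private var pauseTimer: Timer?
    private(set) var isTrackingFace = false

    func avatarDidCenter(startTracking: @escaping () -> Void) {
        pauseTimer?.invalidate()
        isTrackingFace = false
        pauseTimer = Timer.scheduledTimer(withTimeInterval: 0.3, repeats: false) { [weak self] _ in
            self?.isTrackingFace = true
            startTracking()   // begin modifying the avatar based on the detected face
        }
    }

    func avatarDidLeaveCenter() {
        pauseTimer?.invalidate()
        isTrackingFace = false
    }
}
```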
Since the monkey avatar 671 is the non-customizable avatar selected in FIG. 6P, device 600 does not display option affordance 674. Since the customizable avatar and non-customizable avatars are grouped as previously discussed, continuing to scroll in the left direction causes the device 600 to display additional non-customizable avatars (such as, for example, the robot avatar 678), but not the customizable avatar. The customizable avatar may be displayed by scrolling in a rightward direction, as discussed below with reference to fig. 6Q and 6R.
In fig. 6Q and 6R, the device 600 detects a horizontal gesture 680 (e.g., a swipe or touch and drag input on the display 601) moving toward the right side of the list of avatars 675. In response to detecting the horizontal gesture 680, the device 600 displays the list of avatars 675 scrolling to the right based on the magnitude (and direction) of the horizontal gesture 680, such that the non-customizable robot avatar 678 scrolls off of the display 601, the monkey avatar 671 scrolls to a right-shifted position, and the customizable female avatar 670 scrolls to the center of the display. Since the customizable avatars and non-customizable avatars are grouped as previously discussed, continuing to scroll in the rightward direction causes the device 600 to display additional customizable avatars (or, alternatively, the avatar creation affordance 669) instead of non-customizable avatars. As previously described, the non-customizable avatars may be displayed by scrolling in the left direction.
Fig. 6Q and 6R also show scrolling of the avatars with an animated transition similar to that described above with respect to fig. 6M-6P, but moving in the opposite direction. In fig. 6Q and 6R, as the avatars shift to the right, the device 600 animates the monkey avatar 671 transitioning from the face-tracking state in fig. 6P to the static state shown in fig. 6R, with an intermediate appearance of the transition shown in fig. 6Q. For example, in fig. 6P, the device 600 modifies the monkey avatar 671 based on the face 673 (e.g., the monkey avatar 671 has a pose matching the face 673). As shown in fig. 6Q and 6R, the device 600 stops modifying the monkey avatar 671 based on the face 673 (e.g., the face 673 maintains the frowning expression, but the monkey avatar 671 has a different pose) and displays an animated transition of the monkey avatar 671 moving from the pose of fig. 6P to the static appearance of fig. 6R. Fig. 6Q shows an intermediate state of the animated transition, in which the mouth of the monkey avatar 671 is in a position between a frown and a smile, and its eyebrows are in an unfurrowed position. The female avatar 670 is slightly shifted to the right, moving toward the center position while maintaining a static default smiling pose.
In FIG. 6R, the female avatar 670 is positioned in the center of the condensed avatar selection interface 668 with a static smiling pose. The monkey avatar 671 is in a right-shifted position and has a static pose (the static pose of the monkey avatar 671 is also a smiling pose similar to the static pose of the female avatar 670, but the static pose of each avatar may be different). The face 673 has transitioned to a neutral pose (e.g., smiling slightly, not frowning). In fig. 6R, the device 600 does not modify the female avatar 670 based on the detected face 673.
In FIG. 6S, the device 600 displays a customizable female avatar 670 selected by being located in the center of the condensed avatar selection interface 668. Likewise, because the female avatar 670 is a customizable avatar, the device 600 displays the option affordance 674. The device 600 also displays an edit affordance 682 that is selectable to access the avatar library. In some embodiments, device 600 displays an editing affordance 682 regardless of whether the displayed avatar is customizable or non-customizable.
In FIG. 6T, device 600 detects input 684 on edit affordance 682 (e.g., a tap gesture on display 601). In response to detecting input 684, device 600 displays a library interface 686 as shown in FIG. 6U.
In FIG. 6U, the device 600 displays a library interface 686 in response to detecting user input on an editing affordance (e.g., editing affordance 682). In the embodiment shown in fig. 6U, device 600 shows a library interface 686 with a female avatar option 670a and a new customized male avatar option 688a. The female avatar option 670a corresponds to the female avatar 670, and the male avatar option 688a corresponds to the male avatar 688 (as shown in fig. 6AE). In the embodiment shown in fig. 6U, customized male avatar option 688a is a customizable avatar option, which corresponds to a customizable male avatar (e.g., 688) created according to the steps discussed above with respect to fig. 6I-6K. For the sake of brevity, these steps are not repeated here. The device 600 displays male avatar option 688a and female avatar option 670a (the customizable avatar options) grouped together and separate from non-customizable avatar options.
In fig. 6V, device 600 detects input 690 (e.g., a touch input on display 601) for selecting female avatar option 670 a.
In fig. 6W, in response to detecting input 690 selecting the female avatar option 670a, device 600 displays an options menu 692. Device 600 displays the options menu 692 with an avatar (e.g., female avatar 670) corresponding to the avatar option selected from library interface 686 (e.g., female avatar option 670a), along with an edit option 692a, a copy option 692b, and a delete option 692c. Each of the edit, copy, and delete options can be selected to initiate a respective process for editing, copying, or deleting the avatar option (and corresponding avatar) selected in the library interface 686. In some embodiments, the device 600 modifies the avatar displayed in the options menu 692 according to the face tracking features discussed herein.
In fig. 6X, device 600 detects input 693a (e.g., a touch input on display 601) on edit option 692 a. In response to detecting input 693a, device 600 displays avatar editing user interface 694 (shown in FIG. 6Z), which is similar to avatar editing user interface 652 (but displays selected avatar 670, or a copy of selected avatar 670, instead of the default new avatar).
In fig. 6Y, device 600 detects input 693b (e.g., a touch input on display 601) on copy option 692 b. In response to detecting the input 693b, the device 600 creates a duplicate version of the avatar option selected in the library interface 686 (e.g., a duplicate of the female avatar 670 a) and a duplicate version of the corresponding avatar (e.g., the female avatar 670). The device 600 displays an avatar editing user interface 694 (shown in fig. 6Z) with a copy avatar.
In FIG. 6Z, the device 600 displays an avatar editing user interface 694 in response to detecting an input 693a or 693b. When device 600 detects input 693a on edit option 692a, device 600 shows avatar editing user interface 694, which displays an avatar (e.g., avatar 670) corresponding to the avatar option selected in library interface 686. However, when device 600 detects input 693b on copy option 692b, device 600 creates a copy of the selected avatar (e.g., a copy of female avatar 670) and displays the copied avatar in avatar editing user interface 694. In the embodiment shown in fig. 6Z, device 600 displays a duplicate avatar 695. In some embodiments, in response to detecting the input 693b, the device displays the library interface 686 with a copied avatar option instead of displaying the avatar editing user interface.
The avatar editing user interface 694 is similar to the avatar editing user interface described below with reference to fig. 8A-8 CF. For the sake of brevity, details of editing the avatar using avatar editing user interface 694 are not repeated here.
In fig. 6AA, device 600 displays library interface 686 with a copy avatar option 695a corresponding to copy avatar 695 (shown modified based on the selection of different avatar characteristics using avatar editing user interface 694). After saving the modified copy avatar 695 (e.g., detecting a selection of "done" in the avatar editing user interface 694), device 600 displays copy avatar option 695a at a location next to the selected avatar option from which the copy was created (e.g., next to avatar option 670a).
Fig. 6AB shows the options menu 692 after selecting the avatar option 670a in fig. 6V. In response to detecting input 693c on delete option 692c, the device deletes the selected avatar option 670a from library interface 686. In this case, the device 600 removes the avatar option 670a from the library interface, for example as shown in FIG. 6 AC. However, if device 600 does not detect any of inputs 693a through 693c, but rather detects selection of cancel affordance 696, options menu 692 is closed and device 600 displays library interface 686 having male avatar option 688a and female avatar option 670a, as shown in FIG. 6AD (similar to the state of library interface 686 shown in FIG. 6U).
In FIG. 6AD, the device 600 detects an input 697 (e.g., a touch gesture on the display 601) on the completion affordance 698 and, in response, exits the library interface 686 and displays a condensed avatar selection interface 668, as shown in FIG. 6 AE. The condensed avatar selection interface 668 includes a male avatar 688, a female avatar 670, and a non-customizable avatar 645.
In FIG. 6AE, three different gestures are represented on the condensed avatar selection interface 668. As described below, when device 600 detects a gesture in a particular direction (e.g., left or right), device 600 replaces the displayed avatar (e.g., customizable female avatar 670) with a particular type of avatar determined by the direction of the gesture. For example, if the gesture is along the left direction, the displayed avatar is replaced with a first type of avatar (e.g., a non-customizable avatar, or an avatar that is shaped to represent a non-human character). Conversely, if the gesture is in the right direction, the displayed avatar is replaced with a second type of avatar (e.g., a customizable avatar, or an avatar that is shaped to represent a human).
For example, in fig. 6AE, when the device 600 detects the left horizontal gesture 699a (e.g., a swipe or touch and drag gesture in a left direction on the display 601), the device 600 displays the embodiment shown in fig. 6AF, which shows the customizable female avatar 670 moving to the left, out of the center of the condensed avatar selection interface 668 (e.g., to a location indicating that the female avatar 670 is not selected), and the non-customizable monkey avatar 645 positioned in the center of the condensed avatar selection interface 668 (e.g., a location indicating that the monkey avatar 645 is selected). Thus, in response to detecting the left horizontal gesture 699a, device 600 displays a selection of non-customizable avatars. In some embodiments, the left horizontal gesture 699a causes the device 600 to scroll the condensed avatar selection interface 668 such that the customizable female avatar 670 moves completely off the screen and only one or more non-customizable avatars are displayed (e.g., similar to the embodiment of fig. 6O).
When the device 600 detects the right horizontal gesture 699b (e.g., a swipe or touch and drag gesture in the right direction on the display 601), the device 600 displays the embodiment shown in fig. 6AG showing the customizable female avatar 670 moving to the right (outside the center of the condensed avatar selection interface 668 (e.g., indicating a location where the female avatar 670 is not selected)) and the customizable male avatar 688 located in the center of the condensed avatar selection interface 668 (e.g., indicating a location where the male avatar 688 is selected), and optionally showing the avatar creation affordance 669. Thus, in response to detecting the right horizontal gesture 699b, the device 600 displays a selection of customizable avatars without displaying non-customizable avatars.
In some embodiments, the device 600 may display a scenario in which the initially selected avatar (at the center of the condensed avatar selection interface 668) is a non-customizable avatar, and in response to detecting a horizontal gesture, the displayed condensed avatar selection interface 668 scrolls such that the non-customizable avatar moves completely off the screen and only one or more customizable avatars are displayed in the condensed avatar selection interface 668.
When the device 600 detects the vertical gesture 699c along an upward direction (e.g., a vertical swipe or vertical touch and drag gesture on the display 601), the device 600 expands the condensed avatar selection interface 668 to display the expanded avatar selection interface 634 shown in fig. 6AH.
In fig. 6AH, device 600 displays an extended avatar selection interface 634 with custom male avatar option 688a and custom female avatar option 670a in avatar option area 638. The female avatar option 670a is selected, and the corresponding female avatar 670 is displayed in avatar display area 636. The device 600 also displays an options affordance 674 in the avatar display area 636. The device 600 also displays a capture affordance 644 that is selectable for recording the avatar 670 (e.g., while it is modified based on detected changes in the user's face).
In fig. 6 AI-6 AJ, the device 600 detects a scroll gesture 6100 (e.g., a vertical swipe or tap and drag gesture on the display 601) on the avatar option area 638. In response to detecting the scroll gesture 6100, the device 600 scrolls the display of avatar options shown in the avatar option area 638.
In fig. 6AK, device 600 detects an input 6102 (e.g., a tap gesture on display 601) on option affordance 674. In response to detecting the input 6102, in fig. 6AL, the device 600 replaces the displayed avatar option area 638 with an options menu area 6104 that includes edit, copy, and delete options similar to the respective edit, copy, and delete options discussed above (e.g., 692a, 692b, 692c). The device 600 also displays a cancel affordance 6106. In response to detecting an input 6108 on the cancel affordance 6106 (e.g., a tap gesture on the display 601), the device 600 removes the options menu area 6104 and again displays the avatar option area 638 shown in fig. 6AM.
In some embodiments, the device 600 changes the avatar displayed in the avatar display area 636 in response to selection of a different avatar option. For example, in fig. 6AM, device 600 detects input 6110 (e.g., a tap gesture on display 601) on poop avatar option 6112a. In response, the device 600 removes the avatar 670 and displays the poop avatar 6112, as shown in fig. 6AN. In addition, the device 600 removes the options affordance 674 because the selected avatar option (e.g., 6112a) corresponds to a non-customizable avatar (e.g., the poop avatar 6112).
Fig. 7 is a flow diagram illustrating a method for navigating between avatars in an application using an electronic device (e.g., 600), in accordance with some embodiments. Method 700 is performed at a device (e.g., 100, 300, 500, 600) having a display apparatus and one or more input devices. Some operations in method 700 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 700 provides an intuitive way for navigating between avatars in an application. The method reduces the cognitive burden on the user to manage the avatar, thereby creating a more efficient human-machine interface. For battery-driven electronic devices, enabling a user to navigate between avatars in an application faster and more efficiently saves power and increases the time interval between battery charges.
The electronic device displays (702) an avatar navigation user interface (e.g., 668) via a display device. The avatar navigation user interface includes an avatar (e.g., 670).
While displaying the avatar navigation user interface (e.g., 668), the electronic device detects (704) a gesture (e.g., 699a, 699b) for the avatar navigation user interface (e.g., 668) via one or more input devices (e.g., a swipe gesture at a location on the touchscreen display corresponding to the avatar navigation user interface).
In response to (706) detecting the gesture (e.g., 699a, 699b), in accordance with a determination (708) that the gesture is along a first direction (e.g., a horizontal swipe gesture along a right direction), the electronic device displays (710), in the avatar navigation user interface (e.g., 668), an avatar of a first type (e.g., 670, 688) (e.g., an avatar that is shaped to represent a human rather than a non-human character, or an avatar that is configurable or creatable from an avatar prototype or template). Displaying the first type of avatar provides visual feedback to the user confirming that an input has been received and that the device is now in a state in which the first type of avatar may be selected. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In response to (706) detecting the gesture (e.g., 699a), in accordance with a determination (714) that the gesture (e.g., 699a) is along a second direction (e.g., a horizontal swipe gesture along a left direction) opposite the first direction, the electronic device displays (716), in the avatar navigation user interface, an avatar of a second type (e.g., 645) that is different from the first type (e.g., 670, 688) (e.g., an avatar shaped to represent a non-human character, or a selectable but non-configurable avatar). Displaying the second type of avatar provides visual feedback to the user confirming that an input has been received and that the device is now in a state in which the second type of avatar can be selected. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, further in accordance with a determination (708) that the gesture is in the first direction, the electronic device forgoes (712) displaying the second type of avatar (e.g., 645) in the avatar navigation user interface (e.g., 668). Further, in accordance with a determination (714) that the gesture is in a second direction opposite the first direction, the electronic device forgoes (718) displaying the avatar of the first type (e.g., 670, 688) in the avatar navigation user interface (e.g., 668). By not displaying a particular type of avatar, the electronic device provides visual feedback to the user confirming that an input has been received and that the device is not in a state in which the particular type of avatar may be selected. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
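A minimal sketch of this direction-dependent branching, using assumed enum names: one swipe direction yields only the first type of avatar and the opposite direction yields only the second type, which implicitly forgoes displaying the other type.

```swift
// Illustrative sketch of the gesture-direction dispatch in method 700.
enum SwipeDirection { case first, second }   // e.g., right vs. left horizontal swipe
enum AvatarType { case customizable, nonCustomizable }

func avatarTypeToDisplay(for direction: SwipeDirection) -> AvatarType {
    switch direction {
    case .first:  return .customizable      // display first type, forgo second type
    case .second: return .nonCustomizable   // display second type, forgo first type
    }
}
```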
According to some embodiments, after displaying the first type of avatar (e.g., 670), the electronic device detects a second gesture (e.g., 699b) along a second direction. In response to detecting the second gesture, the electronic device displays a second avatar of the first type (e.g., 688).
According to some embodiments, after displaying the second type of avatar (e.g., 645), the electronic device detects a third gesture along the second direction. In response to detecting the third gesture, the electronic device displays a second avatar of a second type (e.g., 678).
According to some embodiments, the first type of avatar (e.g., avatar 670) has the appearance of a human character (e.g., an avatar shaped to represent a human character rather than a non-human character). In some embodiments, such an avatar includes customizable (e.g., selectable or configurable) avatar features (e.g., head, hair, eyes, and lips as shown in fig. 8A-8 BB) that generally correspond to physical features of a person. For example, such avatars may include representations of people having various physical, human features or characteristics (e.g., an elderly female with a dark skin color and long, straight, brown hair). Such an avatar can also include a representation of a person (e.g., as shown in fig. 8 BB-8 CF) having various non-human characteristics (e.g., cosmetic enhancements, hats, glasses, etc.) that are typically associated with the appearance of a person. In some embodiments, such an avatar does not include an anthropomorphic construct, such as a stylized animal, a stylized robot, or a stylization of a normally inanimate or normally non-human object. The appearance of the avatar provides feedback to the user indicating the type of characteristics of the avatar that can be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the second type of avatar (e.g., avatar 645; the avatars corresponding to the avatar options shown in fig. 6G) has the appearance of a non-human character (e.g., an avatar shaped to represent a non-human character, including, for example, an anthropomorphic construct such as a stylized animal, a stylized robot, or a stylization of a normally inanimate or normally non-human object). The appearance of the avatar provides feedback to the user indicating the type of characteristics of the avatar that can be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, a first type of avatar (e.g., 670) includes a plurality of avatar characteristics (e.g., 851, 828) that are user configurable (e.g., creatable, selectable, customizable). In some embodiments, such avatars may be created by a user, or may be preconfigured with a plurality of features that may be configured by a user. In some embodiments, the configuration of the avatar features results in a significant change in the physical appearance or physical configuration of the virtual avatar.
According to some embodiments, the second type of avatar (e.g., 645) does not include user-configurable (e.g., creatable, selectable, customizable) avatar characteristics. In some embodiments, such avatars are pre-configured and do not have features that can be configured by a user. In some cases, such avatars may change slightly (e.g., change the color of the avatar or change the size of the avatar), but such changes do not significantly change the physical appearance or physical configuration of the virtual avatar.
According to some embodiments, the avatar navigation user interface includes a sub-region (e.g., 686) having a plurality of avatars. The plurality of avatars includes a first set of avatars of a first type (e.g., 670a, 688a, 670a) and a second set of avatars of a second type (e.g., 640 a). The first set of avatars of the first type are separate (e.g., apart) from the second set of avatars of the second type. In some embodiments, the first type of avatar is separate from the second type of avatar such that when the avatar navigation user interface is displayed and the electronic device detects a user gesture (e.g., a swipe gesture), the device displays or selects the first type of avatar in the avatar navigation user interface when the gesture is along a first direction; or when the gesture is in a second direction opposite the first direction, displaying a second type of avatar. In some embodiments, this allows the user to immediately select either the first type or the second type of avatar without having to scroll through multiple avatars of the same type to obtain different types of avatars. Providing visual separation of various types of avatars provides feedback to the user indicating that multiple types of avatars are displayed (and may be customized), and informing the user of the characteristic types of avatars that may be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the avatar (e.g., 670) is a selected one of the plurality of avatars displayed at a location (e.g., a border region (e.g., 675)) between one or more of the first set of avatars of the first type and one or more of the second set of avatars of the second type (e.g., the avatar initially displayed in the avatar navigation user interface is located between the set of avatars of the first type and the set of avatars of the second type).
According to some embodiments, the first set of avatars of the first type includes the selected one of the plurality of avatars. In accordance with a determination that the gesture is along the first direction, the electronic device replaces the selected one of the plurality of avatars with a different first type of avatar (e.g., the selected avatar (e.g., 670) is replaced with a different one of a first type of avatar from a first set of avatars of the first type (e.g., 688)). In accordance with a determination that the gesture is along the second direction, the electronic device replaces the selected one of the plurality of avatars with a second type of avatar (e.g., the selected avatar (e.g., 670) is replaced with one of a second type of avatar from a second set of avatars of the second type (e.g., 645)).
According to some embodiments, the second set of avatars of the second type includes the selected one of the plurality of avatars. In accordance with a determination that the gesture is along the first direction, the electronic device replaces the selected one of the plurality of avatars with a first type of avatar (e.g., the selected avatar (e.g., 645) is replaced with one of a first type of avatar from a first set of avatars of the first type (e.g., 670)). In accordance with a determination that the gesture is along the second direction, the electronic device replaces the selected one of the plurality of avatars with a different avatar of the second type (e.g., the selected avatar (e.g., 645) is replaced with a different one of the second type avatars (e.g., 678) from a second set of avatars of the second type).
Displaying the particular type of avatar provides visual feedback to the user confirming that an input has been received and that the device is now in a state in which the particular type of avatar can be selected. By replacing the avatar, the electronic device provides visual feedback that the device is in a state in which the replaced avatar can no longer be selected by the user. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the avatar navigation user interface includes a first affordance (e.g., 682) (e.g., a selectable, displayed avatar or "edit" affordance (not an avatar)). While displaying the avatar navigation user interface, the electronic device detects a gesture for the first affordance via one or more input devices (e.g., a touch gesture at a location on the touch screen display corresponding to the avatar of the "edit" affordance or display, or a swipe gesture along a third direction different from the first direction, such as a swipe-up gesture). In response to detecting the gesture to the first affordance, the electronic device displays an avatar library user interface (e.g., 686). The avatar library user interface includes a second affordance (e.g., 648) (e.g., a "new avatar" or "plus sign" affordance) and one or more avatars of the first type.
According to some embodiments, while displaying the avatar library user interface, the electronic device detects a gesture (e.g., a touch gesture at a location on the touch screen display corresponding to the "new avatar" affordance) for the second affordance (e.g., 648) via one or more input devices. In response to detecting the gesture for the second affordance, the electronic device displays an avatar editing user interface (e.g., 652). The avatar editing user interface is a user interface for generating (e.g., editing a new avatar to be added to the avatar library user interface) a new avatar of a first type. In some embodiments, the electronic device displays an avatar editing user interface and receives user input to create a new avatar of a first type. Once the new avatar of the first type is created, the device displays the new avatar of the first type in the avatar library user interface. For example, a new avatar of the first type is added to the end of the first type avatar displayed in the avatar library.
According to some embodiments, the electronic device generates a new avatar of the first type and displays the new avatar in the avatar library user interface (e.g., 686). The new avatar is displayed at a position after a last of the one or more avatars of the first type (e.g., at a sequential last position of the one or more avatars of the first type).
According to some embodiments, the avatar navigation user interface also includes an affordance (e.g., a "delete" affordance) (e.g., 692c) associated with a function for removing (e.g., deleting or hiding) the avatar from the displayed avatar navigation user interface. The electronic device detects, via one or more input devices, a gesture for an affordance associated with the function (e.g., a touch gesture at a location on the touch screen display corresponding to the "delete" affordance). In response to detecting the gesture to the affordance associated with the function, the electronic device removes (e.g., deletes or hides) the avatar from the displayed avatar navigation user interface.
According to some embodiments, the avatar navigation user interface is displayed in an instant messaging user interface (e.g., 603) (e.g., an interface for sending messages between participants of a conversation hosted by the communication platform). In some embodiments, the avatar may be accessed from an avatar navigation user interface displayed as part of the instant messaging user interface, such that an avatar selected from the avatar navigation user interface is displayed in the instant messaging user interface for sending to the participants of the conversation.
Displaying the avatar navigation user interface in the instant messaging user interface enables a user to navigate between avatars without leaving the instant messaging user interface, thus avoiding the need to provide user input to switch between applications of the electronic device. Reducing the number of user inputs required enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in accordance with a determination that the avatar navigation user interface does not include the first type of avatar, the electronic device displays an avatar launch user interface (e.g., 626) (e.g., an avatar splash screen) having an affordance (e.g., 632) (e.g., a "continue" affordance) associated with generating a new avatar of the first type. While displaying the avatar launch user interface, the electronic device detects a gesture (e.g., 630) for the affordance associated with generating the first type of new avatar (e.g., a touch gesture on the touch screen display at a location corresponding to the "continue" affordance). In response to detecting the gesture for the affordance associated with generating the first type of new avatar, the electronic device displays an avatar editing user interface (e.g., 652, 801). The avatar editing user interface is a user interface for generating (e.g., editing a new avatar to be added to the avatar library user interface) a new avatar of a first type.
According to some embodiments, in accordance with a determination that the avatar navigation user interface includes a first type of avatar, the electronic device displays the first type of avatar and an affordance (e.g., 682) (e.g., an "edit" affordance) associated with managing one or more features of the displayed first type of avatar (e.g., 670). In some embodiments, when one or more avatars of the first type have been created, the avatar navigation user interface displays one of the avatars of the first type and an affordance (e.g., an "edit" affordance). In some embodiments, in response to detecting selection of the affordance, the electronic device displays an avatar library user interface (e.g., 686) that includes representations of the first type of avatar (e.g., 670) and other avatars of the first type (e.g., 688). In some embodiments, the electronic device displays the avatar library user interface in response to detecting selection of the displayed first type of avatar. In some embodiments, in response to detecting selection of the affordance or of the displayed first type of avatar, the electronic device displays an avatar editing user interface (e.g., 652, 801) that provides a user interface for editing the first type of avatar.
According to some embodiments, displaying the first type of avatar includes displaying the first type of avatar transitioning from a non-interactive state (e.g., 670 in fig. 6L) (e.g., the first type of avatar has a predetermined appearance that does not react to changes in the user's face) to an interactive state (e.g., 670 in fig. 6M) (e.g., the first type of avatar has a dynamic appearance that reacts to changes in the user's face). According to some embodiments, displaying the second type of avatar includes displaying the second type of avatar (e.g., 678) transitioning from a non-interactive state (e.g., 678 in fig. 6O) (e.g., the second type of avatar has a predetermined appearance that does not react to changes in the user's face) to an interactive state (e.g., 678 in fig. 6P) (e.g., the second type of avatar has a dynamic appearance that reacts to changes in the user's face).
According to some embodiments, the electronic device displays, via a display device, an avatar library user interface (e.g., 686) that includes one or more saved (e.g., previously created) first-type avatars (e.g., 688, 670). The electronic device detects selection of (e.g., detects a gesture directed to) one of the saved first type avatars (e.g., a touch gesture at a location on the touch screen display that corresponds to the saved first type avatar). In response to detecting selection of (e.g., detecting a gesture directed to) one of the saved first-type avatars, the electronic device displays a menu (e.g., 692) having one or more menu affordances (e.g., an "edit" affordance 692a, a "copy" affordance 692b, or a "delete" affordance 692c), wherein the menu affordance is associated with one of an edit function, a copy function, and a delete function of the one of the saved first-type avatars.
According to some embodiments, the electronic device detects selection of (e.g., detects a gesture directed to) a first affordance (e.g., 692b) associated with a copy function (e.g., a touch gesture at a location on the touch screen display corresponding to the "copy" affordance). In response to detecting selection of the first affordance, the electronic device generates a copied version of one of the saved avatars (e.g., 695) and displays the copied version in an avatar editing user interface (e.g., 694) (e.g., after the "copy" affordance is selected, the selected avatar is copied and the copied version of the avatar is then displayed in the avatar editing user interface with avatar characteristics that match the selected one of the saved avatars). In some embodiments, the copied avatar may be edited in an avatar editing user interface (e.g., 652, 694, 801) and then saved in a library (e.g., 686) after editing. In some embodiments, after the copied avatar is saved, it is displayed in the avatar library at a location adjacent to the selected one of the saved avatars (e.g., immediately adjacent to the avatar from which it was copied, or at the next sequential location immediately following that avatar's location in the sequence).
According to some embodiments, the electronic device detects selection of (e.g., detects a gesture directed to) a second affordance (e.g., 692a) associated with an editing function (e.g., a touch gesture at a location on the touch screen display corresponding to the "edit" affordance). In response to detecting the gesture to the second affordance, the electronic device displays an avatar editing user interface (e.g., 652, 694, 801) that includes one of the saved avatars (e.g., the avatar selected when the editing function is selected).
According to some embodiments, the electronic device detects selection of (e.g., detects a gesture directed to) a third affordance (e.g., 692c) associated with a delete function (e.g., a touch gesture at a location on the touch screen display corresponding to the "delete" affordance). In response to detecting selection of the third affordance (e.g., detecting a gesture directed thereto), the electronic device removes the displayed one of the saved avatars from the avatar library user interface.
According to some embodiments, the electronic device (e.g., 600) displays the respective avatar (e.g., 670, 671) of the first type or the second type, including displaying, via the display device (e.g., 601), the respective avatar moving in a direction on the avatar navigation user interface (e.g., 671 moves on interface 668 in figs. 6M-6O) according to the magnitude and direction of the detected gesture (e.g., 676). In accordance with a determination that the respective avatar reaches a first location (e.g., a first threshold location determined based on the magnitude and direction of the detected gesture; e.g., a location associated with selecting the respective avatar), the electronic device displays an animation of the respective avatar transitioning from a non-interactive state (e.g., 671 in fig. 6O) (e.g., a static state in which the respective avatar has a predetermined appearance that does not change in response to detecting a change in the user's face) to an interactive state (e.g., 671 in fig. 6P) (e.g., a dynamic state in which the respective avatar changes in response to detecting a change in the user's face) having an appearance determined based on the detected face (e.g., 673) (e.g., a face detected within the field of view of one or more cameras of the electronic device). In some embodiments, the animation of the respective avatar transitioning from the non-interactive state to the interactive state includes gradually changing the facial expression, position, orientation, and/or size of the avatar from a neutral facial expression, position, orientation, and/or size to a facial expression, position, orientation, and/or size based on tracking of the user's face/head. The appearance of the avatar provides feedback to the user indicating the movement of the avatar according to the magnitude and direction of the user's gesture. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that the respective avatar (e.g., 671) reaches a second location (e.g., 671 in FIG. 6Q) (e.g., a second threshold location determined based on the magnitude and direction of the detected gesture; e.g., a location associated with swiping across the respective avatar (e.g., to select a different avatar)), the electronic device (e.g., 600) displays an animation of the respective avatar transitioning from an interactive state (e.g., 671 in FIG. 6P) having an appearance determined based on the detected face (e.g., 673) to a non-interactive state (e.g., 671 in FIG. 6R) having a predetermined appearance. In some embodiments, the animation of the respective avatar transitioning from the interactive state to the non-interactive state includes gradually changing the facial expression, position, orientation, and/or size of the avatar from a facial expression, position, orientation, and/or size based on the user's face/head tracking to a neutral facial expression, position, orientation, and/or size. The animation of the avatar transitioning from the interactive state to the non-interactive state is displayed to provide visual feedback of the non-interactive appearance of the avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
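A minimal Swift sketch of the threshold logic described in the two preceding paragraphs follows, assuming two hypothetical distance thresholds for the "first location" and "second location"; the values, names, and gesture representation are illustrative only.

```swift
import Foundation

// Illustrative thresholds only; actual values and gesture handling are not specified by the disclosure.
enum AvatarState { case nonInteractive, interactive }

struct AvatarCarouselGesture {
    let selectionThreshold: Double = 40.0    // "first location": avatar becomes selected
    let swipePastThreshold: Double = 160.0   // "second location": avatar is swiped past

    // Maps the gesture's signed translation (magnitude and direction) to an avatar state.
    func state(forTranslation translation: Double) -> AvatarState {
        let distance = abs(translation)
        if distance >= swipePastThreshold { return .nonInteractive }  // swiped away: predetermined appearance
        if distance >= selectionThreshold { return .interactive }     // reached selection: face-tracked appearance
        return .nonInteractive
    }
}

let gesture = AvatarCarouselGesture()
let state = gesture.state(forTranslation: -80)   // e.g., a drag of 80 points results in .interactive
```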
It should be noted that the details of the process described above with reference to method 700 (e.g., fig. 7) also apply in a similar manner to the methods described below. For example, method 900 optionally includes one or more features of the various methods described above with reference to method 700. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented in accordance with the method 900 described below with reference to FIG. 9. As further examples, methods 1000, 1100, 1200, and 1400 optionally include one or more characteristics of the various methods described above with reference to method 700. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented according to the methods described below with reference to fig. 10-12. As another example, in some embodiments, the navigation user interface invokes a process for modifying the virtual avatar, which may be implemented according to the method described below with reference to fig. 14A-14B. For the sake of brevity, these details are not repeated in the following.
Figs. 8A-8CF illustrate exemplary user interfaces for displaying avatar editing user interfaces, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in figs. 9-12.
In fig. 8A, the device 600 displays an avatar editing user interface 801 having an avatar display area 803 and an avatar characteristics area 804. The avatar display area 803 is visually distinguished from the avatar characteristics area 804 by, for example, a line 806, and includes an avatar 805 and an avatar feature area 807. Avatar feature area 807 includes avatar feature affordances 809 that correspond to avatar features that may be edited in the avatar editing user interface. The avatar characteristics area 804 includes the displayed avatar feature characteristics and corresponding feature options. The avatar feature characteristics and feature options correspond to the currently selected avatar feature in avatar feature region 807. In fig. 8A, the device displays an avatar head affordance 809a located directly below the avatar 805, which is highlighted to indicate that the avatar head feature is currently selected for editing. Because the avatar head feature is selected for editing, device 600 displays avatar feature characteristics and feature options corresponding to the avatar head feature in avatar characteristics area 804.
The device displays the avatar 805 to represent the current state of the avatar based on modifications that have been made to the avatar characteristics when the avatar is edited in the avatar editing user interface. In the embodiment shown in fig. 8A, avatar 805 is displayed with several default (e.g., preset or predetermined) features because the avatar features are not selected or modified. For example, the avatar 805 has a predetermined facial structure (e.g., predetermined facial shape, nose, lips, eyes, ears, and eyebrows). The avatar 805 also has no selected skin tone, hair or hairstyle, facial hair (except eyebrows), and no accessories. When the device receives input to update the avatar characteristics, the device 600 updates the avatar 805 to reflect the selected update to the avatar characteristics.
In some embodiments, prior to detecting selection or modification of any avatar feature, device 600 displays avatar 805 oscillating between two or more colors (e.g., yellow and white), which may indicate to the user that device 600 is ready to receive input to modify avatar 805. In some embodiments, prior to detecting selection or modification of any avatar feature, device 600 displays avatar 805 without tracking the user's face (e.g., displays avatar 805, but does not modify the avatar in response to changes in the user's face). In some embodiments, upon detecting an input on avatar editing user interface 801 (e.g., selecting a color option, scrolling a displayed feature option, a gesture on the avatar, a gesture on an affordance (e.g., a "start face tracking" affordance), etc.), device 600 stops oscillating the display of avatar 805 and/or starts tracking the user's face (e.g., modifies avatar 805 in response to detecting a change in the user's face).
The avatar characteristics area 804 includes a displayed list of avatar feature characteristics corresponding to the currently selected avatar feature (e.g., the avatar head). Each avatar feature characteristic includes a set of selectable feature options that may be selected to modify the corresponding characteristic of the selected avatar feature. More specifically, each selectable feature option in the set of selectable feature options corresponds to a value for modifying the corresponding characteristic of the selected avatar feature. The changed characteristic is then reflected in the displayed avatar 805, as well as in other displayed avatar feature options that include representations of that characteristic.
The device 600 displays avatar feature options to represent available options for modifying the characteristics of the currently selected avatar feature. The displayed avatar feature options may be dynamically updated based on other selected avatar feature options. Other selected avatar feature options include a different avatar feature option corresponding to the same currently selected avatar feature and a selected avatar feature option corresponding to a different avatar feature (e.g., an avatar feature that is not currently selected, such as a previously modified avatar feature). For example, changes in characteristics of the avatar head features (e.g., selecting skin tones) may be displayed in avatar feature options corresponding to the avatar head features (e.g., facial shape feature options) and, optionally, in avatar feature options corresponding to other avatar features such as hair or eyes. In this example, in response to detecting a selection of a skin tone, the device updates the currently displayed avatar feature option (e.g., facial shape option) to display the selected skin tone. Additionally, when a different avatar characteristic (e.g., eye) is selected, the avatar characteristic options displayed for the eye also include the selected skin tone.
As shown in fig. 8A, avatar head affordance 809a is selected, whereby device 600 displays avatar feature characteristics and feature options corresponding to the avatar head feature. The displayed avatar feature characteristics include a skin tone characteristic 808 and a facial shape characteristic 810 (the avatar head feature may include other avatar feature characteristics). The skin tone characteristic 808 includes color options 812 that can be selected to modify the color of the avatar head feature (e.g., the skin tone of the avatar 805). When the device detects selection of a particular color option 812, the device modifies the skin tone color of the currently selected avatar feature (e.g., the avatar head in fig. 8A) to match the selected color. In some embodiments, selection of a skin tone color option 812 also affects the color of another avatar feature such as a facial hair feature (e.g., eyebrows, beard, etc.), eye color, or lip color. In the embodiment shown in fig. 8A, the skin tone characteristic 808 includes a set of color options 812 that is expanded relative to the set of color options initially displayed for other avatar characteristics (see, e.g., hair color characteristic 838 in fig. 8P). In some embodiments, the expanded set of color options 812 is non-scrollable in the horizontal direction (but may be scrollable in the vertical direction) and does not include a selectable option for expanding the set of color options (such as color selector option 886 in fig. 8AX). The facial shape characteristic 810 includes facial shape options 814 that can be selected to modify the facial shape of the avatar 805.
In some embodiments, the selected feature option is indicated by displaying a border around the selected feature option. For example, a border 818 displayed around the face shape option 814a indicates that the face shape option 814a is the currently selected avatar face shape. Accordingly, avatar 805 is shown having the same facial shape (e.g., a rounded chin) as the selected facial shape option 814a. In contrast, because no color option 812 is selected, avatar 805 and facial shape options 814 are shown without a selected skin tone (e.g., with a default or pre-selected skin tone).
In some embodiments, each of the displayed avatar feature characteristics is visually distinguished from other, adjacent avatar feature characteristics. In some embodiments, the avatar feature characteristics are visually distinguished by their respective headers. For example, in fig. 8A, the skin tone characteristic 808 is visually distinguished from the face shape characteristic 810 by a face shape header 816. In some embodiments, the avatar feature characteristics are visually distinguished by other indicators, such as horizontal lines extending fully or partially across the width of display 601.
In fig. 8B, device 600 detects selection of color option 812a in response to receiving input 820 (e.g., a touch input on display 601) on color option 812a.
In fig. 8C, device 600 indicates that color option 812a is selected by displaying a border 824 around color option 812a. The device 600 also modifies the avatar 805 and facial shape options 814 to have a skin tone that matches the selected color option 812a. Additionally, device 600 displays a skin tone slider 822 that can be adjusted in a manner similar to that discussed below with respect to hair color slider 856 (see figs. 8W-8AC). The color slider 822 is used to adjust the gradient of the selected color option 812a. In some embodiments, the gradient may represent various characteristics of the selected color option (e.g., skin tone option 812a), such as shading, saturation, undertone, mid-tone, highlight, warmth, or hue. In some embodiments, the particular characteristic is determined based on the selected skin tone color. For example, in some embodiments, if a lighter skin color is selected, the property adjusted with the slider is shading, and when a darker skin color is selected, the property adjusted with the slider is saturation. In response to detecting an adjustment to the gradient of the selected color option (e.g., selected color option 812a), the device 600 modifies the skin tone of the avatar (e.g., avatar 805), any feature options that display skin tones (e.g., face shape options 814), and any avatar features affected by the skin tone color.
In some embodiments, the selected skin tone affects the color or color attributes (e.g., base color, hue, brightness, shading, saturation, mid-tone, highlight, warmth, undertone, etc.) of other avatar features (e.g., hair, lips, etc.). For example, the avatar hair or facial hair (e.g., eyebrows or beard) may have an undertone determined based on the selected skin tone. For example, darker skin tones produce hair with a darker undertone (e.g., a brown or black undertone), while lighter skin tones produce hair with a lighter undertone (e.g., a gold or red undertone). These undertones may affect the color applied to a particular avatar feature, as discussed in more detail below. Similarly, the avatar lip color may have an undertone based on the selected skin tone. For example, the avatar lip color may have a color based on the selected skin tone and, optionally, a different color such as red or pink. In some embodiments, the different color is combined with the skin tone color by an amount determined based on the adjustment setting of the color slider 822. For example, adjusting slider 822 in one direction increases the contribution of the different color to the avatar lip color (e.g., the amount of red or pink in the avatar lip color), and adjusting slider 822 in the other direction decreases the contribution of the different color to the avatar lip color.
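As an illustrative sketch only, the mixing described above can be modeled as a linear blend between a skin-tone-derived lip color and a tint color, with the blend amount driven by the slider setting; the RGB values and names below are assumptions, not values from the disclosure.

```swift
import Foundation

// Illustrative color blend; not the actual color model used by the device.
struct RGB {
    var r, g, b: Double   // components in 0...1

    func blended(with other: RGB, amount: Double) -> RGB {
        let t = min(max(amount, 0), 1)          // clamp the slider-derived amount
        return RGB(r: r + (other.r - r) * t,
                   g: g + (other.g - g) * t,
                   b: b + (other.b - b) * t)
    }
}

let skinTone = RGB(r: 0.87, g: 0.72, b: 0.60)   // example selected skin tone
let tint     = RGB(r: 0.80, g: 0.25, b: 0.30)   // example red/pink tint for the lips

// Moving the slider in one direction increases the tint contribution;
// moving it the other way reduces it toward the bare skin-tone color.
let lipColor = skinTone.blended(with: tint, amount: 0.35)
```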
In fig. 8C, updating the skin tone of the head features of the avatar includes changing the skin tone of the nose, ears, face, and lips 828 of the avatar. In some embodiments, updating the skin tone of the lips 828 of the avatar includes changing the skin tone of the outer regions 828a of the lips while leaving the inner portions 828b of the lips unchanged. In some embodiments, the device 600 also updates the color of other avatar features that are distinct from the skin of the avatar, such as the eyebrows 827 and the avatar eyes 829. In some embodiments, the updated color of these other features (e.g., the eyebrows 827 and the eyes 829) is based on the selected skin tone. For example, the color of the eyebrows 827 is updated to a color determined to be darker than the selected skin tone color option 812a. These updates are displayed in avatar 805 and in other avatar feature options such as face shape options 814.
In fig. 8D, device 600 detects selection of facial shape option 814b in response to receiving input 826 on facial shape option 814b (e.g., a touch input on display 601). In response, in fig. 8E, device 600 removes skin tone slider 822 from avatar characteristics area 804, indicates the selected facial shape option by moving border 818 from facial shape option 814a to facial shape option 814b, and modifies avatar 805 to transition from the round facial shape of option 814a to the different facial shape (e.g., pointed chin, narrow cheeks) represented by facial shape option 814b. Thus, the avatar 805 is shown transitioning from having a rounded chin as shown in fig. 8D to having a pointed chin with narrow cheeks as shown in fig. 8E.
In some embodiments, after selecting the feature option, device 600 displays an animation to guide the user in selecting the next avatar feature in avatar feature area 807. For example, in FIG. 8F, the device 600 highlights the avatar hair affordance 809b, prompting the user to select the avatar hair affordance 809b to advance to the next avatar feature for editing. In some embodiments, the animation is only displayed when the device first displays the avatar editing user interface.
In fig. 8G, device 600 detects selection of avatar hair affordance 809b in response to receiving input 830 on avatar hair affordance 809b (e.g., a touch input on display 601). In response to detecting selection of the avatar hair affordance 809b, the device 600 updates the avatar display area 803 to indicate that the avatar hair feature has been selected, and updates the avatar characteristics area 804 to display avatar feature characteristics and feature options corresponding to the avatar hair feature. This transition is illustrated in figs. 8H-8O.
In some embodiments, the corresponding avatar characteristic affordance 809 may be selected by a tap gesture on the corresponding avatar characteristic affordance 809 or by a swipe gesture on the avatar characteristic region 807 (or, alternatively, a swipe gesture anywhere on the avatar display region 803 other than the avatar 805). In such an embodiment, the swipe gesture may horizontally scroll avatar feature area 807 to position the desired avatar feature affordance 809 directly below avatar 805. In response to detecting lift-off of the touch, device 600 selects an avatar feature affordance (including highlighting the affordance) that is located directly below avatar 805 after scrolling is complete.
As shown in fig. 8H, the device 600 updates the avatar display area 803 by highlighting the avatar hair affordance 809b and displaying the avatar feature affordance 809 shifted to the left so that the avatar hair affordance 809b is located directly below the avatar 805. The avatar-eye affordance 809c moves to the left (relative to its position in fig. 8G), and the avatar lip affordance 809d is now displayed at the rightmost edge of the display 601.
The device 600 updates the avatar characteristics area 804 by ceasing to display the avatar feature characteristics (e.g., skin tone characteristic 808 and facial shape characteristic 810) corresponding to the avatar head feature and displaying new avatar feature characteristics and feature options corresponding to the newly selected avatar feature. In some embodiments, such as shown in figs. 8H-8O, the device 600 displays the new avatar feature characteristics and feature options with a cascading effect, in which the avatar feature characteristics corresponding to the avatar hair feature are displayed in the avatar characteristics area 804 in order from left to right, and in order from top to bottom (e.g., from a first avatar feature characteristic at the top of the avatar characteristics area 804 to a last avatar feature characteristic at the bottom of the avatar characteristics area 804).
For example, fig. 8H and 8I show hair color options 832 appearing on display 601, with animation of the hair color options sliding across display 601 from left to right. Before filling all of the hair color options 832, the device 600 begins to display an animation (ending in fig. 8L) of the hair texture option 834 appearing on the display 601 below (starting in fig. 8J) the hair color options 832, one at a time and in a left-to-right order. After filling the hair texture option 834, the device 600 displays hair style options 836 (beginning in fig. 8M) below the hair texture option 834 on the display 601, appearing one at a time and in a left-to-right sequence (ending in fig. 8O). It should be appreciated that the continuous filling of the respective set of feature options may begin before the previous set of feature options is filled (e.g., similar to the timing of the hair texture option 834 with respect to the hair color option 832), or after the previous set of feature options is filled (e.g., similar to the timing of the hair style option 836 with respect to the hair texture option 834).
As described above, some feature options of the selected avatar feature are displayed with a sliding cascade effect, as discussed with respect to the appearance of the hair color options 832, while other feature options of the selected avatar feature are displayed with an iterative-fill cascade effect, as discussed with respect to the hair texture options 834 and the hair style options 836. Either of these cascade effects may be used to display the population of feature options according to any of the embodiments discussed herein.
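For illustration, the staggered timing underlying such a cascade can be sketched as follows, with per-option and per-group delays as assumed values; the function and parameter names are hypothetical.

```swift
import Foundation

// Hypothetical timing sketch: option groups fill top to bottom, and options within
// a group appear one at a time, left to right; a group may begin before the
// previous group finishes filling.
func cascadeDelays(groupSizes: [Int],
                   perOptionDelay: TimeInterval = 0.05,
                   groupOverlap: TimeInterval = 0.1) -> [[TimeInterval]] {
    var delays: [[TimeInterval]] = []
    var groupStart: TimeInterval = 0
    for size in groupSizes {
        let group = (0..<size).map { groupStart + TimeInterval($0) * perOptionDelay }
        delays.append(group)
        let groupDuration = TimeInterval(size) * perOptionDelay
        groupStart += max(groupDuration - groupOverlap, 0)   // next group starts slightly early
    }
    return delays
}

// e.g., hair color options, hair texture options, hair style options
let schedule = cascadeDelays(groupSizes: [9, 3, 9])
```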
In fig. 8P, the device 600 displays a hair color characteristic 838 with hair color options 832, a hair texture characteristic 840 with hair texture options 834 and a texture header 841, and a hair style characteristic 842 with hair style options 836 and a hair style header 843. No hair color option is selected in fig. 8P. However, the straight hair texture option 834a and the bald hair style option 836a are selected, as indicated by borders 844 and 846, respectively. The avatar 805 is shown having a bald hairstyle; however, the straight hair texture is not discernible on the avatar 805 because of the bald hairstyle. The straight hair texture is instead reflected in the pixie hair style option 836b and the bob hair style option 836c, which show different hair styles with a straight hair texture.
As shown in fig. 8P, the device 600 detects selection of the pixie hair style option 836b in response to receiving an input 848 on the pixie hair style option 836b (e.g., a touch input on the display 601). In fig. 8Q, the device 600 displays avatar 805 with avatar hair 851 having the pixie hair style corresponding to the pixie hair style option 836b selected in fig. 8P and a straight hair texture corresponding to the selected straight hair texture option 834a. The device 600 also displays border 846 moving from the bald hair style option 836a to the pixie hair style option 836b to provide visual confirmation of the detected selection of the pixie hair style option 836b.
In some embodiments, the feature options include enlarged (e.g., expanded) views of the respective avatar feature corresponding to the feature options. These are typically feature options for which a close-up view of the avatar feature is useful to illustrate details sufficient to distinguish between the different options. For example, in fig. 8R, device 600 shows hair texture options 834 corresponding to hair texture characteristic 840. Each hair texture option 834 shows an expanded view of the avatar hair so that the different hair textures represented by the hair texture options 834 are better shown and the user can easily distinguish them. Straight hair texture option 834a shows an expanded view of the avatar hair with a straight texture. Wavy hair texture option 834b shows an expanded view of the avatar hair with a wavy texture. Curly hair texture option 834c shows an expanded view of the avatar hair with a curly texture.
As shown in fig. 8R, device 600 detects selection of wavy hair texture option 834b in response to receiving input 850 (e.g., a touch input on display 601) on wavy hair texture option 834b.
Figs. 8S-8U illustrate the device 600 updating the avatar 805 and the corresponding hair style options 836 in response to detecting selection of the wavy hair texture option 834b in fig. 8R. For example, avatar hair 851 transitions from an appearance having a straight hair texture in fig. 8R to an appearance having a wavy hair texture in fig. 8S.
Additionally, in embodiments discussed herein, the feature options showing an avatar feature affected by the selection of a different feature option are updated to reflect that selection. For example, in figs. 8S-8U, the pixie hair style option 836b and the bob hair style option 836c illustrate avatar hair (specifically, avatar hair affected by selection of the wavy hair texture option 834b), so each of the hair styles illustrated in hair style options 836b and 836c is updated to show the respective hair style option transitioning from an appearance having the straight hair texture in fig. 8R to a different appearance having the selected wavy hair texture. The bald hair style option 836a does not display avatar hair. Thus, the bald hair style option 836a is not shown transitioning to a different appearance.
In some embodiments, when a feature option is selected for a particular avatar feature characteristic, the feature options shown for that characteristic do not change in response to the selection, while the feature options for other avatar feature characteristics do change. For example, in figs. 8S-8U, when the wavy hair texture option 834b is selected, the hair texture options 834 do not change, but the hair style options 836 do change. Similarly, as shown in figs. 8AN-8AQ (discussed below), when a different hair style option is selected, the hair style options do not change, but options for other characteristics (e.g., the hair texture options) do change (e.g., the changed hair texture options 834 in fig. 8AQ).
The transitions of the pixie hair style option 836b and the bob hair style option 836c are illustrated in figs. 8S-8U. The pixie hair style option 836b is shown transitioning from the appearance in fig. 8R with a straight hair texture to a different appearance in figs. 8S and 8T with the selected wavy hair texture. This transition includes enlarging the displayed pixie hair style option 836b, and optionally the border 846, during the transition from the straight hair texture to the wavy hair texture (see enlarged pixie hair style option 836b' and enlarged border 846' in fig. 8S), and then returning the pixie hair style option 836b to its original size in fig. 8T after the transition to the appearance with the wavy hair texture is complete. The bob hair style option 836c is shown transitioning from the appearance in fig. 8S with a straight hair texture to a different appearance in figs. 8T and 8U with the selected wavy hair texture. This transition includes enlarging the displayed bob hair style option 836c during the transition from the straight hair texture to the wavy hair texture (see enlarged bob hair style option 836c' in fig. 8T), and then returning the bob hair style option 836c to its original size in fig. 8U after the transition to the appearance with the wavy hair texture is complete.
After the transition of the pixie hair style option 836b is complete (e.g., after the enlarged pixie hair style option 836b' is displayed returning to its original size in fig. 8T), the bob hair style option 836c makes its transition. The display effect of momentarily enlarging a transitioning feature option, combined with the transitions being completed in a display sequence, gives the appearance of a ripple effect, providing the user with a specific visual indication that a particular feature option is transitioning based on the user's selection of a different feature option (e.g., a feature option other than the one being changed). The visual effect also indicates precisely to the user when a respective feature option is in the process of transitioning (e.g., when the feature option is enlarged) and provides an indication of when the transition is complete (e.g., when the feature option returns to its smaller original size). This also presents a visual confirmation to the user that a particular feature option is not affected by the selection, because such feature options (if any) are not shown with a momentary magnification.
In fig. 8V, device 600 detects selection of hair color option 832a in response to receiving input 852 on hair color option 832a (e.g., a touch input on display 601).
In fig. 8W, device 600 indicates that color option 832a is selected by displaying a border 854 around hair color option 832 a. The device 600 also modifies the avatar hair 851, eyebrow 827, hair texture options 834, and hair style options 836 (e.g., 836b and 836c) to match hair color to the selected hair color option 832 a. In some embodiments, the color (or color attribute) of the eyebrow 827 is determined based on a combination of skin color and hair color. For example, the tone of the eyebrow 827 may be determined based on a selected hair color, and the brightness of the eyebrow 827 may be determined based on a selected skin color. The transition between the hair texture option 834 and the hair style option 836 may be displayed according to the ripple effect appearance discussed above. For example, the hair texture options 834 a-834 c transition in sequence (e.g., with an instant zoom in), followed by sequentially transitioning the hair style options 836b and 836c (e.g., with an instant zoom in).
The device 600 also displays a hair color slider 856 for adjusting the gradient of the selected hair color option 832 a. The hair color slider 856 includes a selector affordance 858 (also referred to herein as a scroll bar) having an initial (e.g., default) position within a gradient region 857 (also referred to herein as a track) that extends between a high gradient value 857a and a low gradient value 857b of the selected color 832 a. The selector affordance 858 may be moved within a region 857 (e.g., according to the magnitude and direction of the input on the slider) to adjust the gradient of the selected color 832a based on the position of the selector affordance 858 within the gradient region 857. Adjusting the gradient of the selected hair color option 832a causes the device to modify any avatar feature having the selected color 832a (including displaying the feature options of such avatar feature as well as the color of the selected hair color option (e.g., 832a changes in fig. 8AY and 8AZ as the affordance 858 moves in area 857)). Unless otherwise noted, when reference is made herein to modifying a particular color option, the modification also applies to the respective feature associated with the color option and the feature option showing the respective avatar feature.
In some embodiments, the gradient may represent various characteristics of the selected hair color, such as shading, saturation, undertone, mid-tone, highlight, warmth, lightness, or hue. In some embodiments, the gradient may represent the undertone of the avatar hair, which is different from the selected color and, optionally, is based on the selected skin tone of the avatar. The gradient of the undertone can be adjusted by moving the selector affordance 858 within the gradient region 857, which ultimately modifies the appearance of the selected hair color and the avatar hair 851. In some embodiments, the undertone of the hair corresponds to a natural hair color determined based on the selected skin tone. For example, darker skin tones produce hair with a darker undertone (e.g., a brown or black undertone), while lighter skin tones produce hair with a lighter undertone (e.g., a gold or red undertone). Adjusting the gradient of the undertone allows the hair to have not only the appearance of a particular applied color, but also an intensity of that color. For example, for avatar hair having an unnatural selected hair color (e.g., purple), adjusting the undertone to the low gradient value 857b provides little or none of the natural hair color undertone (e.g., brown). This emphasizes the purple hair color, giving the avatar the appearance of heavily applied purple hair dye. Conversely, adjusting the undertone to the high gradient value 857a emphasizes the natural undertone of the hair (or of other avatar features, such as the avatar eyebrows or lips), so that the avatar has the appearance of being lightly colored with purple hair dye. By adjusting the position of the selector affordance 858 along the slider, the user adjusts the gradient that the device 600 applies to the undertone of the selected color 832a.
In some embodiments, the selector affordance 858 includes a color that represents the gradient of the currently selected color 832a. At its initial position, the selector affordance 858 has the same color as the selected color 832a when the selected color 832a is initially displayed. In other words, the selected color 832a has an initial (e.g., default or pre-selected) color when it is first selected (e.g., see fig. 8V). When hair color slider 856 is first displayed, selector affordance 858 has an initial position centered in region 857 and a color corresponding to the initial color of selected color 832a. Moving the selector affordance 858 from its initial position to a different position causes the device to modify, based on the new position of the selector affordance 858, the gradient of the selected color 832a, the corresponding color of the selector affordance 858, and any avatar feature having the selected color 832a (including any feature options displaying such an avatar feature). In the embodiment shown in fig. 8X, moving the selector affordance 858 toward the high gradient value 857a darkens the selected color 832a, and moving the selector affordance 858 toward the low gradient value 857b lightens the selected color 832a.
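A sketch of one way such a mapping could work follows, under the assumption that the selector's normalized track position simply scales the lightness of the selected color around its default appearance at the center of the track; the factor and names are illustrative only.

```swift
import Foundation

// Hypothetical mapping from normalized slider position to an adjusted color.
// 0 corresponds to the low gradient value (lighter), 1 to the high gradient value (darker),
// and 0.5 to the default, unmodified color at the center of the track.
struct HairColor {
    var r, g, b: Double   // components in 0...1

    func adjusted(forSliderPosition position: Double) -> HairColor {
        let p = min(max(position, 0), 1)
        let factor = 1.0 + (0.5 - p) * 0.8          // > 1 lightens, < 1 darkens
        func clamp(_ v: Double) -> Double { min(max(v, 0), 1) }
        return HairColor(r: clamp(r * factor), g: clamp(g * factor), b: clamp(b * factor))
    }
}

let purple  = HairColor(r: 0.55, g: 0.30, b: 0.75)
let darker  = purple.adjusted(forSliderPosition: 0.9)   // toward the high gradient value: darker
let lighter = purple.adjusted(forSliderPosition: 0.1)   // toward the low gradient value: lighter
```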
For example, in fig. 8X-8Z, device 600 detects touch and drag input 860 on selector affordance 858 and, in response, displays movement of selector affordance 858 within area 857 based on the drag movement of input 860 and updates the color of selector affordance 858, selected color 832a, as well as avatar hair 851 and the color of any avatar hair displayed in feature options (e.g., 834 a-834 c, 836b, and 836c) based on the location of selector affordance 858 within area 857.
In fig. 8X, input 860 has an initial position 860' corresponding to the selector affordance 858 at the center of region 857. Because the selector affordance 858 is in its initial (e.g., default) position, the device 600 does not modify the color of the selected color 832a, the selector affordance 858, or any other displayed feature having the selected color 832a. In some embodiments, when the selector affordance 858 is in its default position (or when the selector affordance 858 moves from a different position to the default position (e.g., the center of the slider 856)), in response to detecting an input (e.g., input 860) on the selector affordance 858, the device 600 generates feedback such as haptic feedback (e.g., a tactile output), optionally with or without an audio output. This feedback notifies the user when the selector affordance 858 is in its initial (e.g., default) position corresponding to the initial color (e.g., value) of the selected color 832a.
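A minimal sketch of such a "detent" at the default position follows; the feedback closure stands in for whatever haptic or audio output the device actually generates, and the tolerance value is an assumption.

```swift
import Foundation

// Illustrative detent behavior at the slider's default center position: feedback
// fires when the selector arrives at (or returns to) that position.
final class ColorSliderModel {
    private(set) var position: Double = 0.5          // normalized; 0.5 is the default
    private let defaultPosition: Double = 0.5
    private let tolerance: Double = 0.01
    var onReturnToDefault: () -> Void = {}           // e.g., trigger a haptic and optional audio

    func update(position newValue: Double) {
        let wasAtDefault = abs(position - defaultPosition) <= tolerance
        position = min(max(newValue, 0), 1)
        let isAtDefault = abs(position - defaultPosition) <= tolerance
        if isAtDefault && !wasAtDefault {
            onReturnToDefault()
        }
    }
}
```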
In fig. 8Y, the device 600 detects that the input 860 has moved to a second position 860'' and, in response, displays the selector affordance 858 at a second position corresponding to the second position 860''. The second position of the selector affordance 858 along the region 857 corresponds to a greater gradient (e.g., a darker shading or a greater undertone) of the selected color 832a than the gradient shown in fig. 8X. Thus, the device 600 displays the selector affordance 858 with the greater gradient based on the relative position of the selector affordance 858 within the region 857. The device 600 also updates the selected color 832a and updates any features that display the selected color 832a (e.g., avatar hair 851, hair texture options 834a-834c, and hair style options 836b and 836c) to have the greater gradient (e.g., shading or undertone).
In fig. 8Z, device 600 detects movement of input 860 to a third position 860''' and, in response, displays selector affordance 858 at a third position corresponding to the third position 860'''. The third position of the selector affordance 858 corresponds to a greater gradient (e.g., a darker shading or a greater undertone) than that shown in fig. 8Y. Thus, the device 600 displays the selector affordance 858 having the greater gradient based on the relative position of the selector affordance 858 within region 857. The device 600 also updates the selected color 832a and updates the features that display the selected color 832a (e.g., avatar hair 851, hair texture options 834a-834c, and hair style options 836b and 836c) to have the greater gradient (e.g., a darker shading or a greater undertone), as shown in fig. 8Z.
In fig. 8AA, device 600 detects termination of input 860 (e.g., lift-off of the touch-and-drag input) while selector affordance 858 is at a position (e.g., 858') corresponding to position 860''' shown in fig. 8Z. Accordingly, the device 600 maintains the selected gradient of the selected color 832a (and of any features having the selected color 832a) upon termination of the input 860. In some embodiments (e.g., see figs. 8AS and 8AT discussed below), the device 600 retains the modified hair color slider 856 (including the position of the selector affordance 858) and the modified gradient of the selected color 832a even after a different color option 832 is selected.
In fig. 8AB, device 600 detects selection of hair color option 832b in response to receiving input 861 (e.g., a touch input on display 601) on hair color option 832b.
In fig. 8AC, device 600 indicates that color option 832b is selected by displaying a border 862 around hair color option 832b. Device 600 also modifies the displayed hair color slider 856 by moving the selector affordance 858 to a default position for the selected hair color option 832b and updating the color of the selector affordance 858 to the color corresponding to the selected hair color option 832b. The device 600 also modifies the avatar hair 851, hair texture options 834, and hair style options 836 (e.g., 836b and 836c) to match the hair color to the selected hair color option 832b. The transitions of the hair texture options 834 and hair style options 836 are displayed according to the ripple effect appearance discussed above. For example, the hair texture options 834a-834c transition in sequence (e.g., with a momentary magnification), followed by the hair style options 836b and 836c transitioning in sequence (e.g., with a momentary magnification).
In figs. 8AD-8AL, device 600 detects input 864, which is a touch-and-drag gesture on display 601 whose initial touch corresponds to a location within avatar characteristics area 804. In response to detecting that the input 864 moves in a vertical direction, the device 600 scrolls the avatar feature characteristics and corresponding feature options displayed in the avatar characteristics area 804 based on the direction of movement of the input 864 (e.g., based on the direction of the drag). In addition, the device 600 resizes the avatar display area 803 (including the displayed avatar 805 and, optionally, the avatar feature area 807) and the avatar characteristics area 804 based on the drag direction and the current state (e.g., size) of the avatar display area 803 and the avatar characteristics area 804.
For example, figs. 8AD-8AF illustrate the avatar display area 803 and avatar 805 transitioning (e.g., compressing) from the initial fully expanded state in fig. 8AD to the compressed state in fig. 8AF in response to detecting the input 864 moving in an upward direction (e.g., in a direction toward the avatar display area 803). As the avatar display area 803 transitions, the device 600 displays the avatar characteristics area 804 transitioning (e.g., expanding) from the initial state in fig. 8AD to the fully expanded state in fig. 8AF. Fig. 8AE shows the avatar display area 803 (including the avatar 805) and the avatar characteristics area 804 each having a corresponding intermediate state (e.g., size) when the relative position of the input 864 is between the corresponding positions shown in figs. 8AD and 8AF. Thus, in response to the drag gesture in the upward direction, the device 600 continuously condenses the avatar display area 803 and the avatar 805 while expanding the avatar characteristics area 804 (and moving the line 806 upward) until the avatar display area 803 and the avatar 805 reach the compressed state and the avatar characteristics area 804 reaches the fully expanded state. When the avatar display area 803 and avatar 805 are in the compressed state, the device 600 no longer condenses the avatar display area 803 and avatar 805, or further expands the avatar characteristics area 804, in response to further movement of the drag gesture in the upward direction (or in response to a subsequent upward drag gesture). Instead, the device 600 continues to scroll the avatar feature characteristics and feature options in response to further movement of the drag gesture in the upward direction (or in response to a subsequent upward drag gesture on the avatar characteristics area 804 when the avatar display area 803 is in the compressed state) (see figs. 8AG-8AH, illustrating additional hair style options 836d-836f of the hair style characteristic 842, and the hair color characteristic 838, including the hair color options 832 and the hair color slider 856, moving off the displayed portion of the avatar characteristics area 804).
In contrast, the device 600 expands the avatar display area 803 from the compressed (or intermediate) state in response to detecting movement of the input 864 in a downward direction, as shown in figs. 8AH-8AJ. While the avatar display area 803 expands, the device 600 displays the avatar characteristics area 804 transitioning (e.g., shrinking) from the expanded state in fig. 8AH (or an intermediate state in fig. 8AI) to its original state (e.g., size) shown in fig. 8AJ. By expanding the avatar display area 803 in response to the downward movement of the input 864, the device 600 enlarges the avatar 805 so that the user can more easily see the avatar 805 without having to scroll back to the initial position of the avatar feature characteristics and feature options in the avatar characteristics area 804 (see, e.g., fig. 8AD).
By condensing the avatar display area 803, the device 600 displays a larger avatar characteristics area 804 in which to display additional avatar feature characteristics and/or feature options. The avatar feature characteristics and feature options do not change in size when the avatar characteristics area 804 expands or contracts. Thus, as the avatar characteristics area 804 expands, the device 600 displays more avatar feature characteristics and/or feature options; and when the avatar characteristics area 804 shrinks, fewer avatar feature characteristics and/or feature options are displayed.
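The resizing-then-scrolling behavior described in the preceding paragraphs can be sketched, for illustration only, as a drag delta that is first absorbed by resizing the avatar display area between its compressed and expanded heights, with any remainder used to scroll the characteristics area; all dimensions and names below are assumptions.

```swift
import Foundation

// Hypothetical geometry sketch: an upward drag shrinks the avatar display area toward
// its compressed height (expanding the characteristics area); a downward drag grows it
// back; once fully compressed, further upward drags only scroll.
struct AvatarDisplayLayout {
    let expandedHeight: Double = 300
    let compressedHeight: Double = 120
    private(set) var height: Double = 300

    // Returns the portion of the drag that was not absorbed by resizing,
    // i.e., the remainder that should scroll the characteristics area.
    mutating func applyVerticalDrag(_ delta: Double) -> Double {
        let proposed = height + delta                 // delta < 0 for an upward drag
        let clamped = min(max(proposed, compressedHeight), expandedHeight)
        let consumed = clamped - height
        height = clamped
        return delta - consumed
    }

    var characteristicsAreaExtraHeight: Double { expandedHeight - height }
}

var layout = AvatarDisplayLayout()
let scrollRemainder = layout.applyVerticalDrag(-250)  // -180 resizes, -70 scrolls
```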
In some embodiments, when the device 600 displays scrolling of the avatar feature characteristics (e.g., 808, 810, 838, 840, 842) and their corresponding feature options (e.g., 812, 814, 832, 834, 836), the device 600 maintains display of the corresponding header of an avatar feature characteristic at the top of the avatar characteristics area 804 when a portion of that avatar feature characteristic is partially scrolled off the top edge of the avatar characteristics area 804 (e.g., beyond the line 806). For example, as shown in fig. 8AH, the device 600 "freezes" the texture header 841 at the top of the avatar characteristics area 804 (e.g., directly below line 806) when the hair texture characteristic 840 is scrolled off the displayed portion of the avatar characteristics area 804. The "frozen" texture header 841 remains displayed at the top of the avatar characteristics area 804 until the entire hair texture characteristic 840 is scrolled off the avatar characteristics area 804 (e.g., in an upward direction), or until the entire hair texture characteristic 840 is below the line 806 (e.g., when no portion of the hair texture characteristic 840 is scrolled off the top edge of the avatar characteristics area 804). In some embodiments, when the "frozen" header is released from its position below line 806, it is replaced with the header of an adjacent avatar feature characteristic (e.g., see hair style header 843 in fig. 8AL). In some embodiments, the frozen header (e.g., texture header 841) is visually distinguished from the feature options in the avatar characteristics area 804. For example, as shown in fig. 8AH, texture header 841 is visually distinguished by lines 806 and 867.
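For illustration, the "frozen" header selection can be sketched as follows, assuming each characteristic section is described by the offsets of its first and last rows; the section data and names are hypothetical.

```swift
import Foundation

// Illustrative sketch: the header of whichever characteristic section straddles the
// top edge of the characteristics area stays pinned until the section fully scrolls past.
struct Section {
    let header: String
    let top: Double      // y-offset of the section's first row
    let bottom: Double   // y-offset just past the section's last row
}

func pinnedHeader(sections: [Section], scrollOffset: Double) -> String? {
    // A header is pinned while the section's top has scrolled past the top edge
    // but the section has not yet fully left the visible area.
    for section in sections where section.top < scrollOffset && section.bottom > scrollOffset {
        return section.header
    }
    return nil
}

let sections = [
    Section(header: "Hair Color",   top: 0,   bottom: 220),
    Section(header: "Hair Texture", top: 220, bottom: 400),
    Section(header: "Hairstyle",    top: 400, bottom: 900),
]
let header = pinnedHeader(sections: sections, scrollOffset: 260)   // "Hair Texture"
```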
In fig. 8AK and 8AL, the device 600 detects that the input 864 moves in the upward direction (after moving downward as shown in fig. 8AI and 8 AJ), and condenses the avatar display area 803 and the avatar 805, while expanding the avatar characteristic area 804, and moves the hair style header 843 to the edge of the avatar characteristic area 804 (replacing the texture header 841), as described above. Movement of the input 864 also scrolls the content displayed in the avatar characteristics area 804 to display additional hair style options 836 g-836 i.
In fig. 8AM, device 600 detects termination (e.g., lift-off) of input 864. The apparatus 600 displays the avatar display area 803 and the avatar 805 in a compressed state, and the avatar property area 804 in a fully expanded state. The avatar characteristic area 804 shows hair style characteristics 842 with hair style options 836 a-836 i (each with a wavy hair texture based on the previously selected wavy hair texture option 834b in fig. 8R, and a hair color based on the selected hair color option 832b in fig. 8 AB), and a hair style header 843 located below line 806 and visually distinct from hair style options 836 a-836 c by line 867. The pixie hair style 836b is shown in the selected state, as indicated by a border 846 located around the pixie hair style 836b, and as indicated by the avatar hair 851 with the pixie hair style (and wavy hair texture) displayed on the avatar 805. The avatar hair affordance 809b is highlighted to indicate that the avatar hair feature is currently selected for editing.
In some embodiments, in response to detecting selection of a feature option, avatar display area 803 and avatar 805 transition directly from the compressed state to the fully expanded state. For example, in fig. 8AN, device 600 detects input 869 (e.g., a touch input on display 601) at a location corresponding to spiky hair style option 836f. In response to detecting selection of the spiky hair style option 836f, the device 600 displays the avatar display area 803 and avatar 805 in the fully expanded state in fig. 8AO. The device 600 modifies the avatar hair 851 to match the selected spiky hair style option 836f (with the wavy hair texture based on the previous selection in fig. 8R and the hair color based on the hair color option 832b selected in fig. 8AB). The device 600 also indicates the selection of the spiky hair style option 836f by removing the border 846 from the pixie hair style option 836b and displaying the border 846 around the spiky hair style option 836f.
In fig. 8AP and 8AQ, device 600 detects input 870, which is a touch and drag gesture on display 601, the initial touch corresponding to a location within avatar characteristics area 804. The device 600 detects that the input 870 is moving in a downward direction and, in response, scrolls the feature options (e.g., hair style options 836 a-836 i) and hair style header 843 in a downward direction to display a portion of the hair color features 838, the hair texture features 840, and the hair style features 842, as shown in fig. 8 AQ.
In fig. 8AR, device 600 detects termination (e.g., lift-off) of input 870. The device 600 displays a hair color feature 838 with a hair color slider 856 and hair color options 832 (including a selected hair color option 832b indicated by a border 862), a hair texture characteristic 840 with a texture header 841 and hair texture options 834 a-834 c (including a selected wavy hair texture option 834b indicated by a border 844), and a hair style characteristic 842 with a hair style header 843 and hair style options 836 a-836 c.
In fig. 8AS, device 600 detects an input 871 (e.g., a touch input) on hair color option 832a. In response, device 600 displays avatar hair 851 transitioning from the selected color 832b in fig. 8AS to the color corresponding to the selected color 832a in fig. 8AT (which is the color resulting from the modifications discussed above with respect to figs. 8X-8AA). Because only the selected hair color changes with the input 871, the avatar hair 851 still has the spiky hair style corresponding to the selected spiky hair style option 836f and the wavy hair texture corresponding to the selected wavy hair texture option 834b. In addition, device 600 displays the feature options that include hair color, each of which transitions from the displayed state in fig. 8AS to the displayed state in fig. 8AT. Accordingly, the device 600 displays the hair texture options 834a-834c and hair style options 836b and 836c transitioning from the hair color corresponding to hair color option 832b to the hair color corresponding to hair color option 832a. The transition may include the ripple effect appearance discussed above (e.g., sequential transitions with a momentary magnification of the transitioning feature options).
As shown in fig. 8AT, in response to detecting input 871, device 600 also displays the hair color slider 856 transitioning back to the previously modified setting for color 832a, which was set in response to input 860 discussed above with reference to figs. 8X-8AA. This includes displaying the selector affordance 858 transitioning to the same modified color as color option 832a, and to the same modified position within the gradient region 857 that it had in fig. 8AA, immediately prior to the device 600 detecting the input 861 on the hair color option 832b.
In fig. 8AU, the device 600 detects an input 872 (e.g., a touch input) on the bob hairstyle option 836c, which is shown partially off-screen in the avatar characteristics area 804. In response, the device 600 displays avatar hair 851 transitioning from the spiky hairstyle to a bob hairstyle corresponding to the selected bob hairstyle option 836c, as shown in fig. 8AV. The device 600 also displays a slight scroll of the avatar characteristics area 804 to display a complete representation of the selected bob hairstyle option 836c (and remove the hair color options 832), and a border 846 is displayed around the bob hairstyle option 836c to indicate selection of this feature option. Because input 872 does not select any other feature option, avatar features other than avatar hair 851 are not modified. Additionally, the device 600 does not display any feature option updates, since no other displayed feature option shows a sufficient amount of avatar hair to reflect the selected hairstyle.
In fig. 8AW, the device 600 detects an input 873 (e.g., a touch input) on the avatar lip affordance 809c. In response to detecting the input 873, the device 600 updates the avatar display area 803 as shown in fig. 8AX to indicate that the avatar lip feature is selected (e.g., by thickening and positioning the avatar lip affordance 809c directly below the avatar 805), and updates the avatar characteristics area 804 to display avatar feature characteristics and feature options corresponding to the avatar lip feature. The avatar feature characteristics and feature options shown in figs. 8AX through 8BA correspond to the avatar lip feature. Thus, in response to detecting selection of any such feature option, the device 600 modifies the avatar lips 828 displayed on the avatar 805 and, in some cases, updates the representation of the avatar lips displayed in the feature options, depending on whether the displayed feature options include avatar lips.
As shown in fig. 8AX, the updated avatar characteristics area 804 includes lip color characteristics 875 with various lip color options and lip shape characteristics 878 with lip shape options 880. The lip color options include natural lip color options 882, unnatural lip color options 884, and a color selector option 886. The natural lip color options 882 represent natural human lip colors, which in some embodiments are determined based on a selected skin color of the avatar 805. In some embodiments, the unnatural lip color options 884 are not determined based on the selected skin color, but instead represent colors that may correspond to the color of lipstick, or other colors (e.g., blue, green, etc.) that are not natural colors of human lips. Color selector option 886 is a selectable option for displaying other color selections that may be selected to adjust the color of the avatar lips. In some implementations, the lip color options (e.g., 882, 884, 886) can be scrolled in a horizontal direction in response to an input (e.g., tap, swipe, drag, etc.) on the lip color options. Scrolling the lip color options may display additional lip color options (e.g., 882, 884, 886). In some embodiments, the color selector option 886 is located at the end of the lip color options and is not displayed until the lip color options are scrolled to that end.
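For illustration only, the following short Swift sketch shows one way the dependency between the selected skin color and the natural lip color options 882 could be modeled: each natural option is produced by blending the avatar's skin tone toward a reddish reference hue. The color struct, the reference hue, and the blend weights are assumptions introduced for this sketch and are not taken from the described embodiments.

```swift
// Hypothetical sketch: derive natural lip color options (cf. options 882)
// from a selected skin tone by blending toward a reddish reference hue.
// The color struct, reference hue, and blend weights are illustrative only.
struct Color {
    var r: Double, g: Double, b: Double  // components in 0...1

    func blended(toward other: Color, amount t: Double) -> Color {
        Color(r: r + (other.r - r) * t,
              g: g + (other.g - g) * t,
              b: b + (other.b - b) * t)
    }
}

let referenceLipRed = Color(r: 0.72, g: 0.25, b: 0.28)  // assumed reference hue

func naturalLipColors(forSkinTone skin: Color, count: Int = 5) -> [Color] {
    (0..<count).map { i in
        // Blend more strongly toward the reference hue for later options.
        let t = 0.25 + 0.5 * Double(i) / Double(max(count - 1, 1))
        return skin.blended(toward: referenceLipRed, amount: t)
    }
}

let options = naturalLipColors(forSkinTone: Color(r: 0.85, g: 0.66, b: 0.55))
print(options.count)  // 5 derived natural lip color options
```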
In fig. 8AX, device 600 detects input 887 (e.g., a touch gesture on display 601) on color selector option 886. In fig. 8AY, in response to detecting the input 887, the device 600 displays an expanded color palette 888 presenting various color options, including natural lip color options and unnatural lip color options. In some embodiments, the expanded palette 888 may be displayed as a pop-up menu that appears over a portion of the avatar characteristics area 804, including over any displayed avatar feature characteristics and feature options.
In fig. 8AY, device 600 detects input 889 (e.g., a touch gesture on display 601) on the selected lip color option 890.
In fig. 8AZ, in response to detecting selection of the selected lip color option 890, the device 600 displays updated avatar lips 828 that match the color of the selected lip color option 890. In addition, the lip shape options (e.g., lip shape option 880a) are updated (e.g., with the ripple effect appearance discussed herein) to include the selected lip color option 890. The device 600 also updates one of the lip color options (represented in fig. 8AX as lip color option 884a) to match the selected lip color option 890 and displays a border 891 around the updated lip color option 884a.
The device 600 also displays a lip color slider 892 that can be controlled in a manner similar to the other color sliders described herein. The lip color slider 892 includes a selector affordance 893 that can be positioned along the lip color slider to adjust a gradient of the selected lip color 884a from a high gradient value at 892a to a low gradient value at 892b. In some embodiments, the gradient can represent various characteristics of the selected lip color, such as shading, saturation, undertone, midtones, highlights, warmth, or hue. In some embodiments, the gradient represents an undertone of the avatar's lips that is different from the selected color and is optionally based on the selected skin tone of the avatar. The gradient of the undertone can be adjusted by moving the selector affordance 893 along the lip color slider 892, which modifies the appearance of the selected lip color and of the avatar lips 828. For example, the undertone can be red, or some other color corresponding to a natural skin tone (e.g., brown), whereas the selected lip color (e.g., selected lip color 884a) can be any color, including an unnatural color. Adjusting the undertone gradient gives the avatar lips not only the appearance of a particular color applied to the lips, but also an intensity of that color. For example, for avatar lips having an unnatural selected lip color (e.g., green), adjusting the undertone to the low gradient value 892b applies little or no undertone of the natural lip color (e.g., red). This emphasizes the green lip color, giving the avatar the appearance of heavily applied green lipstick or unnaturally colored lips. Conversely, adjusting the undertone to the high gradient value 892a emphasizes the undertone of the lips, giving the avatar the appearance of lightly applied green lipstick. By adjusting the position of the selector affordance 893 along the slider, the user can adjust the gradient that the device 600 applies to the undertone of the selected color 884a.
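As a purely illustrative sketch (the linear-mix model, the type names, and the example color values below are assumptions, not the described implementation), the undertone gradient controlled by selector affordance 893 can be thought of as a blend weight between the selected lip color and a skin-based undertone:

```swift
// Hypothetical sketch of the undertone gradient controlled by selector
// affordance 893: gradient 0 (end 892b) applies almost none of the natural
// undertone so the selected color dominates (heavily applied lipstick look);
// gradient 1 (end 892a) emphasizes the undertone (lightly applied look).
// Names and the linear-mix model are assumptions, not the patented method.
typealias RGB = (r: Double, g: Double, b: Double)

struct LipColorMixer {
    var selectedColor: RGB   // e.g., an unnatural green lip color (cf. 884a)
    var undertone: RGB       // e.g., a red/brown tone based on the avatar's skin

    /// `gradient` in 0...1, driven by the slider position (892b -> 0, 892a -> 1).
    func appliedColor(gradient: Double) -> RGB {
        let t = min(max(gradient, 0), 1)
        return (r: selectedColor.r + (undertone.r - selectedColor.r) * t,
                g: selectedColor.g + (undertone.g - selectedColor.g) * t,
                b: selectedColor.b + (undertone.b - selectedColor.b) * t)
    }
}

let mixer = LipColorMixer(selectedColor: (r: 0.10, g: 0.70, b: 0.20),
                          undertone: (r: 0.70, g: 0.30, b: 0.30))
let heavy = mixer.appliedColor(gradient: 0.0)  // saturated green lipstick look
let light = mixer.appliedColor(gradient: 1.0)  // undertone shows through
print(heavy, light)
```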
In fig. 8BA, device 600 detects input 8100 (e.g., a touch input) on avatar accessory affordance 809d. In response to detecting the input 8100, the device 600 updates the avatar display area 803 as shown in fig. 8BB to indicate that the avatar accessory feature is selected (e.g., by thickening and positioning the avatar accessory affordance 809d directly below the avatar 805), and updates the avatar characteristics area 804 to display avatar feature characteristics and feature options corresponding to the avatar accessory feature. The avatar feature characteristics and feature options shown in figs. 8BA through 8CF correspond to avatar accessory features. Thus, in response to detecting selection of any such feature option, device 600 modifies avatar 805 based on the selected feature option and, in some cases, updates the representation of the avatar accessory displayed in the feature options based on the selected feature option.
As shown in fig. 8BB, the avatar characteristics area 804 includes earring characteristics 8102 with earring options 8104, hat characteristics 8106 with hat options 8108, and eyeglass characteristics 8110 with eyeglass options 8112 (as shown in fig. 8BM). Device 600 displays a border 8114 around earring option 8104a to indicate that earring option 8104a (no earrings) is currently selected. Device 600 also displays a border 8116 around hat option 8108a to indicate that hat option 8108a (no hat) is currently selected. The device 600 displays the avatar 805 without earrings and without a hat based on the selected earring option 8104a and hat option 8108a.
In some embodiments, the feature options may be scrolled horizontally to display more feature options. For example, in fig. 8BB, device 600 displays earring option 8104d and hat option 8108d partially off the screen, indicating that earring option 8104 and hat option 8108 can scroll horizontally (e.g., in response to a horizontal swipe or touch and drag gesture, as shown in fig. 8 BV-8 BW).
In some embodiments, device 600 displays the feature options to represent the potential appearance of the avatar (e.g., avatar 805) if the corresponding feature option is selected. However, in some embodiments, device 600 displays feature options that do not fully represent the potential appearance of the avatar if the corresponding feature option is selected. For example, device 600 may display a feature option having a representation of an avatar feature with a portion of that avatar feature omitted. Omitting a portion of the respective avatar feature from the feature option reveals other avatar features in the feature option that would otherwise be occluded by the omitted portion, but the feature option then does not fully represent the potential appearance of the avatar if that feature option were selected. For example, in fig. 8BB, device 600 displays earring options 8104 with a representation of avatar hair (e.g., avatar hair 851), but omits a portion of the avatar hair, thereby displaying a representation of the avatar ears and, in some cases, a representation of the earrings. Portions of the avatar hair are omitted from earring options 8104 to display an unobstructed view of the various earring options that may be selected. However, the earring options 8104 do not represent the potential appearance of avatar 805 if an earring option is selected, because the currently selected avatar hairstyle (e.g., as indicated by avatar hair 851) covers the avatar's ears (and potentially any selected avatar earrings). Thus, in some embodiments, certain avatar feature options do not affect the position of other avatar features when selected. For example, an avatar accessory option corresponding to a nose ring does not result in modification (e.g., adjustments to the geometry of a feature due to the final placement of the avatar feature on the avatar) of other avatar features such as the avatar hair. Similarly, avatar accessory options corresponding to certain earrings do not result in modification of the avatar hair.
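By way of illustration only, the following Swift sketch models one way an earring option thumbnail could omit the hair layer that covers the avatar's ears while the full avatar keeps that layer; the layer names and rendering model are assumptions made for this sketch.

```swift
// Hypothetical sketch of how an earring option thumbnail (cf. options 8104)
// might omit the hair layer that covers the ears so the earrings are visible,
// even though the full avatar keeps that layer. Layer names and the rendering
// model are assumptions for illustration only.
enum AvatarLayer: Hashable {
    case face, ears, earrings, hairTop, hairOverEars, hat, glasses
}

/// Layers used when rendering the full avatar 805.
let fullAvatarLayers: [AvatarLayer] = [.face, .ears, .earrings, .hairOverEars, .hairTop]

/// Layers used when rendering an earring option thumbnail: the portion of the
/// hair that would occlude the ears (and any earring) is omitted.
func earringThumbnailLayers(from layers: [AvatarLayer]) -> [AvatarLayer] {
    layers.filter { $0 != .hairOverEars }
}

let thumbnail = earringThumbnailLayers(from: fullAvatarLayers)
print(thumbnail.contains(.earrings))      // true: earrings are visible
print(thumbnail.contains(.hairOverEars))  // false: occluding hair is omitted
```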
Device 600 also displays hat characteristic 8106 with hat options 8108. The displayed hat options 8108 represent potential changes to the avatar 805 if the corresponding hat option is selected. In addition to modifying the avatar 805 to include the selected hat, such potential changes include reshaping of the avatar hair 851 and lighting effects such as shadows cast on the face of the avatar 805. In fig. 8BB, reshaping of the avatar hair 851 is represented in hat options 8108b through 8108c (additional hat options 8108d and 8108e are shown in fig. 8BW and discussed in more detail below). For example, hat option 8108b shows a hat having a hat line 8118 (e.g., the location where the hat meets the hair on the avatar's head (the opening at the bottom of the hat)) that is narrower than the displayed width of the hair in hat option 8108b. Accordingly, device 600 displays the hair with a reshaped (e.g., modified) appearance in which the avatar hair is tucked in at hat line 8118, giving the realistic appearance that the hat compresses the avatar's hair to conform to the avatar's head. The same effect is also displayed in the cowboy hat option 8108c and the headband option 8108d. In the headband option 8108d, the device 600 displays the headband cinching the avatar hair, which again reshapes the avatar hair to fit the hat line of the headband, while also giving the appearance that the avatar hair is simultaneously tucked under the headband at the hat line and protruding above the top of the headband. The avatar 805 with the headband option 8108d selected is shown in figs. 8BY through 8CB, discussed in more detail below.
The hat option 8108c also shows a potential change to the avatar 805 that would display a lighting effect on the avatar 805. For example, the cowboy hat option 8108c includes a large hat (e.g., a cowboy hat) that casts a shadow 8120 on the avatar's forehead below the brim of the cowboy hat. By displaying hat option 8108c with a cowboy hat, a reshaped hairline, and shadow 8120, device 600 indicates that selecting hat option 8108c will result in modifications to avatar 805 that include displaying the cowboy hat on avatar 805, reshaping the hairline of avatar hair 851, and casting a shadow on the forehead of avatar 805 (see, e.g., fig. 8CC, showing avatar 805 with a peaked cap, reshaped hair, and a shadow on the forehead).
Figs. 8BC and 8BD illustrate device 600 detecting an input 8122 (e.g., a touch input) selecting earring option 8104c and indicating selection of earring option 8104c by moving border 8114 from earring option 8104a to earring option 8104c. Fig. 8BD also shows device 600 displaying avatar 805 with an avatar earring 8125 corresponding to the earring displayed in the selected earring option 8104c. The earring 8125 is partially covered by the avatar hair 851 positioned over the avatar's ears. However, the earring 8125 is large enough that it extends beyond the avatar hair 851 and is therefore shown partially protruding from under the avatar hair 851. Device 600 also updates hat options 8108 to display earrings applied to the displayed hat options 8108, as shown in fig. 8BD.
In some embodiments, device 600 detects a user's face in the field of view of a camera (e.g., camera 602) and modifies (e.g., continuously) the appearance of avatar 805 based on detected changes in the user's face (e.g., changes in the user's facial pose, changes in the relative positions of facial features, etc.). For example, in fig. 8BE, device 600 displays real-time modifications to the facial features of avatar 805 based on detected corresponding changes in the user's face. In fig. 8BE, device 600 detects (e.g., using camera 602) that the user has tilted their head to the side, is blinking, and is smiling. The device 600 modifies the avatar 805 in real time to mirror the detected user movement.
In fig. 8BF, the device 600 detects (e.g., using the camera 602) that the user returns to a neutral position where the user does not tilt their head, smile, or blink. The device modifies the avatar 805 in real-time to mirror the user's return to a neutral position.
In some embodiments, the device 600 modifies selected avatar features, such as the avatar features represented in avatar 805, based on a physical model applied to the corresponding selected avatar feature. For example, in fig. 8BF, when device 600 displays avatar 805 returning to the neutral position, avatar earring 8125 is shown swinging to reflect the physics of the tilting motion of the user's head. It should be understood that the physical model is not limited to earring 8125; the physical model can be applied to other selected avatar features.
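As an illustrative sketch only (the damped-spring model, constants, and names below are assumptions rather than the patented physical model), the swinging of a dangling feature such as earring 8125 can be approximated by driving its swing angle toward the current head tilt with a spring and a damping term:

```swift
// Hypothetical sketch of a physical model for a dangling avatar feature such
// as earring 8125: a damped spring drives the earring's swing angle toward
// the head's current tilt. Constants and the integration scheme are
// illustrative assumptions, not the patented model.
struct DanglingFeature {
    var angle: Double = 0          // current swing angle (radians)
    var velocity: Double = 0       // angular velocity
    let stiffness: Double = 40     // simulated spring stiffness
    let damping: Double = 6        // simulated friction/damping

    /// Advance the simulation by `dt` seconds toward the head tilt angle.
    mutating func step(headTilt: Double, dt: Double) {
        let acceleration = -stiffness * (angle - headTilt) - damping * velocity
        velocity += acceleration * dt
        angle += velocity * dt
    }
}

var earring = DanglingFeature()
// Head tilts to the side, then returns to neutral; the earring overshoots and swings.
for frame in 0..<120 {
    let headTilt = frame < 30 ? 0.4 : 0.0   // radians
    earring.step(headTilt: headTilt, dt: 1.0 / 60.0)
}
print(earring.angle)  // settles back toward 0 after swinging
```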
In some embodiments, device 600 modifies the orientation and/or magnification of the display of avatar 805 in response to detecting an input on avatar display area 803, or more specifically, in some embodiments, detecting an input on avatar 805. For example, in fig. 8BG, device 600 detects an input 8128 (e.g., a touch and drag gesture or a swipe gesture) on avatar display area 803. In response to detecting movement of input 8128 from the initial position in fig. 8BG to the second position in fig. 8BH, device 600 displays a rotation of avatar 805 corresponding to the movement of input 8128. In fig. 8BH, the device 600 shows a final side view of the avatar 805.
In some embodiments, device 600 displays the selected avatar features moving based on an applied physical model in response to a detected input on avatar display area 803 (or avatar 805). For example, in fig. 8BI, device 600 shows earring 8125 swinging toward the front of the face of avatar 805 in response to the rotation of the displayed avatar 805 in figs. 8BG and 8BH.
In fig. 8BJ, device 600 detects input 8130 (e.g., a spread gesture) on avatar 805 and, in response, enlarges avatar 805 based on movement of input 8130 (e.g., the length of the spread gesture), as shown in fig. 8 BK.
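For illustration, a hypothetical sketch of how gestures on the avatar display area might be dispatched to zoom and rotation adjustments is shown below; the gesture enum, the clamping range, and the scale factors are assumptions made for the sketch.

```swift
// Hypothetical sketch of how inputs on the avatar display area might map to
// display changes: a pinch/spread gesture adjusts the zoom level, while a
// drag/swipe gesture adjusts the avatar's orientation (cf. inputs 8128 and
// 8130). The gesture enum and scale factors are assumptions for illustration.
enum AvatarGesture {
    case pinch(scale: Double)          // > 1 spreads (zoom in), < 1 pinches (zoom out)
    case drag(horizontalDelta: Double) // points of horizontal movement
}

struct AvatarDisplayState {
    var zoom: Double = 1.0       // clamped zoom level
    var rotation: Double = 0.0   // rotation about the vertical axis, in radians

    mutating func apply(_ gesture: AvatarGesture) {
        switch gesture {
        case .pinch(let scale):
            zoom = min(max(zoom * scale, 1.0), 4.0)
        case .drag(let dx):
            rotation += dx * 0.01   // assumed points-to-radians factor
        }
    }
}

var display = AvatarDisplayState()
display.apply(.drag(horizontalDelta: 120))  // rotate to a side view
display.apply(.pinch(scale: 1.8))           // enlarge the avatar
print(display.zoom, display.rotation)
```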
In figs. 8BL and 8BM, the device 600 detects an input 8132 (e.g., a touch and drag gesture or a swipe gesture) on the avatar characteristics area 804 and scrolls the displayed avatar characteristics (e.g., 8102, 8106, 8110) and feature options (e.g., 8104, 8108, 8112) based on the direction of movement of the input 8132. The avatar 805 remains displayed with an enlarged appearance. In some embodiments, the enlarged appearance allows the user to better view the avatar 805 and, in some embodiments, to apply various accessories to the avatar 805, such as, for example, foundation, tattoos, scars, freckles, birthmarks, and other customized features or accessories of the avatar.
In fig. 8BN, the device 600 detects termination of the input 8132 (e.g., liftoff of the touch and drag gesture) and displays the avatar characteristics area 804 with hat characteristics 8106 having hat options 8108a through 8108d and eyeglass characteristics 8110 having eyeglass options 8112a through 8112d. The avatar 805 remains displayed with an enlarged appearance. Device 600 displays a border 8134 around glasses option 8112a to indicate that glasses option 8112a (no glasses) is currently selected. Device 600 displays eyewear options 8112b through 8112d with lighting effects (e.g., light reflections 8136 on the eyewear lenses).
In fig. 8BO, device 600 detects an input 8138 (e.g., a touch gesture) on eyewear option 8112b. As shown in fig. 8BP, in response to detecting input 8138 on glasses option 8112b, device 600 removes border 8134 from glasses option 8112a and displays it around glasses option 8112b. In addition, the device 600 modifies the avatar 805 and hat options 8108 (shown in fig. 8BV) to include avatar glasses 8140 corresponding to the glasses style displayed in the selected glasses option 8112b. The device 600 also modifies the appearance of the avatar 805 based on the selected avatar feature option (e.g., the glasses 8140).
For example, as shown in figs. 8BO and 8BP, the device 600 adjusts (e.g., modifies) the position of a portion 8145 of the avatar hair 851 in response to detecting selection of the avatar glasses option 8112b, and displays avatar glasses 8140 positioned on the face of the avatar 805. The modified portion 8145 of the avatar hair 851 is displayed pushed aside to accommodate the frames 8140-1 of the avatar glasses 8140. In addition, the device 600 creates a lighting effect on the avatar 805 by displaying a shadow 8147 adjacent to the modified portion 8145 of the avatar hair 851 and a shadow 8142 under the eyes of the avatar. The device 600 displays shadows 8142 and 8147 to illustrate the lighting effects caused by adding the avatar glasses 8140 to the avatar 805. The lighting effect may also be shown by displaying reflections 8150 (similar to light reflections 8136) on the lenses of the avatar glasses 8140 (see fig. 8BT). In some embodiments, the lighting effect is determined based on the positions of the avatar 805, glasses 8140, and hair 851 relative to a light source (e.g., a light source detected in the field of view of the camera 602 or a simulated light source).
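A minimal, purely illustrative sketch of light-source-dependent shadow placement is given below; the 2D vector math, coordinate convention, and constants are assumptions and are not intended to describe the actual rendering used for shadows 8142 and 8147.

```swift
// Illustrative sketch: offset a shadow (cf. shadow 8142) away from the light
// direction so that the shadow moves when the head or the light source moves.
// The 2D model, coordinate convention, and constants are assumptions.
struct Vec2 { var x: Double; var y: Double }

/// `towardLight` points from the glasses toward the light source; the shadow
/// falls the opposite way, scaled by `castDistance`.
func shadowOffset(towardLight: Vec2, castDistance: Double) -> Vec2 {
    let length = (towardLight.x * towardLight.x + towardLight.y * towardLight.y).squareRoot()
    guard length > 0 else { return Vec2(x: 0, y: -castDistance) }
    return Vec2(x: -towardLight.x / length * castDistance,
                y: -towardLight.y / length * castDistance)
}

// With +y up, a light above and to the left of the face casts the shadow
// down and to the right of the glasses.
let offset = shadowOffset(towardLight: Vec2(x: -0.5, y: 0.8), castDistance: 6)
print(offset.x, offset.y)  // positive x, negative y
```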
In response to detecting the input 8138, the device 600 also expands the eyewear characteristics 8110 to display color options for the frames 8140-1 and lenses 8140-2 (see, e.g., fig. 8BT) of the selected eyewear 8140. Frame color options 894 include various color options (including an expandable color selector option 894-2) that may be selected to change the color of the eyeglass frames 8140-1. The lens color options 896 include various color options that may be selected to change aspects of the lenses 8140-2 of the eyewear 8140. In some embodiments, the frame color options 894 include an expandable color selector option (e.g., 894-2). In some embodiments, the lens color options 896 do not include an expandable color selector option.
In FIG. 8BQ, device 600 detects input 895 on frame color option 894-1 and displays frame color slider 897 in FIG. 8 BR. The frame color slider 897 is similar to the other color sliders discussed herein and may be used to adjust the color (or other aspects) of the eyeglass frame 8140-1 in accordance with the various color slider embodiments discussed herein. In some embodiments, selecting the frame color option 894 also changes the color of the frame 8140-1 of the eyewear 8140. In FIG. 8BQ, color option 894-1 corresponds to the current color of frame 8140-1 that has been previously selected and modified (e.g., using frame color slider 897). Thus, when device 600 displays frame color slider 897, the color slider is shown with the previously modified settings (e.g., selector affordance 897-1 is located at the leftmost end of track 897-2, and color option 894-1 matches the color setting of slider 897), as shown in FIG. 8 BQ.
In fig. 8BS, the device 600 detects an input 898 on the lens color option 896-1 and displays a lens color slider 899 (in addition to the frame color slider 897), as shown in fig. 8BT. The device 600 also zooms out and rotates the displayed view of the avatar 805, showing the eyeglass lenses 8140-2. The avatar's eyes can be seen faintly through the lenses 8140-2, and a reflection 8150 is shown on the lenses 8140-2. In some embodiments, when a first slider (e.g., slider 897) is displayed and the device 600 detects selection of a color option (e.g., 896-1) associated with a feature different from that of the first slider, the device hides the first slider and displays a second slider (e.g., slider 899) for the selected color option of the different feature.
The lens color slider 899 is similar to other color sliders discussed herein and can be used to adjust the color (or other aspects) of the eyeglass lens 8140-2 according to various slider embodiments discussed herein. In the embodiment shown in fig. 8BT, the lens color slider 899 controls the opacity of the lens 8140-2 (although it may be used to control or adjust the colors or other color attributes discussed herein). In response to detecting movement of selector affordance 899-1 along track 899-2, device 600 modifies the opacity of lens 8140-2. For example, device 600 increases the opacity of lens 8140-2 as selector affordance 899-1 is moved toward end 899-3. As selector affordance 899-1 is moved toward end 899-4, device 600 decreases the opacity of lens 8140-2. As shown in FIG. 8BT, lens 8140-2 has an opacity that corresponds to the mid-position of selector affordance 899-1 in track 899-2. When the selector affordance 899-1 is moved to end 899-4, lens 8140-2 has little or no opacity, as shown in FIG. 8 BU.
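For illustration only, a hypothetical sketch of the opacity behavior described above is shown below; the linear mapping, the track length, and the type names are assumptions rather than the described implementation.

```swift
// Hypothetical sketch of the lens color slider 899 controlling lens opacity:
// the selector affordance's position along the track maps linearly to an
// opacity value, with end 899-4 giving little or no opacity and end 899-3
// giving full opacity. The mapping and range are illustrative assumptions.
struct OpacitySlider {
    let trackLength: Double            // length of track 899-2 in points
    var selectorPosition: Double       // 0 (end 899-4) ... trackLength (end 899-3)

    var opacity: Double {
        guard trackLength > 0 else { return 0 }
        return min(max(selectorPosition / trackLength, 0), 1)
    }
}

var slider = OpacitySlider(trackLength: 200, selectorPosition: 100)
print(slider.opacity)        // 0.5: the mid-track position shown in fig. 8BT
slider.selectorPosition = 0
print(slider.opacity)        // 0.0: little or no opacity, as in fig. 8BU
```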
In some embodiments, both the frame color slider 897 and the lens color slider 899 adjust their respective avatar features in the same manner. For example, when the selector affordance 897-1 moves to the left, the frame color slider 897 changes the color option 894-1 from a cooler color to a warmer color; and when the selector affordance 899-1 moves to the left, the lens color slider 899 changes the color option 896-1 from a cooler tint to a warmer tint. As another example, the frame color slider 897 modifies the color option 894-1 by increasing a first color value (e.g., red) when the selector affordance 897-1 moves left and by decreasing a second color value (e.g., green) when the selector affordance 897-1 moves right; and the lens color slider 899 modifies the color option 896-1 by increasing a first color value when the selector affordance 899-1 moves to the left and by decreasing a second color value when the selector affordance 899-1 moves to the right.
In fig. 8BU, the device 600 detects a vertical scroll gesture 8157 and vertically scrolls the avatar characteristics area 804 to display hat characteristics 8106 including hat options 8108, where the hat options are updated to include the glasses according to the selection discussed above.
In fig. 8BV, the device 600 detects a horizontal scroll gesture 8158 and horizontally scrolls the hat options 8108 to display the headband option 8108d and the peaked cap option 8108e in fig. 8BW.
In fig. 8BX, the device 600 detects an input 8159 on the headband option 8108d and, in fig. 8BY, displays the updated avatar 805 including the headband 8160 corresponding to the selected headband option 8108d. The headband 8160 secures the avatar hair 851 to the avatar's head and reshapes the hair to fit the hat line 8118 of the headband. This gives the appearance that the avatar hair 851 is simultaneously cinched by the headband (e.g., tucked under the headband) and bulging above the headband 8160 (e.g., above the upper hat line 8118-1) and below it (e.g., below the lower hat line 8118-2).
The avatar 805 is also displayed moving in response to detected changes in the user's facial pose (e.g., detected using camera 602). As the user moves their head, the device 600 modifies the avatar 805 in real time to mirror the user's movements. As the avatar 805 moves, the earring 8125 and avatar hair 851 swing in response to the movement of the avatar's head. In some embodiments, as device 600 modifies avatar 805 to mirror real-time movements of the user, device 600 also modifies the lighting effects on avatar 805, including moving the display positions of reflections 8150 and shadows 8142 based on the relative positions of the modeled light source and the avatar 805 (as well as selected avatar features such as avatar glasses 8140).
For example, in fig. 8BX, the device 600 displays the avatar 805 in its default position (e.g., not enlarged or rotated) and has avatar glasses 8140 with reflections 8150 on the lenses of the avatar glasses 8140 and shadows 8142 on the face of the avatar below the avatar glasses. When the device 600 modifies the avatar 805 in response to movement of the user's face in fig. 8BY, the reflection 8150 moves to a different location in the lens and the shadow (e.g., 8142) moves or disappears on the face (in some embodiments, movement of the hair 851 causes a new shadow to dynamically appear on the face of the avatar).
In some embodiments, the device 600 modifies the physical movement of avatar features (e.g., such as the avatar hair 851) based on features applied to the avatar. For example, as shown in fig. 8BY, the headband 8160 is positioned on the avatar 805, cinching the hair 851 at the lower hat line 8118-2. As the avatar 805 moves, the hair 851 swings out from the lower hat line 8118-2 because the headband 8160 cinches the hair 851 at the lower hat line, thereby restricting its movement. If the headband 8160 were not worn, the avatar hair would swing out from a higher position on the avatar's head, because no headband would secure the avatar hair 851 at the lower position on the avatar's head (e.g., at lower hat line 8118-2).
In some embodiments, if the user's face is not detected in the field of view of the camera (e.g., 602) for a threshold amount of time, the device 600 stops modifying the avatar and displays a prompt indicating that face tracking has stopped and instructing the user how to resume face tracking. For example, fig. 8BZ shows the avatar 805 in a neutral position at the center of the avatar display area 803, with brackets 8162 displayed around the avatar and text 8164 prompting the user to resume face tracking. In some embodiments, the device 600 resumes tracking the user's face (and modifying the avatar 805) in response to detecting various inputs, such as the user lifting the device 600 (e.g., detected with the gyroscope 536, motion sensor 538, accelerometer 534, etc.) or inputs on the display 601. In fig. 8BZ, the device 600 detects a touch input 8166 on the avatar display area 803 and resumes face tracking, as shown in fig. 8CA.
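The pause-and-resume behavior can be summarized, for illustration only, as a small state machine; the 10-second threshold is the example used elsewhere in this description, while the event names and structure below are assumptions made for this sketch.

```swift
// Hypothetical sketch of the face-tracking pause/resume behavior: if no face
// is detected for a threshold amount of time, tracking is paused and a prompt
// is shown; a touch on the avatar display area (or a device-lift event)
// resumes tracking. The threshold, event names, and state machine are
// illustrative assumptions.
import Foundation

enum TrackingState { case tracking, paused }
enum TrackingEvent { case faceDetected, noFace(elapsed: TimeInterval), userInput }

struct FaceTrackingController {
    var state: TrackingState = .tracking
    let pauseThreshold: TimeInterval = 10   // assumed threshold

    /// Returns true when the UI should show the "resume tracking" prompt (cf. 8162/8164).
    mutating func handle(_ event: TrackingEvent) -> Bool {
        switch (state, event) {
        case (.tracking, .noFace(let elapsed)) where elapsed >= pauseThreshold:
            state = .paused          // stop modifying the avatar
            return true              // show brackets and prompt text
        case (.paused, .userInput):
            state = .tracking        // resume modifying the avatar
            return false
        default:
            return state == .paused
        }
    }
}

var controller = FaceTrackingController()
_ = controller.handle(.noFace(elapsed: 12))   // pauses; prompt shown
_ = controller.handle(.userInput)             // touch 8166 resumes tracking
print(controller.state)
```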
In some embodiments, the device 600 modifies some avatar features in response to changes to other avatar features. For example, figs. 8CA and 8CB illustrate how the device 600 modifies the hat options 8108 when a different hairstyle or hair texture is selected for the avatar 805. Fig. 8CA shows the avatar 805 and hat options 8108 when the avatar 805 has short, wavy hair 851-1 (e.g., a short hairstyle and wavy hair texture selected according to embodiments discussed herein). Fig. 8CB shows the avatar 805 and hat options 8108 when the avatar 805 has long, curly hair 851-2 (e.g., a long hairstyle and curly hair texture selected according to embodiments discussed herein). The avatar hair 851-2 has a larger volume than the avatar hair 851-1. When the device 600 modifies the avatar 805 from short, wavy hair 851-1 to long, curly hair 851-2, the device 600 updates the size of the headband 8160 and hat options 8108 based on the changed hair volume, but maintains a common hat line 8118 for all hat options 8108.
For example, in fig. 8CA, hair 851-1 is a smaller, less voluminous hair feature, so device 600 shows a smaller headband 8160 that fits the avatar's head (e.g., the headband 8160 and hat line 8118 have a smaller circumference). When the avatar 805 is updated with avatar hair 851-2, the device 600 increases the size of the headband 8160 to accommodate the increased volume of the hair 851-2 (e.g., the headband 8160 and hat line 8118 have a larger circumference), as shown in fig. 8CB. In addition, since the avatar hair 851-2 is a longer hairstyle, when a hat is displayed on avatar 805, device 600 modifies the hair to protrude farther from hat line 8118 (as compared to hair 851-1 shown in fig. 8CA).
In addition, device 600 updates the displayed feature options based on the changed hair. For example, the hat options 8108 shown in fig. 8CA are smaller than the hat options 8108 shown in fig. 8CB. Thus, the device 600 increases the size of the hat options when the larger hair 851-2 is applied to the avatar 805. Similarly, when the avatar has hair 851-2, a hat applied to avatar 805 is larger than when avatar 805 has hair 851-1 (e.g., as shown in fig. 8CC and fig. 8CD). In some embodiments, despite the change in size of the hat options 8108, all of the hat options 8108 have a common hat line 8118, and all of the hat options 8108 affect the shape of the avatar hair 851 based on the corresponding hat being positioned on the avatar's head, as described above.
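As a rough illustration of this scaling behavior (and not the described implementation), the sketch below derives a hat's circumference and the amount of hair protruding from the hat line from the hair's volume and length, while keeping the hat line itself fixed; the struct names and formulas are assumptions.

```swift
// Hypothetical sketch of how hat options (cf. 8108) might be resized when the
// avatar's hair changes: the hat's circumference grows with hair volume, and
// how far the hair protrudes from the hat line depends on hair length, while
// the hat line itself stays common to all hats. Values and formulas are
// illustrative assumptions.
struct Hairstyle {
    var volume: Double   // 0...1, e.g., short wavy hair 851-1 is low, long curls 851-2 high
    var length: Double   // 0...1
}

struct FittedHat {
    var hatLineHeight: Double      // vertical placement of hat line 8118 (shared)
    var circumference: Double      // grows with hair volume
    var hairProtrusion: Double     // how far hair sticks out from the hat line
}

func fitHat(baseCircumference: Double, to hair: Hairstyle) -> FittedHat {
    FittedHat(hatLineHeight: 100.0,                              // common hat line
              circumference: baseCircumference * (1.0 + 0.4 * hair.volume),
              hairProtrusion: 8.0 + 30.0 * hair.length)
}

let shortWavy = Hairstyle(volume: 0.3, length: 0.2)   // cf. hair 851-1
let longCurly = Hairstyle(volume: 0.9, length: 0.8)   // cf. hair 851-2
print(fitHat(baseCircumference: 120, to: shortWavy).circumference)  // smaller hat
print(fitHat(baseCircumference: 120, to: longCurly).circumference)  // larger hat
```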
In some embodiments, when a different avatar option is selected, the newly selected avatar option is modified based on the avatar features already present on the avatar. For example, in fig. 8CA, the avatar hair 851-1 is modified to accommodate the avatar glasses 8140, as discussed above with respect to fig. 8BP. When the new avatar hair 851-2 is applied to the avatar 805 in fig. 8CB, the new avatar hair 851-2 is modified in a manner similar to the avatar hair 851-1 to accommodate the avatar glasses 8140. As another example, when an avatar hat option 8108 is selected, the size of the selected hat option is determined based on the current state of avatar hair 851 (e.g., hat options 8108 are displayed smaller when avatar 805 has avatar hair 851-1 and larger when avatar 805 has hair 851-2).
In fig. 8CB, the device 600 detects an input 8168 on the peaked cap option 8108e. In fig. 8CC, the device 600 modifies the avatar 805 to include a peaked cap 8170 corresponding to the peaked cap option 8108e. The peaked cap 8170 has the same hat line 8118 as the other hat options (e.g., matching the lower hat line 8118-2 of the headband 8160), with the avatar hair 851-2 extending from the hat line 8118 of the peaked cap 8170. The device 600 also displays a large shadow 8172 under the bill of the peaked cap 8170.
As shown in fig. 8CD, the device 600 returns the avatar 805 to avatar hair 851-1. Because the avatar hair 851-1 is smaller in size than the avatar hair 851-2, the device 600 reduces the size of the peaked cap 8170 and the other displayed hat options 8108. Since hair 851-1 is a shorter hairstyle, device 600 also modifies the avatar hair 851 to protrude less from hat line 8118 than hair 851-2 does in fig. 8CC.
In fig. 8CE, the device 600 detects movement of the user's head and modifies the avatar 805 accordingly (e.g., turning the head and the peaked cap 8170 sideways to match the pose of the user's head). When the avatar 805 turns to the side, the large shadow 8172 moves across the face of the avatar in response to movement of the bill of the peaked cap 8170 relative to the modeled light source, and the reflection 8150 moves to the other side of the lenses 8140-2.
Device 600 also detects an input 8152 (e.g., a touch gesture) on completion affordance 8154. In response, the device 600 closes the avatar editing user interface and displays avatar 805 in the avatar selection area 8156 of an application (e.g., an instant messaging application such as the one discussed above with respect to figs. 6A-6AN), as shown in fig. 8CF. The avatar 805 can then be selected for use in the application (e.g., sent to John).
Fig. 9 is a flow diagram illustrating a method for displaying an avatar editing user interface, according to some embodiments. The method 900 is performed at an apparatus (e.g., 100, 300, 500, 600) having a display device. Some operations in method 900 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 900 provides an intuitive way to display an avatar editing user interface. The method reduces the cognitive burden on the user when managing the avatar, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to modify the characteristics of an avatar faster and more efficiently using an avatar editing user interface conserves power and increases the time between battery charges.
The electronic device displays (902) an avatar editing user interface (e.g., 801) via a display device, including concurrently displaying: an avatar (904) (e.g., 805) having a plurality of avatar features (e.g., avatar hair, facial features (avatar lips, eyes, nose, etc.), accessories (e.g., earrings, sunglasses, a hat)), a first option selection area (904) (e.g., 808) for a respective avatar feature, and a second option selection area (906) (e.g., 810) for the respective avatar feature.
A first option selection region (e.g., 808) (e.g., including a visually distinct region selectable for an option to modify an avatar feature) of a respective avatar feature includes (904) a first set of feature options (e.g., corresponding to displayed representations of available modifications of the avatar feature) corresponding to a set of candidate values for a first characteristic (e.g., facial shape, lip size, hair color, etc.) of the respective (e.g., currently selected) avatar feature. In some examples, the option selection area (e.g., 808, 810) is configured to scroll vertically. In some examples, the feature options include graphical depictions of different feature options that can be selected for customizing aspects of a particular avatar feature. In some examples, the feature options (e.g., 809) are configured to scroll horizontally. In some examples, the option selection area (e.g., 808, 810) is configured to scroll along an axis different from an axis along which the feature option (e.g., 809) is configured to scroll, such as axes perpendicular to each other.
The second option selection area (e.g., 810) for the respective avatar characteristic includes (906) a second set of characteristic options corresponding to a set of candidate values for a second characteristic of the respective (e.g., currently selected) avatar characteristic. The second characteristic of the respective avatar feature is different from the first characteristic of the respective avatar feature.
In response (910) to detecting a selection (e.g., 850) of a feature option (e.g., 834b) in the first set of feature options (e.g., the user selects a "wavy hair" feature option from a "hair texture" characteristic of a "hair" avatar feature), the electronic device changes (912) the appearance of at least one feature option in the second set of feature options (e.g., 810) from a first appearance (e.g., 836b) to a second appearance (e.g., 836b'). In some examples, the displayed feature options show the avatar hairstyle transitioning from a first appearance of the avatar hair (e.g., a state in which the avatar hair has a straight texture) to a second appearance in which the avatar hair has a wavy texture.
Changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance in response to detecting selection of one of the first set of feature options provides feedback to the user regarding the current state of the avatar and available avatar feature options, and provides visual feedback to the user confirming selection of one of the first set of feature options. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, changing the appearance of at least one of the second set of feature options (e.g., 836) from the first appearance (e.g., 836b) to the second appearance (e.g., 836b') includes changing (914) the appearance of at least two of the second set of feature options (e.g., from an appearance corresponding to a first option in the first set of feature options to an appearance corresponding to a second option in the second set of feature options).
According to some embodiments, in response to (910) detecting selection of one of the first set of feature options, the electronic device forgoes (918) changing the appearance of the first set of feature options from a first appearance (e.g., 834) to a second appearance (e.g., 836b'). Forgoing changing the appearance of the first set of feature options from the first appearance (e.g., 834) to the second appearance (e.g., 836b') in response to detecting selection of one of the first set of feature options provides visual feedback to the user indicating that the first set of feature options is not affected or updated in response to detecting selection of one of the first set of feature options. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the second appearance of at least one of the second set of feature options (e.g., 836b) is based on the selected one of the first set of feature options (e.g., 834b) (e.g., the device determines a first characteristic value corresponding to the selected one of the first set of feature options and updates at least one of the second set of feature options based on the first characteristic value).
According to some embodiments, displaying at least one of the second set of feature options (e.g., 836b) changing from the first appearance to the second appearance includes determining that at least one of the second set of feature options includes at least a portion of the avatar feature corresponding to the selected one of the first set of feature options (e.g., when a hair texture option is selected, a plurality of hairstyle options change to display the selected hair texture if those hairstyle options include a representation of hair (e.g., as shown in figs. 8R-8S)). In some embodiments, feature options are not changed if those feature options do not include a portion of the feature changed by the selected feature option. For example, when a hair texture option is selected, the appearance of the "bald" hairstyle option does not change, since the "bald" hairstyle option does not include a representation of the avatar hair.
According to some embodiments, in response to detecting selection of one feature option (e.g., 834b) of the first set of feature options, in accordance with a determination that a second feature option (e.g., 836a) of the second set of feature options does not include at least a second portion of the avatar feature corresponding to the selected one of the first set of feature options, the electronic device maintains the appearance of the second feature option of the second set of feature options (e.g., when the hair color option is selected, the plurality of hairstyle options that include a representation of hair change, but the appearance of the "bald" hairstyle option does not change, because the "bald" hairstyle does not include a representation of hair, as shown in figs. 8V-8AC).
In accordance with a determination that the second one of the second set of feature options does not include at least a second portion of the avatar feature corresponding to the selected one of the first set of feature options, maintaining an appearance of the second one of the second set of feature options provides visual feedback to the user indicating that the second one of the second set of feature options was not affected or updated in response to detecting the selection of the one of the first set of feature options. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
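For illustration only, the conditional update described in the preceding paragraphs can be sketched as follows; the preview type, the `includesHair` flag, and the version counter are assumptions introduced for the sketch, and the "bald" option stands in for any option that does not depict the changed feature.

```swift
// Hypothetical sketch of the conditional update described above: when a
// characteristic value (e.g., a hair texture) is selected, only the feature
// options whose previews include that avatar feature are redrawn; options
// such as a "bald" hairstyle preview are left unchanged. Types are
// illustrative assumptions.
struct FeatureOptionPreview {
    var name: String
    var includesHair: Bool
    var appearanceVersion: Int = 0   // bumped whenever the preview is redrawn
}

func applySelectedHairTexture(to options: inout [FeatureOptionPreview]) {
    for index in options.indices where options[index].includesHair {
        options[index].appearanceVersion += 1   // redraw with the new texture
    }
}

var hairstyleOptions = [
    FeatureOptionPreview(name: "bald", includesHair: false),
    FeatureOptionPreview(name: "pixie", includesHair: true),
    FeatureOptionPreview(name: "bob", includesHair: true),
]
applySelectedHairTexture(to: &hairstyleOptions)
print(hairstyleOptions.map { "\($0.name):\($0.appearanceVersion)" })
// ["bald:0", "pixie:1", "bob:1"]: the bald option keeps its appearance
```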
According to some embodiments, changing the appearance of at least one of the second set of feature options (e.g., 836b) from the first appearance to the second appearance includes displaying an animation of at least one of the second set of feature options changing from the first appearance to the second appearance (e.g., as shown in figs. 8R-8AU). In some embodiments, the animation of a feature option changing appearance includes enlarging the changing feature option, displaying the feature option change (e.g., changing the texture or color of the hair shown in the feature option), and then reducing the changed feature option to its original size. In some embodiments, the animation effect is performed across the feature options in an order (e.g., from top to bottom and from left to right) to give an animated ripple effect as the feature options change (e.g., a first feature option changes before a second feature option in the second set of feature options).
Displaying an animation of at least one of the second set of feature options changing from the first appearance to the second appearance provides feedback to the user regarding a current state of at least one of the second set of feature options and provides visual feedback to the user confirming selection of one of the first set of feature options. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the selected one of the first set of feature options is a selected hair color (e.g., 832a, red) of a first set of hair color options (e.g., 832), and at least one of the second set of feature options includes one or more of a hair length option (e.g., long, medium, short), a hair type option (e.g., 834; curly, straight, wavy, etc.), and a hairstyle option (e.g., 836). According to some embodiments, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance includes changing one or more of the hair length option, the hair type option, and the hairstyle option from a first hair color to the selected hair color (e.g., as shown in figs. 8P-8AV). Changing one or more of the hair length option, the hair type option, and the hairstyle option from the first hair color to the selected hair color provides the user with feedback regarding the current state of the avatar and the hair length, hair type, and hairstyle options, and provides visual feedback to the user confirming selection of the hair color option. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the selected one of the first set of feature options (e.g., 834) is a selected hair type of a first set of hair type options (e.g., curly, straight, wavy), and at least one of the second set of feature options includes one or more of a hair length option (e.g., long, medium, short, etc.) and a hairstyle option (e.g., 836b). According to some embodiments, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance comprises changing one or more of the hair length option and the hairstyle option from a first hair type to the selected hair type. Changing one or more of the hair length option and the hairstyle option from the first hair type to the selected hair type provides feedback to the user regarding the current state of the avatar and the hair length and hairstyle options, and provides visual feedback to the user confirming selection of the hair type option. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the second set of feature options includes a plurality of feature options arranged in an order, wherein a first feature option (e.g., 836a) is ranked before a second feature option (e.g., 836b) in the order, and the second feature option is ranked before a third feature option (e.g., 836c) in the order. According to some embodiments, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance comprises: displaying a first animated transition of a first feature option of the second set of feature options from the first appearance to the second appearance; after displaying at least a portion of the first animated transition of the first feature option to the second appearance, initiating a second animated transition of a second feature option of the second set of feature options from the first appearance to the second appearance; and after displaying at least a portion of the second animated transition of the second feature option to the second appearance, initiating a third animated transition of a third feature option of the second set of feature options from the first appearance to the second appearance. In some embodiments, the first animation transition overlaps the second animation transition, and the second animation transition overlaps the third animation transition. In some embodiments, the first feature option is adjacent to the second feature option, which is adjacent to both the first feature option and the third feature option.
Displaying the first animated transition, then beginning the second animated transition after displaying at least a portion of the first animated transition, then beginning the third animated transition after displaying at least a portion of the second animated transition, provides the user with current state feedback regarding the changed appearance of the first, second, and third feature options in the second set of feature options, and provides visual feedback to the user indicating the order in which the first, second, and third feature options transition. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, changing the appearance of at least one of the second set of feature options from the first appearance to the second appearance comprises: enlarging the size of a first one of the second set of feature options (e.g., 836b'), and then reducing the size of the first one of the second set of feature options (e.g., 836b) (e.g., to its original size); and enlarging the size of a second one of the second set of feature options (e.g., 836c') and then reducing the size of the second one of the second set of feature options (e.g., 836c) (e.g., to its original size). In some embodiments, a second one of the feature options is enlarged (e.g., the transitions of the first and second feature options overlap) before the first one of the feature options is reduced to its original size. Enlarging the size of the first and second ones of the second set of feature options provides the user with feedback regarding the current state of the changed appearance of the first and second ones of the second set of feature options, and provides visual feedback to the user to indicate that the first and second ones of the second set of feature options are changing. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
Reducing the size of the first and second of the second set of feature options provides feedback to the user regarding the current state of the changed appearance of the first and second of the second set of feature options and provides visual feedback to the user indicating when the change is complete. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
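Purely as an illustration of this overlapping, staggered behavior (the stagger, duration, and peak scale values below are assumptions, not parameters from the described embodiments), a schedule for such a ripple could be computed as follows:

```swift
// Hypothetical sketch of the overlapping "ripple" transition: each feature
// option's animation starts a fixed stagger after the previous one (so
// transitions overlap), and each animation momentarily enlarges the option
// before returning it to its original size. Durations, stagger, and scale
// values are illustrative assumptions.
struct RippleKeyframe {
    var time: Double    // seconds from the start of the whole ripple
    var scale: Double   // 1.0 = original size
}

func rippleSchedule(optionCount: Int,
                    stagger: Double = 0.08,
                    duration: Double = 0.30,
                    peakScale: Double = 1.15) -> [[RippleKeyframe]] {
    (0..<optionCount).map { index in
        let start = Double(index) * stagger        // stagger < duration => overlap
        return [
            RippleKeyframe(time: start, scale: 1.0),
            RippleKeyframe(time: start + duration / 2, scale: peakScale),
            RippleKeyframe(time: start + duration, scale: 1.0),
        ]
    }
}

let schedule = rippleSchedule(optionCount: 3)
// Option 2 begins (at 0.08 s) before option 1 finishes (at 0.30 s), and so on.
for (i, frames) in schedule.enumerated() {
    print("option \(i + 1):", frames.map { ($0.time, $0.scale) })
}
```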
According to some embodiments, an electronic device detects a change in a face in a field of view of one or more cameras (e.g., 602) of the electronic device. The electronic device changes the appearance of the avatar (e.g., 805) (e.g., as shown in fig. 8 BD-8 BE) based on the detected facial changes (e.g., in addition to changing the appearance of the second set of feature options). Changing the appearance of the avatar based on the detected facial changes provides the user with an option to control the modification of the virtual avatar without the need for displayed user interface control (e.g., touch control) elements. Providing additional control options without cluttering the user interface with additional controls enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, after the electronic device (e.g., 600) detects a change in the face (e.g., 673), the electronic device determines that no face is detected in the field of view of the one or more cameras (e.g., 602) for a predetermined amount of time (e.g., 10 seconds). In response to determining that no face is detected in the field of view of the one or more cameras for the predetermined amount of time, the electronic device stops changing the appearance of the avatar (e.g., 805) based on detected face changes (e.g., transitioning the avatar to a non-interactive (static) state in which the avatar does not change in response to detected face changes, even if the face returns to the field of view of the one or more cameras after tracking stops). After ceasing to change the appearance of the avatar, the electronic device detects an input (e.g., 8166) (e.g., an input directed to the user interface, such as a gesture on the user interface (e.g., a tap gesture on a "tap to resume tracking the face" affordance), a device lift, and so forth). When the user's face is not detected in the field of view, the electronic device does not update the appearance of the avatar based on detected changes. Because such changes would not be visible to the user, battery power and processing resources of the electronic device are preserved by not displaying them. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In response to detecting the input (e.g., 8166), the electronic device (e.g., 600) resumes changing the appearance of the avatar (e.g., 805) based on detected changes in the face (e.g., 673) (e.g., transitioning the avatar to an interactive state (e.g., 805 in fig. 8CA) in which the avatar changes in response to detecting changes in the face). In some embodiments, transitioning the avatar to a non-interactive state (e.g., 805 in fig. 8BZ) includes displaying an animation of the avatar transitioning from an appearance determined based on the detected face (e.g., 805 in fig. 8BY) to a predetermined appearance (e.g., 805 in fig. 8BZ). In some embodiments, transitioning the avatar to the interactive state includes displaying an animation of the avatar transitioning from the predetermined appearance to an appearance determined based on the detected face (e.g., 673). Detecting that the device is being lifted indicates that changes in the detected face should be reflected in the appearance of the avatar. The appearance of the avatar provides feedback to the user indicating the types of avatar characteristics that can be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in response to determining that no face (e.g., 673) is detected in the field of view of the one or more cameras (e.g., 602) for the predetermined amount of time, the electronic device (e.g., 600) displays an indication (e.g., 8164) (e.g., a message) that no face is detected in the field of view of the one or more cameras. In some embodiments, the indication is a message informing the user of an action that can be taken to resume face tracking (e.g., "show your face," "tap to resume," etc.). In some embodiments, the indication is an animation indicating that the avatar (e.g., 805) is no longer changed in response to detected changes in the user's face (e.g., an animation of the avatar transitioning to a static state). When no face is detected in the field of view, the displayed indication notifies the user that the face is not detected. This provides feedback so that the user can take action to resume face tracking and informs the user of the action that can be taken to resume face tracking (otherwise, the user might not know that the device has stopped face tracking, or how to resume tracking). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently. Stopping face tracking when a user is not detected also saves power and reduces wear on the device (including the face tracking sensor). The notification informs the user how to resume tracking that was stopped for the purpose of saving power and reducing wear on the device (including the face tracking sensor).
In some embodiments, an input is detected (e.g., via an accelerometer and/or gyroscope of the electronic device) that the device (e.g., 600) is being lifted (e.g., the user is picking up the device and, optionally, detecting the user's face (e.g., 673) in the field of view of one or more cameras (e.g., 602)). In some embodiments, the input is a gesture (e.g., 8166) (e.g., a tap or swipe gesture) for an avatar editing user interface (e.g., 801). In some embodiments, the gesture is an input at any location on the user interface, including, for example, selecting an option, navigating to a new portion of the user interface, selecting an affordance (e.g., a "start tracking facial movement" affordance).
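The pause-and-resume behavior described in the preceding paragraphs can be sketched as a small state machine that stops driving the avatar after a no-face timeout and resumes only on a qualifying input such as a tap or a device lift. The following Swift sketch is illustrative only; the type names (AvatarTrackingController, ResumeInput), the callback wiring, and the timeout constant are assumptions, not the implementation described in this disclosure.

```swift
import Foundation

/// Hypothetical sketch of the face-tracking state machine described above.
enum TrackingState { case interactive, paused }
enum ResumeInput { case tapOnResumeAffordance, deviceLift, otherUIGesture }

final class AvatarTrackingController {
    private(set) var state: TrackingState = .interactive
    private var lastFaceSeen = Date()
    let noFaceTimeout: TimeInterval = 10   // "predetermined amount of time" (assumed value)

    var onPause: (() -> Void)?    // e.g., animate the avatar to its static, predetermined pose
    var onResume: (() -> Void)?   // e.g., animate the avatar back to the face-driven pose

    /// Called for every camera frame with whether a face was detected.
    func processFrame(faceDetected: Bool, at now: Date = Date()) {
        if faceDetected {
            lastFaceSeen = now
        } else if state == .interactive,
                  now.timeIntervalSince(lastFaceSeen) >= noFaceTimeout {
            state = .paused
            onPause?()            // stop changing the avatar; show the "no face detected" indication
        }
        // Note: while paused, the face returning to the field of view alone does not resume tracking.
    }

    /// An explicit user input (tap, device lift, etc.) resumes face-driven updates.
    func handle(_ input: ResumeInput) {
        guard state == .paused else { return }
        state = .interactive
        onResume?()
    }
}
```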
According to some embodiments, the electronic device changes the appearance of the avatar (e.g., as shown in fig. 8 BG-8 BI) based on an input (e.g., a gesture on the avatar to rotate or adjust the magnification of the avatar, or a detected change in the face in the field of view of the camera). Changing the appearance of the avatar based on the input includes moving one or more of the plurality of avatar characteristics (e.g., 8125) in accordance with one or more physical models (e.g., inertial model, gravity model, force transfer model, friction model). In some embodiments, the physical model specifies a magnitude and direction of movement of the avatar feature based on a magnitude and direction of an input (e.g., a gesture on the avatar to rotate or adjust the magnification of the avatar, or movement of the face or a portion of the face) and predefined properties of the virtual avatar feature such as one or more of a simulated mass, a simulated elasticity, a simulated friction coefficient, or other simulated physical properties.
Moving one or more avatar features based on the physical model of the virtual avatar enables a user to create a realistic and interactive virtual avatar that can convey a wider range of non-verbal information. This enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user communicate predetermined messages using more realistic virtual avatar movements), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
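One way to realize a physics model with simulated mass, elasticity, and friction is a damped spring that lets a secondary avatar feature lag behind the driving input. The sketch below is a minimal illustration under assumed names (FeaturePhysics) and parameter values; the disclosure does not specify these equations.

```swift
import Foundation

/// Hypothetical damped-spring model for a secondary avatar feature (e.g., dangling hair or an earring).
struct FeaturePhysics {
    var position: Double = 0          // displacement of the feature from its rest pose
    var velocity: Double = 0
    let mass: Double                  // simulated mass
    let stiffness: Double             // simulated elasticity
    let friction: Double              // simulated friction / damping coefficient

    /// Advance the simulation toward `target`, the displacement implied by the
    /// magnitude and direction of the user's face movement or gesture.
    mutating func step(toward target: Double, dt: Double) {
        let springForce   = stiffness * (target - position)
        let frictionForce = -friction * velocity
        let acceleration  = (springForce + frictionForce) / mass
        velocity += acceleration * dt
        position += velocity * dt
    }
}

// Example: heavier, less elastic features trail the driving movement more noticeably.
var earring = FeaturePhysics(mass: 2.0, stiffness: 40, friction: 3)
for _ in 0..<60 { earring.step(toward: 1.0, dt: 1.0 / 60.0) }
```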
According to some embodiments, the electronic device detects a gesture (e.g., a pinch/spread gesture, a swipe gesture) on the avatar (e.g., 805). In response to detecting the gesture on the avatar: in accordance with a determination that the gesture corresponds to a first type of gesture (e.g., a pinch/spread gesture), the electronic device adjusts a zoom level of the avatar based on the gesture (e.g., zooms in on the displayed avatar if the gesture is a spread gesture, and zooms out from the displayed avatar if the gesture is a pinch gesture); and in accordance with a determination that the gesture corresponds to a second type of gesture (e.g., a swipe gesture), the electronic device adjusts an orientation of the avatar based on the gesture (e.g., rotates the avatar in a direction corresponding to the swipe gesture) (e.g., as shown in fig. 8 BG-8 BK). In response to detecting selection of one of the first set of feature options, the electronic device updates the avatar based on the selected feature option. In some embodiments, the zoom and rotate features are available when adding accessories to the avatar. For example, the first and/or second option selection areas include feature options corresponding to cosmetic enhancements (e.g., scars, birthmarks, freckles, tattoos, and coloring schemes (e.g., corresponding to sports teams, make-up, etc.)) when the respective avatar feature corresponds to an avatar accessory feature. The zoom and rotate operations display the avatar at different zoom levels and angles so that the user can accurately place the selected feature options (e.g., cosmetic enhancements) on the avatar.
Adjusting the zoom level of the avatar based on the gesture provides the user with an option to control modification of the avatar display without requiring a displayed user interface control (e.g., touch control) element. Providing additional control options without cluttering the user interface with additional controls enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
Adjusting the orientation of the avatar based on the gesture provides the user with an option to control modification of the avatar display without requiring a displayed user interface control (e.g., touch control) element. Providing additional control options without cluttering the user interface with additional controls enhances operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends battery life of the device by enabling the user to use the device more quickly and efficiently.
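A minimal sketch of the two gesture branches, assuming a UIKit view: a pinch adjusts the zoom level and a pan/swipe adjusts the orientation. The AvatarView class, clamping range, and sensitivity constant are illustrative assumptions, not the disclosed implementation.

```swift
import UIKit

/// Hypothetical avatar view that zooms on pinch and rotates on swipe/pan.
final class AvatarView: UIView {
    private var zoom: CGFloat = 1.0
    private var yaw: CGFloat = 0.0

    override init(frame: CGRect) {
        super.init(frame: frame)
        addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(pinched)))
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(panned)))
    }
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func pinched(_ g: UIPinchGestureRecognizer) {
        // First type of gesture: adjust the zoom level (spread -> zoom in, pinch -> zoom out).
        zoom = min(max(zoom * g.scale, 0.5), 3.0)   // assumed clamping range
        g.scale = 1.0
        applyTransform()
    }

    @objc private func panned(_ g: UIPanGestureRecognizer) {
        // Second type of gesture: adjust the orientation in the direction of the swipe.
        let dx = g.translation(in: self).x
        yaw += dx * 0.005                            // assumed sensitivity
        g.setTranslation(.zero, in: self)
        applyTransform()
    }

    private func applyTransform() {
        // Placeholder: a real implementation would update the 3D avatar model instead.
        transform = CGAffineTransform(scaleX: zoom, y: zoom).rotated(by: yaw)
    }
}
```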
According to some embodiments, the respective feature options (e.g., feature options of the first or second set of feature options) include representations of respective (e.g., currently selected) avatar features. The representation of the respective avatar characteristic is displayed with a magnified view (e.g., magnified) as compared to the respective avatar characteristic of the displayed avatar. In some embodiments, the feature options correspond to the avatar nose and include an enlarged view of the avatar nose and surrounding facial regions when compared to the avatar nose and surrounding facial regions of the displayed avatar. In some embodiments, the second set of feature options includes one or more magnified views of the avatar feature.
According to some embodiments, the second respective feature option (e.g., 8116) includes a representation of the respective avatar feature and excludes (e.g., does not display) at least a portion of a different avatar feature (e.g., an avatar feature that, when displayed, obscures at least a portion of the respective avatar feature being modified using the respective feature option) (e.g., as shown in fig. 8 BB). In some embodiments, the feature options correspond to avatar ears, and the representations of the avatar ears displayed in the feature options include the avatar ears but omit other avatar features, such as avatar hair, that, when displayed, would obscure at least a portion of the avatar ears displayed in the avatar options.
According to some embodiments, displaying the avatar editing user interface further includes displaying an avatar feature sub-region (e.g., a scrollable text list of avatar feature options) (e.g., 807) that includes a plurality of affordances (e.g., 809) corresponding to avatar features (e.g., face, hair, eyes, accessories, etc.). The plurality of affordances includes affordances (e.g., 809a, 809b, 809c, 809d) corresponding to a first selection of a respective (e.g., currently selected) avatar feature (e.g., a "hair" affordance 809b is highlighted to indicate that a hair avatar feature has currently been selected).
According to some embodiments, in response to detecting selection of one of the feature options (e.g., 814b) in the first set of feature options (e.g., 814), the electronic device displays an animation of the visual effect associated with the second of the plurality of affordances corresponding to the avatar feature (e.g., highlighting the hair affordance 809b in fig. 8F). In some embodiments, after the first selection of the feature option, an animation is displayed on the affordance corresponding to a different avatar feature than the currently selected avatar feature, prompting the user to select the affordance to display an avatar editing user interface for the different avatar feature.
According to some embodiments, in response to detecting selection of a second affordance (e.g., an "accessory" affordance 809D), wherein the second affordance corresponds to a second avatar feature (e.g., an avatar accessory), the electronic device: updates the first option selection area to display an updated first set of feature options (e.g., displayed earring options) corresponding to a set of candidate values (e.g., different earring options, such as earrings or no earrings) for a first characteristic (e.g., an earring characteristic) of the second avatar feature; and updates the second option selection area to display a second set of feature options (e.g., displayed hat options) corresponding to a set of candidate values (e.g., no hat, cowboy hat, headband, etc.) for a second characteristic (e.g., a hat characteristic) of the second avatar feature (e.g., as shown in fig. 8 BA-8 BB).
Updating the first and second option selection areas in response to detecting selection of the second affordance corresponding to the second avatar characteristic provides feedback to the user, confirms selection of the second avatar characteristic, and provides visual feedback to the user indicating avatar characteristic options available for the second avatar characteristic. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
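The relationship between a selected avatar feature and the characteristic option rows it exposes can be modeled as plain data. A minimal sketch follows; the type names (AvatarFeature, CharacteristicRow) and the example option lists are assumptions for illustration.

```swift
import Foundation

/// Hypothetical data model: each avatar feature exposes ordered characteristic rows,
/// and each row lists its candidate values (the "feature options").
struct CharacteristicRow {
    let name: String
    let options: [String]
}

enum AvatarFeature: String, CaseIterable {
    case hair, face, eyes, accessories

    var rows: [CharacteristicRow] {
        switch self {
        case .accessories:
            return [
                CharacteristicRow(name: "Earrings", options: ["No earrings", "Earrings"]),
                CharacteristicRow(name: "Hat", options: ["No hat", "Cowboy hat", "Headband"]),
            ]
        case .hair:
            return [
                CharacteristicRow(name: "Hair color", options: ["Black", "Brown", "Blonde"]),
                CharacteristicRow(name: "Hairstyle", options: ["Short", "Long", "Curly"]),
            ]
        default:
            return []   // other features omitted in this sketch
        }
    }
}

/// Selecting a different feature affordance swaps in that feature's option rows.
func optionAreas(for selected: AvatarFeature) -> (first: CharacteristicRow?, second: CharacteristicRow?) {
    let rows = selected.rows
    return (rows.first, rows.count > 1 ? rows[1] : nil)
}
```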
According to some embodiments, an avatar feature sub-region (e.g., 807) is displayed in a first region (e.g., 803) of an avatar editing user interface. The first option selection area (e.g., 808) and the second option selection area (e.g., 810) are displayed in a second area (e.g., 804) of the avatar editing user interface, the second area being displayed below the first area.
According to some embodiments, the first set of feature options includes a plurality of color affordances corresponding to a plurality of colors, including a first selected color affordance corresponding to a color of a respective (e.g., currently selected) avatar feature (e.g., as shown in fig. 8W).
According to some embodiments, in response to detecting a selection (e.g., 832) of one of the plurality of color affordances, the electronic device displays a color selector user interface (e.g., 888, 892, 856, 822) (e.g., displays a user interface that can be selected to modify a color of the selected color affordance) having a selected color corresponding to the selected color affordance and a plurality of other color options not included in the plurality of color affordances. In some embodiments, the color selector UI is displayed with a selected color corresponding to the selected color affordance. The user may then adjust the color selector UI to refine the selected color or select an entirely different color. In some implementations, displaying the color selector user interface includes replacing at least one of the first option selection area or the second option selection area with the color selector user interface. In some embodiments, the color selector UI replaces the first option selection area and the second option selection area with an animation that shows the color selector UI sliding onto the screen (and over the first option selection area and the second option selection area) from a particular direction (e.g., the bottom of the screen, the left side of the screen, the right side of the screen, etc.). In some embodiments, the color selector UI is a pop-up screen displayed over the first option selection area and the second option selection area.
In some embodiments, in accordance with a determination that the plurality of color affordances (e.g., 812) correspond to colors of the avatar skin color feature, the plurality of color affordances includes an expanded set of color affordances (e.g., as shown in fig. 8A) that includes colors corresponding to the avatar skin color feature (e.g., an expanded palette of colors for the selected avatar skin tone). In some implementations, when the color corresponds to the avatar skin tone feature, the expanded palette excludes options for expanding or shrinking the palette size (e.g., similar to 832). In some embodiments, the plurality of color affordances are non-scrollable in a horizontal direction when displayed in the expanded state.
According to some embodiments, the plurality of color affordances represent colors (e.g., avatar characteristics other than avatar skin color characteristics) corresponding to a first type of avatar characteristic (e.g., 828). In some embodiments, an electronic device (e.g., 600) displays a first portion (e.g., 882) of a plurality of color affordances. In some embodiments, the electronic device detects a gesture (e.g., a swipe gesture) on the plurality of color affordances (e.g., a swipe gesture on a color affordance). In response to detecting the gesture, the electronic device ceases to display the first portion of the color affordance and displays a second portion of the color affordance (e.g., scrolls the plurality of color affordances to display additional color affordances). In some embodiments, the second portion of the color affordance includes an affordance (e.g., 886) corresponding to an expanded set (e.g., 888) of color affordances different from the first portion of the color affordance and the second portion of the color affordance. The animation of the avatar transitioning from the interactive state to the non-interactive state is displayed to provide visual feedback of the non-interactive appearance of the avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
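The scrolling behavior of the color affordances, including a trailing affordance that reveals an expanded set of colors, can be sketched as a small paged model. The ColorStrip type and page-size handling below are assumptions, not the disclosed implementation.

```swift
import Foundation

/// Hypothetical model of the horizontally scrollable row of color affordances.
struct ColorStrip {
    let colors: [String]          // stand-ins for the color affordances
    let expandedColors: [String]  // the expanded set revealed by the trailing affordance
    let pageSize: Int
    private(set) var page = 0
    private(set) var showingExpandedSet = false

    var isLastPage: Bool { (page + 1) * pageSize >= colors.count }

    /// The affordances currently on screen; the last page ends with an "expand" affordance.
    var visible: [String] {
        if showingExpandedSet { return expandedColors }
        let start = page * pageSize
        let slice = Array(colors[start ..< min(start + pageSize, colors.count)])
        return isLastPage ? slice + ["expand"] : slice
    }

    /// A swipe ceases to display the first portion and shows the next portion.
    mutating func swipeToNextPage() {
        guard !isLastPage else { return }
        page += 1
    }

    /// Tapping the trailing affordance replaces the strip with the expanded set of color affordances.
    mutating func tapExpandAffordance() { showingExpandedSet = true }
}
```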
According to some implementations, at least one of the first or second characteristics corresponds to a feature shape (e.g., a facial shape, a nasal shape, an ear shape, etc.) of a respective avatar feature (e.g., an avatar face).
According to some embodiments, the corresponding avatar feature is an avatar face (e.g., fig. 8B). The first characteristic and the second characteristic are selected from the group comprising: head shape, skin color, nose size, nose shape, lip color, ear size, facial hairstyle, and age.
According to some embodiments, the corresponding avatar feature is avatar hair (e.g., fig. 8O). The first characteristic and the second characteristic are selected from the group comprising: hair color, hairstyle, hair length, hair type (e.g., curly, straight, wavy, etc.), hair part (e.g., the location of the part in the avatar hair), hair worn up, hair worn down (e.g., the vertical position of the hair on the avatar head), and hairline (e.g., receded, widow's peak, mature, low, etc.).
According to some embodiments, the corresponding avatar characteristic is an avatar eye. The first characteristic and the second characteristic are selected from the group comprising: eye shape, eye color, eyelash and eyebrow shape.
According to some embodiments, the corresponding avatar feature is an accessory (e.g., fig. 8 BB). The first characteristic and the second characteristic are selected from the group comprising: hats, glasses, earrings, and cosmetic enhancements (e.g., coloring schemes (e.g., corresponding to sports teams, make-up, etc.), tattoos, freckles, birthmarks, scars).
According to some embodiments, in response to detecting a vertical gesture on the avatar-editing user interface (e.g., a vertical swipe gesture at a location on the touch screen display corresponding to the avatar-editing user interface), the electronic device scrolls the avatar-editing user interface in a vertical direction corresponding to the vertical gesture. Scrolling the avatar editing user interface includes scrolling the first option selection area and the second option selection area in the direction of the vertical gesture while maintaining the vertical position of the area including the displayed avatar (e.g., as shown in fig. 8 AG-8 AH).
According to some embodiments, in response to detecting a gesture (e.g., 830) on an avatar feature sub-region (e.g., 807) of the avatar editing user interface (e.g., a horizontal swipe gesture at a location on the touchscreen display corresponding to the avatar, or a touch gesture on an affordance corresponding to one of the avatar features), the electronic device: displays the avatar feature sub-region changing from a first appearance in which the first avatar feature (e.g., 809a) is selected to a second appearance in which the second avatar feature (e.g., 809b) is selected; ceases to display the first option selection area and the second option selection area (e.g., 808, 810); displays a third option selection area (e.g., 838) having a plurality of feature options (e.g., 832) arranged in an order, wherein a first feature option is ranked before a second feature option in the order, and the second feature option is ranked before a third feature option in the order; and displays a fourth option selection area (e.g., 840) having a plurality of feature options (e.g., 834) arranged in an order, wherein a first feature option is ranked before a second feature option in the order, and the second feature option is ranked before a third feature option in the order. Displaying the third option selection area includes displaying a first animation that sequentially displays the plurality of feature options of the third option selection area. Displaying the fourth option selection area includes starting a second animation after at least a portion of the first animation has been displayed, the second animation sequentially displaying the plurality of feature options of the fourth option selection area.
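The staggered reveal (the second row's animation starting only after at least part of the first row's animation has been displayed) can be expressed with delayed per-item animations. A minimal UIKit-flavored sketch; the duration and delay constants are assumptions.

```swift
import UIKit

/// Hypothetical helper that animates feature-option thumbnails into view one after another,
/// starting the fourth row only after part of the third row's animation has been shown.
func revealOptionRows(thirdRow: [UIView], fourthRow: [UIView]) {
    let perItemDelay: TimeInterval = 0.05      // assumed stagger between feature options
    let rowOffset: TimeInterval = Double(thirdRow.count) * perItemDelay * 0.5

    func animate(_ views: [UIView], startingAt baseDelay: TimeInterval) {
        for (index, view) in views.enumerated() {
            view.alpha = 0
            UIView.animate(withDuration: 0.25,
                           delay: baseDelay + Double(index) * perItemDelay,
                           options: [.curveEaseOut]) {
                view.alpha = 1
            }
        }
    }

    animate(thirdRow, startingAt: 0)           // first animation: third option selection area
    animate(fourthRow, startingAt: rowOffset)  // second animation begins after part of the first
}
```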
According to some embodiments, the avatar has a first size (e.g., an enlarged size) or a second size (e.g., a reduced size). The electronic device detects a gesture (e.g., a tap gesture or a vertical swipe gesture on a feature option) on the avatar editing user interface (e.g., at a location corresponding to the first option selection area or the second option selection area). In accordance with a determination that the gesture corresponds to a selection (e.g., 869) of a feature option (e.g., 836f) in the first set or the second set of feature options, and that the avatar has the second size (e.g., fig. 8AN), the electronic device displays the avatar transitioning from the second size to the first size (e.g., fig. 8AO) (e.g., if the avatar is at a reduced size and the feature option is selected, the avatar increases from the reduced size to an enlarged size, as shown in fig. 8 AN-8 AO). In accordance with a determination that the gesture is a scroll gesture (e.g., a vertical swipe gesture on the first option selection area or the second option selection area) and the avatar has the first size, the electronic device displays the avatar transitioning to the second size if the scroll gesture corresponds to a first scroll direction (e.g., a downward scroll direction). In some embodiments, if the avatar is at an enlarged or intermediate size, the avatar condenses in response to detecting a scroll gesture in the downward scroll direction. In some embodiments, if the avatar is at a reduced size, the device does not further reduce the size of the avatar in response to the scroll gesture in the downward scroll direction. In some embodiments, the device further scrolls the first option selection area and the second option selection area in response to the scroll gesture. In accordance with a determination that the gesture is a scroll gesture and the avatar has the second size, the electronic device displays the avatar transitioning to the first size if the scroll gesture corresponds to a second scroll direction (e.g., an upward scroll direction) that is opposite the first direction. In some embodiments, if the avatar is at a reduced or intermediate size, the avatar enlarges in response to detecting the scroll gesture in the upward scroll direction. In some embodiments, if the avatar is at an enlarged size, the device does not further increase the size of the avatar in response to the scroll gesture in the upward scroll direction. In some embodiments, the device further scrolls the first option selection area and the second option selection area in response to the scroll gesture.
According to some embodiments, in accordance with a determination that the gesture is a scroll gesture and the avatar has the first size, the electronic device forgoes displaying the avatar transitioning to the second size if the scroll gesture corresponds to the second scroll direction. In some embodiments, the avatar (e.g., 805) is condensed only when the scroll gesture is in the downward scroll direction.
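The size transitions described above can be captured as a small reducer over editor gestures: selecting a feature option enlarges a condensed avatar, a downward scroll condenses it, an upward scroll enlarges it, and an avatar already at the target size simply stays there. The enum names and two-state model below are assumptions.

```swift
import Foundation

enum AvatarSize { case enlarged, condensed }
enum ScrollDirection { case up, down }
enum EditorGesture {
    case selectFeatureOption
    case scroll(direction: ScrollDirection)
}

/// Hypothetical reducer for the avatar-size behavior described above.
/// An avatar that is already at the resulting size simply stays there
/// (the "forgoes transitioning" cases).
func nextSize(after gesture: EditorGesture) -> AvatarSize {
    switch gesture {
    case .selectFeatureOption:
        // Selecting a feature option while the avatar is condensed enlarges it again.
        return .enlarged
    case .scroll(direction: .down):
        // A downward scroll condenses an enlarged avatar; a condensed one shrinks no further.
        return .condensed
    case .scroll(direction: .up):
        // An upward scroll enlarges a condensed avatar; an enlarged one grows no further.
        return .enlarged
    }
}
```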
According to some embodiments, prior to detecting selection (e.g., 820) of one of the feature options (e.g., 812), the avatar (e.g., 805) is displayed with a skin color that changes over time through a plurality of different color values (e.g., the avatar is displayed oscillating back and forth between two or more colors over time). In some embodiments, prior to detecting selection of one of the feature options, the avatar is displayed in a non-interactive state (e.g., 805 in fig. 8A) (e.g., where the avatar has a predetermined appearance that does not change in response to a detected change in the user's face (e.g., 673)). In some embodiments, in response to detecting an input (e.g., 820) on the avatar editing user interface (e.g., 801) (e.g., selecting an avatar skin color option (e.g., 812a) from a plurality of user-selectable skin color options (e.g., 812)), the electronic device (e.g., 600) displays the avatar without the oscillating color effect (e.g., displays the avatar with a static color scheme/monochrome) and displays the avatar transitioning from the non-interactive state to an interactive state (e.g., an animated state in which the avatar changes in response to detected changes in the user's face (e.g., detected via one or more cameras of the electronic device)).
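The pre-selection effect, in which the avatar's skin oscillates between two or more color values until the user commits to an option, can be sketched as a time-based interpolation. The placeholder colors, period, and type name below are assumptions.

```swift
import Foundation

/// Hypothetical oscillation between two placeholder skin-color values while no option is selected.
struct SkinColorOscillator {
    let colorA: (r: Double, g: Double, b: Double) = (0.95, 0.80, 0.69)  // assumed placeholder tones
    let colorB: (r: Double, g: Double, b: Double) = (0.72, 0.54, 0.42)
    let period: TimeInterval = 2.0                                      // assumed oscillation period

    /// Returns the avatar skin color for a given time. Once the user selects a skin-tone
    /// option, the selected (static) color is returned and the oscillation stops.
    func color(at time: TimeInterval,
               selected: (r: Double, g: Double, b: Double)?) -> (r: Double, g: Double, b: Double) {
        if let chosen = selected { return chosen }
        let t = (sin(2 * Double.pi * time / period) + 1) / 2   // 0...1, back and forth over time
        return (colorA.r + (colorB.r - colorA.r) * t,
                colorA.g + (colorB.g - colorA.g) * t,
                colorA.b + (colorB.b - colorA.b) * t)
    }
}
```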
It is noted that the details of the process described above with respect to method 900 (e.g., fig. 9) also apply in a similar manner to the methods described below and above. For example, method 700 optionally includes one or more features of the various methods described above with reference to method 900. The method 700 of editing an avatar may be incorporated into a method for navigating an avatar user interface. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented in accordance with the method 900 described above with reference to FIG. 9. As further examples, methods 1000, 1100, 1200, and 1400 optionally include one or more features of the various methods described above with reference to method 900. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented according to the methods described below with reference to fig. 10-12. As another example, in some embodiments, the navigation user interface invokes a process for modifying the virtual avatar, which may be implemented according to the method described below with reference to fig. 14A-14B. For the sake of brevity, these details are not repeated.
Fig. 10A-10B are flow diagrams illustrating methods for displaying visual effects in an avatar-editing application, according to some embodiments. The method 1000 is performed at an apparatus (e.g., 100, 300, 500, 600) having a display device. Some operations in method 1000 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1000 provides an intuitive way for displaying visual effects in an avatar editing application. The method reduces the cognitive burden on the user of applying visual effects to images viewed in an avatar editing application, thereby creating a more efficient human-machine interface. For battery-driven electronic devices, enabling a user to display visual effects in an image faster and more efficiently conserves power and increases the time interval between battery charges.
In some implementations, an electronic device (e.g., 600) displays (1002), via a display device (e.g., 601): a user interface object (e.g., virtual avatar 805) having a respective feature (e.g., 851, 8140) with a first set of one or more colors (e.g., a default set of one or more colors, including highlights, midtones, and/or shadows in some embodiments), and a plurality of color options (e.g., 832, 894) (e.g., a plurality of affordances, each corresponding to a color) for the respective feature (e.g., a first avatar feature; e.g., avatar skin color, avatar eye color, avatar hair color, etc.). In some embodiments, the respective feature is avatar skin color. In some embodiments, the respective feature is avatar eye color (e.g., 829). In some embodiments, the respective feature is avatar hair color. Displaying the avatar with a respective feature whose color the user can change using the color options provides visual feedback to the user confirming that the respective feature of the avatar is in a state where the color can be changed. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, an electronic device (e.g., 600) detects (1004) a selection (e.g., 895, 852) of a color option (e.g., 894-1, 832a) of a plurality of color options (e.g., 894, 832) corresponding to a second color. In response to detecting the selection (1006): the electronic device changes (1008) the color of the respective feature (e.g., frame 8140-1, avatar hair 851) to the color option (e.g., changes the appearance of the avatar feature option displaying the respective avatar feature; e.g., changes the appearance of the virtual avatar (e.g., 805) having the respective avatar feature), and displays (1010) a first color adjustment control (e.g., 857, 897) (e.g., a slider user interface) corresponding to the color option of the second set of one or more colors (e.g., a set of color changes resulting from the change of slider 857, 897). Displaying the avatar with the first color adjustment control provides visual feedback to the user confirming that the corresponding feature of the avatar has changed color and been selected for further color modification. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently. The haptic feedback confirms that the change has been received. Providing tactile feedback informs the user that input has been received and that changes have been made.
In some embodiments, a first color adjustment control corresponding to a second set of one or more color options includes a slider (e.g., 897) having a track (e.g., 897-2) and a scroll thumb (e.g., 897-1) that moves in the track. In some embodiments, the input (e.g., 860) causes movement of the scroll thumb in the track. In some embodiments, the device generates haptic feedback in response to detecting the input and in accordance with the scroll thumb moving to a predetermined location (e.g., 860') (e.g., the midpoint of the track; the location corresponding to the default value of the second color). In some embodiments, movement of the scroll thumb to a position other than the predetermined position does not generate haptic feedback including a haptic output.
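The haptic behavior (a tactile output only when the thumb reaches the predetermined default position) could be wired as follows. UISlider and UIImpactFeedbackGenerator are standard UIKit classes; the default value, tolerance, and class name are assumptions for illustration.

```swift
import UIKit

/// Hypothetical color-adjustment slider that emits haptic feedback only when the thumb
/// reaches the predetermined position (here, the track midpoint / default value).
final class ColorAdjustmentSlider: UISlider {
    private let haptics = UIImpactFeedbackGenerator(style: .light)
    private var wasAtDefault = false
    private let defaultPosition: Float = 0.5   // assumed: midpoint of the track
    private let tolerance: Float = 0.01

    override init(frame: CGRect) {
        super.init(frame: frame)
        minimumValue = 0
        maximumValue = 1
        value = defaultPosition
        addTarget(self, action: #selector(valueChanged), for: .valueChanged)
    }
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func valueChanged() {
        let atDefault = abs(value - defaultPosition) < tolerance
        if atDefault && !wasAtDefault {
            haptics.impactOccurred()        // tactile output only at the predetermined location
        }
        wasAtDefault = atDefault
        // Movement to any other position generates no haptic output.
    }
}
```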
While the respective feature (e.g., 851) of the user interface object (e.g., 805) has the second set of one or more colors (e.g., 832a), the electronic device detects (1012) an input (e.g., 860) (e.g., a drag gesture or a tap gesture) corresponding to the first color adjustment control. In response to detecting the input corresponding to the first color adjustment control, the electronic device modifies (1014) the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors (e.g., a modified color of the respective avatar feature) based on the second color. In some embodiments, the slider user interface modifies properties (e.g., hue, saturation, value/brightness) of the underlying selected color option. In some embodiments, the display color of the selected color option is also modified in response to an input on the slider user interface. In some embodiments, the plurality of color options includes a color palette as described with respect to method 900 and fig. 8 AX-8 AY. The modified appearance of the avatar provides feedback to the user indicating the types of characteristics of the avatar that may be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in response to detecting the input (e.g., 860) corresponding to the first color adjustment control (e.g., 857), the electronic device (e.g., 600) modifies the color (e.g., 832a) of the color option from the second color to a modified version of the second set of one or more colors. In some embodiments, modifying the color of the respective feature (e.g., 851, 8140) from the second set of one or more colors to a modified version of the second set of one or more colors includes modifying a plurality of values (e.g., highlight, halftone, shadow) of the second set of one or more colors. In some embodiments, modifying the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors is further based on a magnitude and direction of the input (e.g., 860) corresponding to the first color adjustment control (e.g., the farther the input moves to the right, the greater the increase in the red value of the color; the farther the input moves to the left, the greater the increase in the green value of the color).
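Modifying a plurality of color values (highlight, midtone, shadow) as a function of the input's magnitude and direction could look like the sketch below, where rightward movement increases red and leftward movement increases green as described above; the scaling factor and type name are assumptions.

```swift
import Foundation

/// Hypothetical color set for an avatar feature (highlight, midtone, shadow variants).
struct FeatureColorSet {
    var highlight: (r: Double, g: Double, b: Double)
    var midtone:   (r: Double, g: Double, b: Double)
    var shadow:    (r: Double, g: Double, b: Double)

    /// Adjust all values based on the magnitude and direction of the slider input:
    /// movement to the right increases the red value, movement to the left increases the green value.
    mutating func apply(sliderOffset: Double) {
        let amount = abs(sliderOffset) * 0.3          // assumed scaling of the adjustment
        func shift(_ c: (r: Double, g: Double, b: Double)) -> (r: Double, g: Double, b: Double) {
            if sliderOffset >= 0 {
                return (min(c.r + amount, 1), c.g, c.b)
            } else {
                return (c.r, min(c.g + amount, 1), c.b)
            }
        }
        highlight = shift(highlight)
        midtone   = shift(midtone)
        shadow    = shift(shadow)
    }
}
```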
According to some embodiments, an electronic device (e.g., 600) displays (1016) a second plurality of color options (e.g., 896) for a second feature (e.g., 8140-2) (e.g., a portion of the respective (first) avatar feature, or a second avatar feature different from the respective avatar feature). Displaying a second plurality of color options for a second feature provides visual feedback to the user, prompting the user to change the color of the second feature using the second plurality of color options. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, an electronic device (e.g., 600) detects (1018) a selection (e.g., 898) of a second color option (e.g., 896-1) to a second plurality of color options. In some embodiments, in response to (1020) detecting selection of the second color option, the electronic device changes (1022) the color of the second feature to the second color option and displays (1024) a second color adjustment control (e.g., 899) corresponding to the second color option of the third set of one or more colors. Displaying the second color adjustment control provides visual feedback to the user that the color of the second feature can be changed with a different set of colors. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the respective feature and the second feature respectively correspond to portions of an avatar eyeglasses feature (e.g., 8140), the plurality of color options (e.g., 884) corresponds to colors of a frame (e.g., 8140-1) of the avatar eyeglasses, and the second plurality of color options (e.g., 896) corresponds to colors of a lens (e.g., 8140-2) of the avatar eyeglasses. In some embodiments, the electronic device (e.g., 600) detects an input corresponding to the second color adjustment control. In response to detecting the input corresponding to the second color adjustment control, the electronic device modifies the opacity of the avatar eyeglass lens (e.g., modifies the opacity of the lens within a range from a maximum value at which the lens is fully reflective to a minimum value at which the lens is mostly transparent with no reflection). The appearance of the avatar glasses provides feedback to the user indicating the type of characteristics of the avatar that can be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
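For the eyeglasses example, the second color adjustment control maps onto lens opacity rather than hue. A minimal sketch under assumed names and an assumed opacity range:

```swift
import Foundation

/// Hypothetical model of the avatar eyeglasses: one control drives the frame color,
/// the other drives lens opacity between a reflective maximum and a mostly transparent minimum.
struct AvatarGlasses {
    var frameColor: (r: Double, g: Double, b: Double) = (0.1, 0.1, 0.1)
    var lensOpacity: Double = 1.0        // 1.0 = fully reflective, 0.05 = mostly transparent (assumed range)

    mutating func applyLensSlider(value: Double) {
        // Map the normalized slider value (0...1) onto the assumed opacity range.
        let minOpacity = 0.05
        let maxOpacity = 1.0
        lensOpacity = minOpacity + (maxOpacity - minOpacity) * min(max(value, 0), 1)
    }
}
```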
In some embodiments, in response to detecting selection of the second color option (e.g., 832b), the device (e.g., 600) stops displaying the first color adjustment control (e.g., 857) (e.g., hides the first color slider) corresponding to the second set of one or more colors (e.g., 832a). According to some embodiments, after ceasing to display (e.g., in response to detecting selection of the second color option) the first color adjustment control (e.g., 857) corresponding to the color option (e.g., 832a) of the second set of one or more colors, the electronic device (e.g., 600) detects a subsequent selection (e.g., 871) of the color option (e.g., 832a) of the plurality of color options corresponding to the second color. In response to detecting the subsequent selection, the electronic device resumes displaying the first color adjustment control of the color option (see, e.g., fig. 8 AT). In some embodiments, the first color adjustment control corresponds to the modified version of the second set of one or more colors (e.g., the modification of the color slider, including the change to the slider and the modified version of the second set of one or more colors, persists until changed by a subsequent input on the color slider). In some embodiments, the setting of the color slider persists as the device navigates away from the displayed slider (e.g., by selecting a different avatar feature, selecting a different color affordance, scrolling through avatar options, etc.). In some embodiments, the modified settings (e.g., the position of the selector affordance and the modified color) remain unchanged when the device navigates back to the modified slider (e.g., as shown in fig. 8 AT). Displaying the first color adjustment control again after ceasing to display it provides visual feedback to the user that the user interface has returned to a mode in which colors may be changed. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, after ceasing to display the first color adjustment control (e.g., 857) corresponding to the second set of one or more colors, the electronic device continues to display the color option (e.g., 832a), of the plurality of color options, with the modified version of the second set of one or more colors (e.g., as shown in fig. 8 AC).
In some embodiments, modifying the color of the respective feature from the second set of one or more colors to a modified version of the second set of one or more colors based on the second color comprises one or more of the following steps. In accordance with a determination that the input (e.g., 860) corresponding to the first color adjustment control (e.g., 822) includes movement in the second direction, the device (e.g., 600) increases the red value of the second set of one or more colors. In accordance with a determination that the input corresponding to the first color adjustment control includes movement in a third direction, green values of the second set of one or more colors are increased.
In some embodiments, the electronic device (e.g., 600) detects an input (e.g., a drag gesture or a tap gesture) corresponding to the second color adjustment control (e.g., 899) when the respective feature of the user interface object has a third set of one or more colors. In response to detecting the input corresponding to the second color adjustment control, the electronic device modifies the color of the respective feature from the third set of one or more colors to a modified version of the third set of one or more colors (e.g., a modified color of the respective avatar feature) based on the second color. In some embodiments, this includes one or more of the following steps. In accordance with a determination that the second input corresponding to the second color adjustment control includes movement in the second direction, increasing a red value of the third set of one or more colors. In accordance with a determination that the second input corresponding to the second color adjustment control includes movement in a third direction, green values of a third set of one or more colors are increased. The modification of the set of colors is associated with a movement of the user input. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in response to determining that the input corresponding to the first color adjustment control (e.g., 897) includes a first direction, the electronic device (e.g., 600) modifies the second set of one or more colors in the first manner (e.g., adjusts a color gradient of the second set of one or more colors along the first direction (e.g., from a cooler hue to a warmer hue) based on movement of the input on the first color slider in the first direction). In some embodiments, in response to determining that the second input corresponding to the second color adjustment control (e.g., 899) includes the first direction, the third set of one or more colors is modified in the first manner (e.g., the gradient of the third set of one or more colors is adjusted in the same manner as the adjustment of the second set of one or more colors (e.g., also from a cooler hue to a warmer hue) based on movement of the input on the second color slider in the first direction (e.g., in the same first direction as the movement of the first color slider). The modification of the set of colors is associated with a movement of the user input. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
It is noted that the details of the process described above with respect to method 1000 (e.g., fig. 10) also apply in a similar manner to the methods described below and above. For example, method 700 optionally includes one or more features of the various methods described above with reference to method 1000. The method 700 of editing an avatar may be incorporated into a method for navigating an avatar user interface. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented in accordance with method 1000 described above with reference to FIG. 10. As further examples, methods 900, 1100, 1200, and 1400 optionally include one or more characteristics of the various methods described above with reference to method 1000. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented according to the methods described below with reference to fig. 11-12. As another example, in some embodiments, the navigation user interface invokes a process for modifying the virtual avatar, which may be implemented according to the method described below with reference to fig. 14A-14B. For the sake of brevity, these details are not repeated.
Fig. 11A and 11B are flow diagrams illustrating methods for displaying an avatar editing user interface, according to some embodiments. Method 1100 is performed at an apparatus (e.g., 100, 300, 500, 600) having a display device. Some operations in method 1100 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
Method 1100 provides an intuitive way to display an avatar editing user interface, as described below. The method reduces the cognitive burden on the user to manage the avatar, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to modify the characteristics of an avatar using an avatar-editing user interface faster and more efficiently conserves power and increases the time between battery charges.
An electronic device (e.g., 600) displays (1102) an avatar editing user interface (e.g., 801) via a display apparatus (e.g., 601), including displaying (1104): an avatar (e.g., 805) having a plurality of avatar features (e.g., avatar hair, facial features (avatar lips, eyes, nose, etc.), accessories (e.g., earrings, sunglasses, hat)) including a first avatar feature (e.g., skin tone) having a first set of one or more colors and a second avatar feature (e.g., 827, 829) (e.g., facial hair, eyebrows, lips), wherein the second avatar feature has a set of one or more colors based on and different from the first set of one or more colors. Displaying the avatar editing user interface also includes displaying (1106) a plurality of color options (e.g., 812) (e.g., a plurality of affordances, each affordance optionally corresponding to a color) corresponding to the first avatar characteristic. The electronic device detects (1108) a selection (e.g., 820) of a respective color option (e.g., 812a) of the plurality of color options. The appearance of the avatar provides feedback to the user indicating the type of characteristics of the avatar that can be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting selection of a respective color option (e.g., 812a) of a plurality of color options for a first avatar feature (e.g., skin tone), the electronic device (e.g., 600) updates (1110) an appearance of the avatar (e.g., 805) in accordance with a determination that the respective color option corresponds to a second set of one or more colors. In some embodiments, updating the appearance of the avatar includes one or more of the following steps. One step includes changing (1112) the first avatar characteristic (e.g., the face of avatar 805) to a second set of one or more colors. Another step includes changing (1114) the second avatar characteristic (e.g., 827) to a set of one or more colors that is based on and different from the second set of one or more colors (e.g., the selected color of the first avatar feature provides color characteristics (e.g., undertone, hue, shading, saturation, midtone, highlight, warmth, etc.) for the modified color of the second avatar characteristic). Selecting a respective color option for the first avatar characteristic and changing the first avatar characteristic in accordance with the selection provides feedback to the user of the modified first avatar characteristic. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in response to detecting selection (e.g., 820) of a respective color option (e.g., 812a) of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to the third set of one or more colors, the electronic device (e.g., 600) changes (1118) the first avatar characteristic and the second avatar characteristic (e.g., 827) in a manner different than when the respective color option corresponds to the second set of one or more colors (e.g., adjusting the mid-tone of the first avatar characteristic and the second avatar characteristic when the selected color option corresponds to the second set of one or more colors) (e.g., changes the first avatar characteristic and the second avatar characteristic based on the selected color option corresponding to the third set of one or more colors but not the second set of one or more colors; e.g., when the selected color option corresponds to the third set of one or more colors, adjusting the highlighting of the first avatar characteristic and the second avatar characteristic based on the selected color option). In some embodiments, the relationship between the selected color option and the first avatar characteristic and the second avatar characteristic is different for the third set of colors than for the second set of one or more colors. For example, the selected color option corresponding to the second set of one or more colors is used to adjust the highlight of the first avatar characteristic and/or the second avatar characteristic, while the selected color option corresponding to the third set of one or more colors is used to adjust the midtone of the first avatar characteristic and/or the second avatar characteristic. Selecting a respective color option for the first avatar characteristic and changing the first avatar characteristic in accordance with the selection provides feedback to the user of the modified first avatar characteristic. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the electronic device (e.g., 600) displays, via the display device (e.g., 601), a second plurality of color options (e.g., 832) corresponding to a third avatar characteristic (e.g., 851) (e.g., hair color). The device detects selection (e.g., 852) of a first color option (e.g., 832a) of the second plurality of color options. In response to detecting selection of a first color option of the second plurality of color options of the third avatar characteristic, and in accordance with a determination that the first color option corresponds to the fourth set of one or more colors, the electronic device updates an appearance of the avatar (e.g., 805). Updating the avatar includes changing the third avatar characteristic (e.g., 851) to a fourth set of one or more colors and changing the second avatar characteristic (e.g., eyebrow color 827) to one or more colors based on and different from the fourth set of one or more colors. In some embodiments, the facial hair color (e.g., eyebrow color) of the avatar is affected by hair color and skin color. The appearance of the avatar provides feedback to the user indicating the type of characteristics of the avatar that can be customized. Selecting a first color option for the third avatar characteristic and changing the third avatar characteristic in accordance with the selection provides feedback to the user of the modified first avatar characteristic. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, an electronic device (e.g., 600) detects a selection (e.g., 861) of a second color option (e.g., 832b) of the second plurality of color options (e.g., 832). In response to detecting selection of the second color option of the second plurality of color options for the third avatar feature, and in accordance with a determination that the second color option corresponds to a fifth set of one or more colors, the electronic device changes the third avatar feature (e.g., skin tone) and the second avatar feature (e.g., 827) in a manner different than when the first color option was selected (e.g., changes the third avatar feature and the second avatar feature based on the second color option corresponding to the fifth set of one or more colors but not the fourth set of one or more colors). In some embodiments, the relationship between the selected color option (e.g., the second color option) and the third avatar characteristic and the second avatar characteristic is different for the fifth set of colors than for the fourth set of one or more colors. For example, the selected color option corresponding to the fourth set of one or more colors is used to adjust the highlight of the third avatar characteristic and/or the second avatar characteristic, while the selected color option corresponding to the fifth set of one or more colors is used to adjust the halftone of the third avatar characteristic and/or the second avatar characteristic. In some embodiments, the first avatar characteristic corresponds to an avatar hair color. In some embodiments, the second avatar characteristic corresponds to the avatar eyebrows. In some embodiments, the third avatar characteristic corresponds to an avatar skin color. A second color option is selected for the second avatar characteristic and the second avatar characteristic is changed in accordance with the selection to provide feedback to the user of the modified second avatar characteristic. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the third avatar feature (e.g., avatar skin color) and the second avatar feature (e.g., 827) (e.g., avatar eyebrow) are changed in a first manner that includes adjusting the first color attribute (e.g., color hue) based on a second set of one or more colors corresponding to the first avatar feature (e.g., avatar hair color). In some embodiments, the third avatar characteristic and the second avatar characteristic are changed in a second manner that includes adjusting a second color attribute (e.g., color brightness) different from the first color attribute based on a fourth set of one or more colors corresponding to the third avatar characteristic (e.g., the avatar eyebrows are darker than the avatar skin tones). The third avatar characteristic and the second avatar characteristic are adjusted according to the first color attribute corresponding to the first avatar characteristic. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in response to detecting selection (e.g., 820) of a respective color option (e.g., 812a) of the plurality of color options (e.g., 812), the electronic device (e.g., 600) displays (1116) a color adjustment control (e.g., 822) (e.g., a slider user interface) corresponding to the respective color option of the second set of one or more colors. In some embodiments, the color adjustment control is the color adjustment control described with respect to method 1000 and fig. 10A-10B. The color adjustment control provides a visual representation of the color options that may be selected. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the second avatar feature corresponds to an avatar lip (e.g., 828) having an avatar lip color corresponding to a set of one or more colors based on and different from the second set of one or more colors. In some implementations, the device detects an input (e.g., a drag gesture or a tap gesture) corresponding to a color adjustment control (e.g., 892, 893). In response to detecting the input, the electronic device (e.g., 600) modifies the avatar lip color of a first portion of the avatar lips (e.g., an outer portion (e.g., 828a)) and maintains the avatar lip color of a second portion of the avatar lips (e.g., an inner portion (e.g., 828 b)). The appearance of the second avatar characteristic provides feedback to the user indicating the type of characteristics of the avatar that may be customized. Selecting the color option for the avatar characteristic from the color adjustment control provides feedback to the user of the modified first avatar characteristic. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the first avatar characteristic corresponds to avatar skin. In some embodiments, the second avatar feature corresponds to an avatar lip (e.g., 828). In some embodiments, the set of one or more colors based on the second set of one or more colors includes the second set of one or more colors and a red value (e.g., the avatar lip is based on a skin tone and a red hue (e.g., a natural hue such as pink, or a hue representative of a lipstick hue)). The appearance of the avatar skin tone provides feedback to the user indicating the skin tone of the customizable avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
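As a rough illustration of a lip color derived from the skin tone plus a red value, consider the following hedged Swift sketch; the RGB blend, the function name, and the default red amount are assumptions for illustration, not the disclosed implementation.

```swift
// Hedged sketch: mixing the selected skin tone toward red to obtain a lip color.
struct RGBColor { var r: Double; var g: Double; var b: Double } // components in 0.0 ... 1.0

func lipColor(from skin: RGBColor, redAmount: Double = 0.35) -> RGBColor {
    // redAmount = 0 keeps the skin tone unchanged; redAmount = 1 yields pure red.
    RGBColor(r: skin.r + (1.0 - skin.r) * redAmount,
             g: skin.g * (1.0 - redAmount),
             b: skin.b * (1.0 - redAmount))
}
```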
It is noted that the details of the process described above with respect to method 1100 (e.g., fig. 11) also apply in a similar manner to the methods described below and above. For example, method 700 optionally includes one or more features of the various methods described above with reference to method 1100. The method 700 of editing an avatar may be incorporated into a method for navigating an avatar user interface. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented in accordance with the method 900 described above with reference to FIG. 9. As further examples, methods 1000, 1200, and 1400 optionally include one or more features of the various methods described above with reference to method 1100. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented according to the methods described below with reference to fig. 10-12. As another example, in some embodiments, the navigation user interface invokes a process for modifying the virtual avatar, which may be implemented according to the method described below with reference to fig. 14A-14B. For the sake of brevity, these details are not repeated.
Fig. 12A and 12B are flow diagrams illustrating methods for displaying an avatar editing user interface, according to some embodiments. The method 1200 is performed at an apparatus (e.g., 100, 300, 500, 600) having a display device. Some operations in method 1200 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, method 1200 provides an intuitive way to display an avatar editing user interface. The method reduces the cognitive burden on the user to manage the avatar, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling a user to modify the characteristics of an avatar using an avatar editing user interface faster and more efficiently conserves power and increases the time between battery charges.
An electronic device (e.g., 600) displays (1202), via a display device (e.g., 601), an avatar editing user interface (e.g., 801), including displaying (1204): an avatar (e.g., 805) having a plurality of avatar features, including avatar hair (e.g., 851) having a selected avatar hairstyle (e.g., 836b) (e.g., a particular style selected (e.g., by a user) for the avatar hair). The avatar editing user interface also includes (1206) a plurality of avatar accessory options (e.g., 8112) (e.g., affordances corresponding to various avatar accessories (e.g., glasses, hats, earrings, scarves, etc.)). The appearance of the avatar hair and avatar accessory options provides feedback to the user indicating a customizable hairstyle and avatar accessory. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
An electronic device (e.g., 600) detects (1208) selection of a respective accessory option (e.g., 8112b). In response to detecting selection of a respective one of the plurality of avatar accessory options (e.g., 8112), the electronic device changes (1210) an appearance of the avatar (e.g., 805) to include a representation of the respective accessory option (e.g., 8140), including, in accordance with a determination that the respective accessory option is a first accessory option (e.g., 8112b) (e.g., an eyewear accessory option): displaying (1212) a representation of the first accessory option (e.g., 8140) positioned on the avatar (e.g., displaying the selected glasses on the face of the avatar with the temples positioned on the sides of the avatar's head and the earpieces positioned behind the avatar's ears). The electronic device modifies (1214) the geometry of a first portion (e.g., 8145) of the avatar hair based on the location of the representation of the first accessory option on the avatar while maintaining the selected avatar hairstyle (e.g., a portion of the avatar hair located near the glasses is pushed to one side to accommodate the presence of the glasses on the face of the avatar, including positioning the temples and earpieces behind the ears of the avatar, while the remainder of the avatar hair remains unchanged to represent the selected avatar hairstyle). The appearance of the avatar accessory options provides feedback to the user indicating the accessories of the customizable avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the appearance of the representation of the respective accessory option (e.g., 8108d) is based on one or more characteristics (e.g., hair type, hairstyle, hair length, etc.) of the avatar hair (e.g., 851). In some embodiments, the size of the accessory option (e.g., a hat (e.g., 8170)) is determined based on the avatar hair. For example, if the avatar hair has a small hairstyle (e.g., 851-1) (e.g., 836c) (e.g., a buzz cut or bald hairstyle), the hat has a small hat line perimeter (e.g., hat line 8118 in fig. 8CD). Conversely, if the avatar hair has a large hairstyle (e.g., 851-2) (e.g., large curls), the hat has a large hat line perimeter (e.g., hat line 8118 in fig. 8CC). In some embodiments, the location of the accessory option (e.g., a hair band) is determined based on the avatar hair. For example, if the avatar hair has a short hairstyle, the avatar hair band is positioned near the avatar head. Conversely, if the avatar hair has a long hairstyle, the hair band may be positioned farther from the head, depending on the length of the hair. The appearance of the avatar hair provides feedback to the user indicating that the avatar's hairstyle can be customized. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
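The hair-dependent sizing and placement just described could be modeled along the lines of the following sketch. It is illustrative only; the property names (simulatedVolume, length) and the scaling constants are assumptions, not values from the disclosure.

```swift
// Illustrative sketch of sizing and placing accessories from avatar hair characteristics.
struct AvatarHairMetrics {
    var simulatedVolume: Double // e.g., ~0.1 for a buzz cut, ~1.0 for large curls
    var length: Double          // normalized hair length, 0.0 ... 1.0
}

/// The hat line perimeter grows with the simulated hair volume.
func hatLinePerimeter(base: Double, hair: AvatarHairMetrics) -> Double {
    base * (1.0 + 0.4 * hair.simulatedVolume)
}

/// A hair band sits farther from the head for longer hairstyles.
func hairBandOffsetFromHead(hair: AvatarHairMetrics) -> Double {
    0.02 + 0.08 * hair.length // normalized offset from the scalp
}
```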
According to some embodiments, in accordance with a determination that the respective accessory option is a second accessory option (e.g., an accessory option different from the first accessory option; e.g., a hat, 8170): the electronic device (e.g., 600) displays a representation of the second accessory option (e.g., 8170) located on the avatar (e.g., displays a hat on the avatar's head, with the hat line (e.g., 8118) of the hat positioned on the avatar's head based on the type of hat selected). The electronic device modifies a geometry of a second portion of the avatar hair (e.g., hair at 8118, 8118-1, or 8118-2) based on a location of the representation of the second accessory option on the avatar, wherein the geometry of the second portion is different from the modified geometry of the first portion of the avatar hair (e.g., 8145), while maintaining the selected avatar hairstyle (e.g., modifying the avatar hair at the hat line such that the avatar hair is tightened at the hat line, and the hair positioned below and/or above the hat line (depending on the selected hat and hairstyle) expands in response to the tightening of the hair at the hat line). Displaying the appearance of the avatar with the accessory options provides feedback to the user indicating customization of the avatar with the selected accessory. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, after displaying the representation of the first accessory option (e.g., 8140) positioned on the avatar, the electronic device (e.g., 600) detects (1216) a selection (e.g., 8159) of a second respective accessory option (e.g., 8108d) (e.g., an avatar hat). In response to detecting selection of the second respective one of the plurality of avatar accessory options, the electronic device changes (1218) the appearance of the avatar (e.g., 805) to include a representation of the second respective accessory option (e.g., 8160) and the representation of the respective accessory option (e.g., the avatar is updated to include the avatar hat and avatar glasses while maintaining the selected avatar hairstyle). The appearance of the avatar with the selected accessory provides feedback to the user indicating the accessory of the customizable avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, an electronic device (e.g., 600) displays a plurality of avatar hairstyle options (e.g., 836) (e.g., including a hairstyle option corresponding to the selected avatar hairstyle) via a display device (e.g., 601). The electronic device detects a selection (e.g., 872) of a second hairstyle option (e.g., 836c in fig. 8AU) (e.g., a hairstyle option different from the currently selected hairstyle option). In response to detecting selection of the second hairstyle option, the electronic device changes the appearance of the avatar (e.g., 805) from having the selected avatar hairstyle (e.g., 836f) to having the second hairstyle option. In some embodiments, this includes one or more of the following steps. In accordance with a determination that the respective accessory option is a first type of accessory option (e.g., avatar glasses (e.g., 8140)) displayed on the avatar adjacent to at least a portion of the avatar hair, displaying the avatar hair having the second hairstyle option, wherein the second hairstyle option is modified in a first manner (e.g., 8145) based on the representation of the respective accessory option (e.g., modifying a geometry of the first portion of the avatar hair based on the position of the avatar glasses while still maintaining the second avatar hairstyle). Displaying the appearance of the avatar with hairstyle options provides feedback to the user indicating customization of the avatar with the selected hairstyle. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In accordance with a determination that the respective accessory option (e.g., 8108d) is a second type of accessory option (e.g., 8108) (e.g., a hat displayed on an avatar adjacent to at least a portion of the avatar hair), the electronic device (e.g., 600) displays the avatar hair (e.g., 851) having a second hair style option (e.g., 836c) that is modified in a second manner (e.g., bulging at 8118-1 or 8118-2) based on the representation of the respective accessory option (e.g., modifying a geometry of a second portion of the avatar hair based on a position of the hat while still maintaining the second avatar hair style). Displaying the appearance of the avatar with the accessory options provides feedback to the user indicating customization of the avatar with the selected accessory. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in accordance with a determination that the respective accessory option is of the third type (e.g., an accessory option that does not affect the displayed avatar hair style; e.g., a nose ring), the electronic device (e.g., 600) displays the avatar hair (e.g., 851) having the second hair style option (e.g., 836c) without modification (e.g., without modification based on the respective accessory option). Displaying the appearance of the avatar with hair style and accessory options provides feedback to the user indicating customization of the avatar with the selected hair style. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, in response to detecting selection of a third hairstyle option (e.g., 851-2 in fig. 8CB) (e.g., a hairstyle option different from the currently selected hairstyle option), the electronic device (e.g., 600) changes the appearance of the avatar from having the selected avatar hairstyle (e.g., 851-1) to having the third hairstyle option, and changes the appearance (e.g., location, size, shape, etc.) of the representation of the respective accessory option (e.g., 8160) based on the third hairstyle option. Displaying the appearance of the avatar with hairstyle options provides feedback to the user indicating customization of the avatar with the selected hairstyle. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the respective accessory option is an avatar hat (e.g., 8160), and changing the appearance of the representation of the respective accessory option includes changing a size of the representation of the avatar hat based on a size (e.g., simulated hair volume) of the third hairstyle option (e.g., 851-2) (e.g., selecting a larger hairstyle increases the size of the hat to accommodate the larger hairstyle; e.g., selecting a smaller hairstyle decreases the size of the hat to accommodate the smaller hairstyle). In some embodiments, changing the appearance of the representation of the respective accessory option further comprises changing a size of a hat line (e.g., a portion of the hat that fits over the head to secure the hat to the head) of the representation of the avatar hat based on the size of the third hairstyle option (e.g., the perimeter of the hat line changes (increases or decreases) based on the size of the hairstyle option). In some embodiments, the hat line remains in the same position relative to the head such that the hat line continues to intersect the head at the same position, but has a different perimeter. Displaying the appearance of the avatar with hat options provides feedback to the user indicating customization of the avatar with the selected hat. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the first accessory option is an avatar hat (e.g., 8160), and displaying the representation of the first accessory option located on the avatar (e.g., 805) includes displaying the avatar hat located over a portion of the avatar hair (e.g., 851) (e.g., the avatar hat is displayed overlying the head of the avatar and the top of the adjacent hair). In some embodiments, modifying the geometry of the portion of the avatar hair includes displaying the avatar hair having a tightened appearance at a location (e.g., 8118-1 or 8118-2) adjacent to a hat line of the avatar hat (e.g., a portion of the hat that fits over the head to secure the hat to the head), and expanding as the avatar hair extends away from the location near the hat line of the avatar hat (e.g., modifying the avatar hair at the hat line such that the avatar hair is tightened at the hat line, and the hair positioned below and/or above the hat line (depending on the hat and hairstyle selected) expands in response to the tightening of the hair at the hat line). Displaying the appearance of the avatar with hat options provides feedback to the user indicating customization of the avatar with the selected hat. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
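The "tightened at the hat line, expanding away from it" profile described above can be sketched as a simple width-scale curve over distance from the hat line. The parameterization below is an assumption for illustration only, not the disclosed geometry modification.

```swift
// Assumed sketch: hair width scale as a function of distance from the hat line.
/// Returns a scale applied to the hair's simulated width at a normalized
/// vertical offset from the hat line. The hair is maximally cinched at the
/// hat line (offset 0) and relaxes back to full width 0.2 units away.
func hairWidthScale(distanceFromHatLine d: Double, cinch: Double = 0.5) -> Double {
    let relaxation = min(abs(d) / 0.2, 1.0)   // 0 at the hat line, 1 far from it
    return (1.0 - cinch) + cinch * relaxation // cinched near the hat line, 1.0 elsewhere
}
```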
According to some embodiments, in response to detecting a selection (e.g., 8168) of a second avatar hat option (e.g., 8108e) of the plurality of avatar accessory options (e.g., 8108), the electronic device (e.g., 600) replaces the representation of the avatar hat (e.g., 8160) with a representation of the second avatar hat (e.g., 8170) while maintaining the geometry of the portion of the avatar hair and the hat line (e.g., 8118), the avatar hair having a tightened appearance at a location adjacent to the hat line and expanding as the avatar hair extends from the location adjacent to the hat line (e.g., different avatar hats have the same hat line; e.g., selecting a different avatar hat replaces the currently selected avatar hat with the different avatar hat while maintaining the shape of the hat line and the avatar hair relative to the hat line). Displaying the appearance of the avatar with hat options provides feedback to the user indicating customization of the avatar with the selected hat. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the avatar hair (e.g., 851) moves according to the avatar hair's simulated physical reaction to avatar movement based on a physical model (e.g., as shown in fig. 8BY) (e.g., an inertial model, a gravity model, a force transfer model, a friction model). When the first accessory option is an avatar hat, the simulated physical reaction of the avatar hair to the movement of the avatar based on the physical model changes (e.g., the movement of the avatar hair changes when the hat is worn). In some embodiments, the avatar hair moves with movement of the avatar head based on the physical model when the avatar is not wearing a hat. In some embodiments, the movement of the avatar hair relative to the head of the avatar when the avatar is wearing the hat varies based on the position of the hat on the head of the avatar. In some embodiments, the physical model specifies a magnitude and direction of movement of the avatar feature based on a magnitude and direction of an input (e.g., 8XX) (e.g., a gesture on the avatar to rotate or adjust the magnification of the avatar, or movement of the face or a portion of the face) and predefined attributes of the virtual avatar feature, such as one or more of a simulated mass, a simulated elasticity, a simulated coefficient of friction, or other simulated physical attributes. In some embodiments, the simulated physical response of the avatar hair changes as the attachment point of the hair moves from the location where the hair is attached to the head to the hat line.
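A minimal spring-damper sketch of the kind of physics model described above is shown below. The update rule, parameter names, and constants are assumptions for illustration rather than the disclosed model; it only demonstrates how simulated mass, elasticity, and friction can shape the magnitude and direction of motion.

```swift
// Hedged sketch of a simulated physics model for avatar hair: a damped spring
// driven by head motion. Parameter names and values are assumptions.
struct SimulatedHairStrand {
    var mass = 1.0
    var elasticity = 8.0   // spring constant pulling the hair back to rest
    var friction = 2.5     // damping coefficient
    var displacement = 0.0 // offset of the strand from its rest position
    var velocity = 0.0

    /// Advance the simulation one step; `headAcceleration` is signed, so its
    /// magnitude and direction both influence the resulting hair movement.
    mutating func step(headAcceleration: Double, dt: Double) {
        let springForce = -elasticity * displacement
        let dampingForce = -friction * velocity
        let inertialForce = -mass * headAcceleration // hair lags behind the head
        let acceleration = (springForce + dampingForce + inertialForce) / mass
        velocity += acceleration * dt
        displacement += velocity * dt
    }
}
```

Under this kind of model, wearing a hat could be represented by moving the strand's attachment point to the hat line and, for example, stiffening the elasticity near that point, changing the simulated reaction without changing the hairstyle itself.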
According to some embodiments, the first accessory option is avatar glasses (e.g., 8140), and modifying the geometry of the portion of the avatar hair (e.g., 851) includes displaying a portion (e.g., 8145) of the avatar hair positioned to avoid obscuring at least a portion of the avatar glasses (e.g., hair on the side of the avatar head, or hair over the avatar ears, moving to the back or side, or otherwise positioned behind the temples of the glasses). In some embodiments, the first accessory option is avatar glasses and displaying the representation of the first accessory option located on the avatar includes: displaying a representation of a reflection (e.g., 8150) on a lens portion (e.g., 8140-2) of the glasses (e.g., the representation of the reflection is overlaid on the representation of the glasses) (e.g., a location of the reflection on the glasses is determined based on the relative locations of the displayed glasses and a simulated light source, optionally determined based on a light source detected in a field of view of the camera), and displaying, on at least a portion of the avatar, a representation of a shadow cast by the representation of the glasses (e.g., the shadow cast by the glasses is overlaid on the representation of the avatar with an opacity of less than 100%) (e.g., on a portion of the avatar determined based on the relative locations of the displayed avatar and the simulated light source, optionally determined based on a light source detected in the field of view of the camera). Displaying the appearance of the avatar with the glasses option provides feedback to the user indicating customization of the avatar with the selected glasses. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, displaying the representation of the first accessory option located on the avatar comprises: displaying a representation of one or more shadows (e.g., 8142, 8147, 8172) projected (e.g., projected on the avatar) by a first accessory option (e.g., an avatar hat (e.g., 8170) or avatar glasses (e.g., 8140)) or avatar hair (e.g., 851) (e.g., the representation of the shadows projected by the hair, glasses, and/or hat is overlaid on the representation of the avatar with less than 100% opacity) (e.g., a portion of the avatar determined based on the relative positions of the displayed avatar and simulated light sources, optionally determined based on light sources detected in the field of view of the camera). Displaying the appearance of the avatar with the shadow provides feedback to the user indicating a more realistic representation of the avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
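The shadow placement described above, in which the shadow's position depends on the relative locations of the accessory (or hair) and a simulated light source, can be approximated by projecting the accessory along the light direction onto the avatar surface. The following is an assumed, simplified planar version for illustration only.

```swift
// Simplified sketch: project an accessory point along the ray from a simulated
// light source onto the avatar surface plane (z = 0) to anchor its shadow.
struct Point3D { var x: Double; var y: Double; var z: Double }

func shadowAnchor(accessory: Point3D, light: Point3D) -> (x: Double, y: Double)? {
    let dz = accessory.z - light.z
    guard dz != 0 else { return nil }          // light lies in the surface plane: no projection
    let t = -accessory.z / dz                  // parameter where the ray crosses z = 0
    return (accessory.x + t * (accessory.x - light.x),
            accessory.y + t * (accessory.y - light.y))
}
```

The resulting anchor could then be used to draw the shadow overlay at reduced opacity, as described above.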
According to some embodiments, the first accessory option is an avatar earring (e.g., 8125). The avatar earrings move according to a physical model (e.g., an inertial model, a gravity model, a force transfer model, a friction model) (in some embodiments, the avatar moves based on detecting changes in the face within the field of view of one or more cameras of the electronic device). Displaying the appearance of the avatar with earrings provides feedback to the user indicating customization of the avatar with the selected earring. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
It is noted that the details of the process described above with respect to method 1200 (e.g., fig. 12) also apply in a similar manner to the methods described below and above. For example, method 700 optionally includes one or more features of the various methods described above with reference to method 1200. The method 700 of editing an avatar may be incorporated into a method for navigating an avatar user interface. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented in accordance with the method 900 described above with reference to FIG. 9. As further examples, methods 1000, 1100, and 1400 optionally include one or more characteristics of the various methods described above with reference to method 1200. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented according to the methods described below with reference to fig. 10-11. As another example, in some embodiments, the navigation user interface invokes a process for modifying the virtual avatar, which may be implemented according to the method described below with reference to fig. 14A-14B. For the sake of brevity, these details are not repeated.
Fig. 13A-13O illustrate exemplary user interfaces for modifying an avatar in an avatar navigation user interface. The user interfaces in these figures are used to illustrate the processes described below, including the process in FIG. 14.
In fig. 13A, the device 600 displays an instant message user interface 1303 similar to the instant message user interface 603 in fig. 6A. The device 600 detects the input 1302 on the application taskbar affordance 1310 and displays a condensed avatar selection interface 1315 in FIG. 13B (similar to the condensed avatar selection interface 668 in FIG. 6L). The condensed avatar selection interface includes a scrollable list of avatars 1320 (similar to the scrollable list of avatars 675 in FIG. 6L), including customizable female avatars 1321, monkey avatars 1322, and robot avatars 1323.
As shown in fig. 13B-13O, the device 600 modifies the avatar (e.g., monkey avatar 1322) displayed in the condensed avatar selection interface 1315 in response to detecting the facial change. For reference, fig. 13B-13O include representations of a face 1325 (e.g., a user's face) detected in the field of view of a camera (e.g., 602). Fig. 13B to 13O show modifications of various displayed avatars in response to a change in the detected face 1325. In some embodiments, the view of face 1325 in fig. 13B-13O is shown from the perspective of a device positioned facing face 1325. Thus, the corresponding changes to the displayed avatar are shown in fig. 13B-13O as mirror images relative to the movement of the face 1325.
In fig. 13B, device 600 detects a forward-facing face 1325 with the lower jaw 1325-2 and mouth 1325-1 closed. In response, device 600 modifies the displayed avatar, i.e., monkey avatar 1322, to have the same facial expression, with jaw 1322-2 and mouth 1322-1 closed, thereby matching the facial expression of mouth 1325-1.
In fig. 13C, the device 600 detects that the lower jaw 1325-2 and mouth 1325-1 are moving toward an open position and modifies the lower jaw 1322-2 and mouth 1322-1 of the monkey avatar 1322 to a slightly open position to match the movement of the lower jaw 1325-2 and mouth 1325-1. The tongue 1325-3 is not extended. Thus, device 600 does not show the monkey avatar's tongue 1322-3 extending from mouth 1322-1; instead, the tongue remains inside mouth 1322-1.
In some embodiments, device 600 displays that the avatar tongue extends beyond the avatar mouth in response to detecting that the user's tongue extends from the user's mouth. For example, in FIG. 13D, the lower jaw 1325-2 is slightly open and the tongue 1325-3 extends from the mouth 1325-1. Accordingly, apparatus 600 modifies the monkey avatar 1322 to extend tongue 1322-3 from mouth 1322-1 while lower jaw 1322-2 remains in a slightly open position.
In some embodiments, the apparatus 600 displays a transition from no tongue extension (e.g., see tongue 1322-3 in fig. 13C) to tongue extension (e.g., see tongue 1322-3 in fig. 13D) as an animation of the tongue moving from within the avatar mouth (e.g., mouth 1322-1) to an extended posture. In some embodiments, the animation includes the tongue of the displayed avatar bending over the teeth of the avatar as the tongue moves from the mouth to the extended position. For example, in fig. 13D, the avatar tongue 1322-3 is slightly curved over the bottom teeth of the avatar mouth. In some embodiments, the device 600 displays the tongue moving back into the mouth of the avatar by reversing the tongue extension animation (including optionally displaying a reversal of the bending motion of the avatar tongue).
In some embodiments, the device 600 displays movement of the avatar tongue based on detected movement of the user's facial features other than the user's tongue (e.g., tilt or rotation of the user's head, or up/down movement and/or left-right movement of the lower jaw 1325-2). For example, fig. 13E illustrates movement of the monkey tongue 1322-3 in response to a detected change in position of the user's lower jaw 1325-2. When the device 600 detects that the user's lower jaw 1325-2 is moving downward and the user's mouth 1325-1 is enlarged, the device 600 enlarges the monkey mouth 1322-1 and lowers the monkey lower jaw 1322-2. When the monkey's lower jaw 1322-2 is lowered, the apparatus 600 shows the monkey's tongue 1322-3 moving downward with the lower jaw 1322-2 and farther away from the mouth 1322-1. The device 600 may also modify the position of the tongue 1322-3 based on other movements of the user's lower jaw 1325-2. For example, if the user moves his lower jaw left and right, the device 600 moves the avatar's lower jaw 1322-2 and tongue 1322-3 in accordance with the left and right movement of the user's lower jaw 1325-2. Similarly, if the user moves his lower jaw upward (e.g., back to the position shown in fig. 13D, or tilted upward as shown in fig. 13H), the device 600 displays the avatar's lower jaw 1322-2 and tongue 1322-3 moving accordingly (e.g., back to the position shown in fig. 13D, or tilted upward as shown in fig. 13H).
FIG. 13F illustrates another example of the device 600 modifying the movement of the tongue 1322-3 based on movement of a user's facial features other than the user's tongue. In fig. 13F, the user tilts the head to the side. In response to detecting the tilt of the user's head, the device 600 modifies the monkey avatar 1322 by tilting the head of the monkey. As the monkey's head tilts, the position of the tongue 1322-3 changes based on the tilt of the monkey's head (e.g., both magnitude and direction) and the modeled gravity of the tongue 1322-3, which causes the tongue 1322-3 to hang downward but also to tilt slightly as the head and lower jaw 1322-2 move.
In some embodiments, the device 600 modifies the movement of the avatar tongue based on a physical model (e.g., modeled gravity, inertia, etc.) applied to the avatar. As the tongue of the avatar extends away from the mouth of the avatar, the tongue's response to the physical model increases based on the amount of the tongue extending from the avatar's mouth. For example, in fig. 13E, the monkey tongue 1322-3 has a greater curvature than that shown in fig. 13D. This is because the apparatus 600 shows the tongue 1322-3 extending farther from the mouth 1322-1 in fig. 13E (as compared to that shown in fig. 13D), and the modeled gravitational force exerted on the tongue 1322-3 acts to cause the tongue to hang down from the mouth (resulting in an increased curvature of the tongue above the monkey's teeth).
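The extension-dependent response to modeled gravity could be expressed as a bend angle that grows with how far the tongue extends. The following hedged sketch uses assumed constants and a quadratic growth curve purely for illustration.

```swift
// Assumed sketch: the farther the tongue extends, the more modeled gravity bends it.
/// Returns a downward bend angle (radians) for the tongue tip, given how far
/// the tongue extends from the mouth (0.0 = fully inside, 1.0 = fully out).
func tongueBendAngle(extensionAmount: Double,
                     gravityGain: Double = 0.9,
                     maxBend: Double = Double.pi / 3) -> Double {
    let clamped = max(0.0, min(extensionAmount, 1.0))
    // The bend grows superlinearly, so a fully extended tongue droops
    // noticeably more than a slightly extended one.
    return min(maxBend, gravityGain * clamped * clamped * maxBend)
}
```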
In some embodiments, the device 600 does not modify the avatar to display a particular facial expression (or reduces movement of the avatar features (e.g., lips, mouth, etc.) that form the particular facial expression) when the tongue of the avatar is extended. This is to avoid modifying the avatar in a manner that interferes with (e.g., bumps or collides with) the tongue of the displayed avatar. For example, the device 600 may forgo modifying the lips of the avatar to form a lip pucker, closing the mouth of the avatar, extending the lower lip (e.g., puckering the mouth), or extending the lips and moving the mouth to a closed position (e.g., funneling the mouth).
In FIG. 13F, the device 600 detects an input 1327 (e.g., a horizontal gesture (e.g., a swipe or drag) or a tap gesture on the robotic avatar 1323) and scrolls the list of avatars 1320 to display the robotic avatar 1323 in the center of the condensed avatar selection interface 1315, as shown in FIG. 13G.
When the robot avatar 1323 is centered on the condensed avatar selection interface 1315, the device 600 begins to modify the robot avatar based on the detected face 1325. As shown in fig. 13G, the user's head is no longer tilted, but the user's lower jaw 1325-2 and mouth 1325-1 are opened and the tongue 1325-3 is extended. The device 600 modifies the robot avatar 1323 to match the pose of the face 1325 by opening the robot mouth 1323-1 and extending the robot tongue 1323-3. In some embodiments, the robot avatar does not include a lower jaw distinguishable from the rest of the robot head, but may indicate movement of the robot lower jaw by increasing the vertical opening of the robot mouth 1323-1.
As shown in fig. 13G, the robot tongue 1323-3 includes a hinged connection 1323-4 that divides the robot tongue 1323-3 into a base portion 1323-3a (e.g., the proximal end of the tongue 1323-3) that is connected to the robot mouth 1323-1 and a tip portion 1323-3b (e.g., the distal end of the tongue 1323-3) that hangs freely and swings from the hinged connection 1323-4. In some embodiments, tip portion 1323-3b oscillates as the robot mouth 1323-1 and robot head move.
For example, in FIG. 13H, the device 600 detects that the user's head is leaning backward and the tongue 1325-3 is extended. The device 600 modifies the robot avatar 1323 by tilting the robot's head back, opening the mouth 1323-1, and extending the tongue 1323-3. As the robot head tilts back, the tip portion 1323-3b swings toward the bottom of the robot head (e.g., toward the chin area of the robot) as the base portion 1323-3a moves along with the robot mouth 1323-1. When the user tilts his head back to the neutral position in fig. 13I, the device 600 tilts the robot avatar 1323 back to the neutral position, and the tip portion 1323-3b of the avatar tongue 1323-3 rocks back and forth from the hinged connection 1323-4 in response to the movement of the robot head, mouth 1323-1, and base portion 1323-3a.
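The hinged tip portion behaves much like a damped pendulum attached to the base portion. The following Swift sketch is an assumed approximation of that behavior, not the robot avatar's actual rig; the constants and the integration scheme are illustrative.

```swift
import Foundation // for sin

// Assumed pendulum-style sketch of the hinged tip portion of the robot tongue.
struct HingedTongueTip {
    var angle = 0.0           // tip angle relative to hanging straight down (radians)
    var angularVelocity = 0.0
    let gravity = 9.8
    let length = 0.05         // simulated tip length
    let damping = 0.8

    /// Swing the tip as the base portion (driven by the robot mouth and head)
    /// tilts to `baseTilt`.
    mutating func step(baseTilt: Double, dt: Double) {
        let restoring = -(gravity / length) * sin(angle + baseTilt)
        angularVelocity += (restoring - damping * angularVelocity) * dt
        angle += angularVelocity * dt
    }
}
```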
In FIG. 13I, the device 600 detects an input 1329 (e.g., a horizontal gesture (e.g., a swipe or drag) or a tap gesture on the alien avatar 1324) and scrolls the list of avatars 1320 to display the alien avatar 1324 in the center of the condensed avatar selection interface 1315, as shown in FIG. 13J.
In some embodiments, device 600 displays an avatar tongue with a visual effect determined based on the particular avatar. For example, a robot avatar tongue has a hinged connection, a unicorn avatar has a flashing tongue, and an alien avatar has an iridescent effect. In some embodiments, the visual effect changes based on the displayed position of the avatar tongue. For example, figs. 13J-13L illustrate a varying iridescence effect 1324-4 of the alien tongue 1324-3. As the alien's tongue 1324-3 moves, the iridescence effect 1324-4 of the tongue changes (as indicated by the changing position of the iridescence effect 1324-4 on the tongue 1324-3). Fig. 13J shows that the alien tongue 1324-3 has an iridescence effect 1324-4 when the user's face 1325 is facing forward and the tongue 1325-3 is extended. The alien mandible 1324-2 and mouth 1324-1 are open, and tongue 1324-3 is extended and has the iridescence effect 1324-4 at the base of the tongue. In fig. 13K, the face 1325 rotates while the tongue 1325-3 is extended, and the device 600 rotates the alien avatar 1324 and changes the iridescence of the tongue 1324-3 (represented by the changed position of the iridescence effect 1324-4 on the tongue 1324-3). In fig. 13L, the user slightly closes the mandible 1325-2, which lifts the user's tongue 1325-3. The device 600 modifies the alien avatar 1324 by slightly closing the lower jaw 1324-2, lifting the tongue 1324-3, and changing the iridescence of the tongue 1324-3 (as represented by the changing position of the iridescence effect 1324-4 on the tongue 1324-3).
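One way to obtain a position-dependent iridescence like the one described for the alien tongue is to map the tongue's orientation to a hue. This sketch is purely illustrative; the mapping, parameter names, and the sinusoidal form are assumptions.

```swift
import Foundation // for sin

// Assumed sketch: a hue that sweeps as the tongue's orientation changes,
// producing an iridescence-like effect that moves with the tongue.
/// `tongueAngle` is the tongue's facing angle in radians; `phase` lets the
/// effect vary along the length of the tongue (e.g., stronger at the base).
func iridescentHue(tongueAngle: Double, phase: Double) -> Double {
    0.5 + 0.5 * sin(tongueAngle + phase) // hue in 0.0 ... 1.0
}
```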
In some embodiments, device 600 displays avatar tongues having different shapes according to the position of the avatar's mouth (which is determined based on the detected position of the user's mouth). For example, when the user's mouth 1325-1 is open, the device 600 displays an avatar tongue having a flat shape, as shown by monkey tongue 1322-3 in fig. 13E and alien tongue 1324-3 in fig. 13J. When the user's mouth 1325-1 is closed around the tongue 1325-3, the device 600 displays an avatar tongue having a conical or "strawberry" shape. For example, in figs. 13M and 13N, when the user retracts his tongue into the mouth 1325-1, the device 600 detects that the user's mouth 1325-1 is closing around the tongue 1325-3. In response, in figs. 13M and 13N, the device 600 shows the alien mouth 1324-1 closed around the tongue 1324-3, which has a conical shape as it retracts into the alien mouth 1324-1. In fig. 13O, the device 600 detects that the user's tongue 1325-3 is no longer extended and the mouth 1325-1 and mandible 1325-2 are closed. The device 600 shows the alien avatar 1324 without a tongue, and the mouth 1324-1 and mandible 1324-2 are closed.
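The flat-versus-conical shape selection described above reduces to a threshold on how open the tracked mouth is. A minimal sketch follows; the threshold value and names are assumptions for illustration.

```swift
// Assumed sketch: choose the displayed tongue shape from mouth openness.
enum TongueShape { case flat, conical }

/// `mouthOpenness` is normalized: 0.0 = closed around the tongue, 1.0 = wide open.
/// The 0.3 threshold is an assumption for illustration only.
func tongueShape(mouthOpenness: Double) -> TongueShape {
    mouthOpenness < 0.3 ? .conical : .flat
}
```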
Fig. 14A and 14B are flow diagrams illustrating methods for modifying an avatar in an avatar navigation user interface, according to some embodiments. The method 1400 is performed at an apparatus (e.g., 100, 300, 500, 600) having a display device. Some operations in method 1400 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 1400 provides an intuitive way for modifying an avatar in an avatar navigation user interface. The method reduces the cognitive burden on the user to modify the avatar, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to modify the characteristics of an avatar using an avatar navigation user interface faster and more efficiently conserves power and increases the time between battery charges.
An electronic device (e.g., 600) displays (1402), via a display device (e.g., 601), a virtual avatar (e.g., 1322, 1323, 1324) having a plurality of avatar features (e.g., 1322-1, 1322-2, 1322-3) (e.g., facial features (e.g., eyes, mouth portions) or macroscopic features (e.g., head, neck)) that change appearance in response to changes in the pose (e.g., orientation, translation) of a face (e.g., 1325) (e.g., changes in facial expression) detected in a field of view of one or more cameras (e.g., 602). While the face is detected in the field of view of the one or more cameras, the face comprising a plurality of detected facial features including a first facial feature (e.g., 1325-2) other than the user's tongue (e.g., the lower jaw), the electronic device detects (1404) movement of the first facial feature. Displaying the appearance of the avatar provides customized feedback to the user indicating the specific characteristics of the avatar. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In response to detecting (1406) movement of the first facial feature (e.g., 1325-2), the device (e.g., 600) performs one or more of the following steps. In accordance with (1408) a determination that the user's tongue (e.g., 1325-3) satisfies a respective criterion (e.g., a tongue display criterion), wherein the respective criterion includes a requirement that the user's tongue be visible in order to satisfy the respective criterion (e.g., as shown in fig. 13D) (e.g., the user's tongue is visible and is recognized as being in a pose extending from the user's mouth), the electronic device (e.g., 600) displays an avatar tongue (e.g., 1322-3) (e.g., the avatar tongue is not continuously displayed (e.g., it is variably displayed) as part of the displayed virtual avatar). In some embodiments, the avatar tongue is displayed in accordance with a determination that a set of avatar tongue display criteria is met (e.g., a set of criteria including one or more of: detecting that a face detected in the field of view of the camera includes a visible tongue, and detecting that the face includes a mouth that is open a threshold distance (e.g., a mouth with a lower jaw in a sufficiently downward position)). Displaying the appearance of the avatar with movement of the avatar tongue provides feedback to the user indicating movement of the avatar tongue in accordance with the user's movement. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
The electronic device (e.g., 600) modifies (1408) the position of the avatar tongue (e.g., 1322-3) based on the movement (e.g., direction and magnitude) of the first facial feature (e.g., 1325-2) (e.g., determines the position of the avatar tongue based on the detected position of the user's lower jaw (e.g., within a range from fully open to fully closed)). In some implementations, in response to detecting the movement of the first facial feature, an avatar feature (e.g., 1322-2) corresponding to the first facial feature (e.g., an avatar feature other than the avatar tongue) is also modified/moved based on the detected movement of the first facial feature. In accordance with a determination that the user's tongue does not meet the respective criteria, the electronic device forgoes (1414) displaying the avatar tongue.
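A hedged sketch of the respective (tongue display) criteria and the jaw-driven placement just described follows. The structure, names, and thresholds are assumptions rather than the claimed implementation.

```swift
// Assumed sketch of the tongue display criteria and jaw-driven tongue placement.
struct DetectedFace {
    var tongueVisible: Bool
    var jawOpenAmount: Double // 0.0 = closed, 1.0 = fully open
}

/// The respective criteria: the tongue must be visible (and, in some
/// embodiments, the jaw must be sufficiently open).
func shouldDisplayAvatarTongue(for face: DetectedFace,
                               jawThreshold: Double = 0.25) -> Bool {
    face.tongueVisible && face.jawOpenAmount >= jawThreshold
}

/// While displayed, the tongue's vertical position follows the lower jaw.
func tongueVerticalOffset(for face: DetectedFace, maxDrop: Double = 0.1) -> Double {
    face.jawOpenAmount * maxDrop
}
```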
According to some embodiments, the avatar tongue (e.g., 1323-3) includes a first portion (e.g., 1323-3a) and a second portion (e.g., 1323-3b), and the second portion is connected to the first portion by a connector (e.g., 1323-4) (e.g., a hinge) that is more flexible than the first portion or the second portion (e.g., the avatar tongue has two or more segments joined at one or more hinges). In some embodiments, when the virtual avatar is a robotic avatar (e.g., 1323), the avatar tongue is formed of hinged segments. In some embodiments, the first portion and the second portion are rigid. In some embodiments, the first portion is free to swing when the avatar tongue is extended and moves according to the movement of the user's head (e.g., as shown in figs. 13G-13I).
According to some embodiments, the avatar tongue (e.g., 1323-3) has a visual effect (e.g., 1324-4) (e.g., flashing, iridescence) that changes in response to modifying the position of the avatar tongue. In some embodiments, the virtual avatar is a unicorn, and the avatar tongue includes a flashing effect that flashes as the avatar tongue moves. In some embodiments, the virtual avatar is an alien (e.g., 1324), and the avatar tongue includes an iridescent effect that changes as the avatar tongue moves. Displaying the visual effect of the avatar tongue's movement provides feedback to the user indicating movement of the avatar tongue in accordance with the user's movement. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some implementations, modifying the position of the avatar tongue (e.g., 1322-3) based on the movement of the first facial feature (e.g., 1325-2) includes one or more of the following steps. In accordance with a determination that the first facial feature is moving in a first direction (e.g., the user's mandible is moving left and/or upward), the electronic device (e.g., 600) modifies (1410) the position of the avatar tongue to be in the first direction (e.g., the avatar tongue is moving left and/or upward). In accordance with a determination that the first facial feature is moving in a second direction different from the first direction (e.g., the user's mandible is moving right and/or downward), the electronic device modifies (1412) the position of the avatar tongue to be in the second direction (e.g., the avatar tongue is moving right and/or downward). The display of the avatar tongue provides feedback to the user indicating movement of the avatar tongue based on the movement of the first facial feature. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some implementations, modifying the position of the avatar tongue (e.g., 1322-3) based on the movement of the first facial feature (e.g., 1325-2) includes one or more of the following steps. In accordance with a determination that the first facial feature moved by a first amount (e.g., the user's lower jaw moved 30 degrees to the right from a forward-facing position), the electronic device (e.g., 600) modifies the position of the avatar tongue by an amount proportional to the first amount (e.g., the avatar tongue moves 30 degrees to the right from a forward-facing position). In accordance with a determination that the first facial feature moved by a second amount different from the first amount (e.g., the user's lower jaw moved 45 degrees to the right from the forward-facing position), the electronic device modifies the position of the avatar tongue by an amount proportional to the second amount (e.g., the avatar tongue moves 45 degrees to the right from the forward-facing position). The display of the avatar tongue provides feedback to the user indicating movement of the avatar tongue based on the movement of the first facial feature. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
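The direction- and magnitude-proportional mapping described above can be written as a single gain with a clamp. This is a sketch under assumed units (degrees) and an assumed deflection limit, not the disclosed mapping.

```swift
// Assumed sketch: the tongue rotates in the same direction as the jaw, by a
// proportional amount, clamped to a maximum deflection.
/// `jaw` is the jaw rotation in degrees, signed (negative = left, positive = right).
func tongueRotationDegrees(forJawRotationDegrees jaw: Double,
                           gain: Double = 1.0,
                           limit: Double = 60.0) -> Double {
    max(-limit, min(jaw * gain, limit))
}
```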
According to some embodiments, the avatar tongue (e.g., 1322-3) moves according to a physical model (e.g., inertial model, gravity model, force transfer model, friction model). In some embodiments, the degree of movement of the avatar tongue (e.g., according to a physical model based on movement of the head and/or facial features) is increased (e.g., or decreased) based on the amount of tongue that extends out of the mouth of the virtual avatar (e.g., 1322). The physical model allows realistic display of the avatar tongue according to the movement of the object. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, when displaying the avatar tongue (e.g., 1324-3), the electronic device (e.g., 600) detects (1416) that the user's tongue (e.g., 1325-3) no longer satisfies the respective criteria (e.g., the tongue display criteria). In response to detecting that the user's tongue no longer satisfies the respective criteria, the electronic device stops (1418) displaying the avatar tongue (e.g., fig. 13O). In some implementations, displaying the (e.g., not previously displayed) avatar tongue includes displaying an animation of the avatar tongue extending from a mouth (e.g., 1322-1) of the virtual avatar (e.g., 1322). In some embodiments, ceasing to display the avatar tongue includes displaying an animation of the avatar tongue retracting into the mouth of the virtual avatar. In some embodiments, at least one of the animation of the tongue extending from the mouth of the virtual avatar or the animation of the tongue retracting into the mouth of the virtual avatar includes displaying a bending movement of the avatar tongue over one or more teeth of the virtual avatar (e.g., a set of lower teeth in the lower jaw of the virtual avatar) (e.g., the avatar tongue is shown moving such that it bends or assumes an arc over the teeth of the avatar, rather than extending/retracting in a linear motion). Stopping the display of the avatar tongue by retracting the tongue into the avatar's mouth provides feedback to the user indicating that the avatar no longer has the displayed tongue feature. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the electronic device (e.g., 600) detects that the second facial feature (e.g., the user's mouth; 1325-1) is moved to the first position (e.g., FIG. 13O) (e.g., the closed position of the user's mouth). In response to detecting the movement of the second facial feature to the first position, the device performs one or more of the following steps. In accordance with a determination that the avatar tongue (e.g., 1324-3) is not displayed, the electronic device modifies the first avatar feature (e.g., avatar mouth 1324-1) (e.g., an avatar feature, other than the avatar's lower jaw, that affects the avatar tongue position; e.g., the avatar's mouth, the avatar's lower lip, etc.) based on movement of the second facial feature (e.g., modifies the avatar's mouth to have a closed position corresponding to the closed position of the user's mouth). In accordance with a determination that the avatar tongue is displayed based on the respective criteria being met, movement of the first avatar feature based on movement of the second facial feature is suppressed (e.g., eliminated or reduced in magnitude) (e.g., movement of the avatar's mouth is suppressed in response to detecting a closed position of the user's mouth while the avatar tongue is displayed). In some embodiments, when the avatar tongue is displayed, certain portions of the avatar are not modified (or are modified by a limited amount) in response to detecting changes in the user's face. In some embodiments, when the avatar tongue is displayed, the avatar is not modified to display certain poses (or certain poses are restricted) in response to detecting changes in the user's face. The display of the avatar tongue provides feedback to the user indicating movement of the avatar tongue based on the movement of the second facial feature. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the second facial feature is the mouth of the user (e.g., 1325-1), the first position of the second facial feature corresponds to a position in which the mouth of the user is closed (e.g., while the lower jaw is open), and the first avatar feature is the avatar mouth. In some embodiments, movement of the mouth of the avatar is suppressed compared to movement of the mouth of the face detected in the field of view of one or more cameras (e.g., 602) of the device to avoid the mouth closing completely and thereby colliding or impacting with the extended avatar tongue. The display of the avatar tongue provides feedback to the user indicating movement of the avatar tongue in response to the user closing the mouth. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
According to some embodiments, the second facial feature is the user's lower lip, the first position of the second facial feature corresponds to a position in which the user's lower lip is extended (e.g., the user's lower lip is extended outward in a puckered pose), and the first avatar feature is the lower lip of the avatar. In some embodiments, movement of the lower lip of the avatar is suppressed compared to movement of the lower lip of the face detected in the field of view of one or more cameras (e.g., 602) of the device to avoid the lower lip colliding with or striking the extended avatar tongue.
According to some embodiments, the second facial feature is the user's upper and lower lips, the first position of the second facial feature corresponds to a position in which the user's upper and lower lips are puckered, and the first avatar feature is the avatar's upper and lower lips. In some embodiments, movement of the avatar's upper and lower lips is suppressed, compared to movement of the upper and lower lips of the face detected in the field of view of one or more cameras (e.g., 602) of the device, to prevent the lips from colliding with or impinging on the extended avatar tongue.
According to some embodiments, the second facial feature is the user's mouth (e.g., 1325-1), the first position of the second facial feature corresponds to a position in which the user's mouth is closing (e.g., the mouth moves from an open position toward a closed position, to an intermediate position in which the user's lips are puckered), and the first avatar feature is an avatar mouth. In some embodiments, movement of the avatar's mouth is suppressed, compared to movement of the mouth of the face detected in the field of view of one or more cameras (e.g., 602) of the device, to prevent the mouth from closing and thereby colliding with the extended avatar tongue.
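The suppression behavior described in the preceding paragraphs can be illustrated with a short sketch. This is not the claimed implementation: the pose model, the 0-to-1 closure scale, and the damping and clamping constants below are assumptions introduced only to show how movement of an avatar feature driven by the tracked face can be attenuated and capped while the avatar tongue is displayed.

    // Hypothetical tracked face pose and avatar pose, using an assumed 0...1 scale
    // (0 = fully open / relaxed, 1 = fully closed / fully extended).
    struct FacePose {
        var mouthClosure: Double
        var lowerLipExtension: Double
    }

    struct AvatarPose {
        var mouthClosure: Double
        var lowerLipExtension: Double
        var tongueVisible: Bool
    }

    // Maps a tracked face pose onto the avatar. While the tongue is displayed, the
    // tracked movement is scaled down and capped so the mouth and lower lip never
    // close completely onto the extended tongue; otherwise the mapping is one-to-one.
    func updateAvatar(_ avatar: AvatarPose, from face: FacePose) -> AvatarPose {
        var next = avatar
        if avatar.tongueVisible {
            let damping = 0.4      // assumed attenuation factor
            let maxClosure = 0.7   // assumed cap short of fully closed
            next.mouthClosure = min(face.mouthClosure * damping, maxClosure)
            next.lowerLipExtension = min(face.lowerLipExtension * damping, maxClosure)
        } else {
            next.mouthClosure = face.mouthClosure
            next.lowerLipExtension = face.lowerLipExtension
        }
        return next
    }

    // Example: with the tongue shown, a fully closed user mouth only partially
    // closes the avatar mouth.
    let avatarPose = AvatarPose(mouthClosure: 0, lowerLipExtension: 0, tongueVisible: true)
    let facePose = FacePose(mouthClosure: 1.0, lowerLipExtension: 0.2)
    print(updateAvatar(avatarPose, from: facePose).mouthClosure)   // 0.4, not 1.0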
According to some embodiments, displaying the avatar tongue (e.g., 1322-3) includes one or more of the following steps. The position of a third facial feature (e.g., 1325-1) (e.g., the user's mouth) is detected. In accordance with a determination that the third facial feature has a first position (e.g., a substantially closed position), the electronic device (e.g., 600) displays the avatar tongue having a first shape (e.g., as shown in FIGS. 13M and 13N) (e.g., a cone or "strawberry" shape). In accordance with a determination that the third facial feature has a second position (e.g., a substantially open position) that is different from the first position, the electronic device displays the avatar tongue having a second shape (e.g., a flat shape as shown in FIG. 13E) that is different from the first shape. In some embodiments, the avatar tongue extends further outward when the avatar tongue has the second shape than when the avatar tongue has the first shape. The display of the avatar tongue provides feedback to the user indicating different tongue shapes depending on whether the avatar's mouth is open or closed. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
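The shape selection just described can likewise be reduced, under the same assumed 0-to-1 mouth-closure scale, to a simple mapping from the tracked mouth position to one of two tongue shapes; the 0.6 threshold and the shape names are illustrative, not values taken from the disclosure.

    // Hypothetical tongue-shape selection based on how closed the tracked mouth is.
    enum TongueShape {
        case cone   // chosen when the user's mouth is substantially closed
        case flat   // chosen when the mouth is substantially open; extends further outward
    }

    // Picks a tongue shape from a 0 (open) ... 1 (closed) mouth-closure value.
    func tongueShape(forMouthClosure closure: Double) -> TongueShape {
        return closure > 0.6 ? .cone : .flat   // 0.6 is an assumed threshold
    }

    // Example: a mostly closed mouth yields the cone shape.
    print(tongueShape(forMouthClosure: 0.9))   // cone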
It is noted that details of the processes described above with reference to method 1400 (e.g., FIG. 14) also apply in a similar manner to the methods described above. For example, method 700 optionally includes one or more features of the various methods described above with reference to method 1400. The method 700 of editing an avatar may be incorporated into a method for navigating an avatar user interface. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented in accordance with method 900 described above with reference to FIG. 9. As further examples, methods 1000, 1100, and 1200 optionally include one or more features of the various methods described above with reference to method 1400. For example, in some embodiments, the navigation user interface invokes a process for creating or editing a customizable avatar, which may be implemented in accordance with the methods described above with reference to FIGS. 10-12. For the sake of brevity, these details are not repeated.
The foregoing description, for purposes of explanation, has been provided with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the technology and its practical applications, thereby enabling others skilled in the art to best utilize the technology and the various embodiments with various modifications as are suited to the particular use contemplated.
Although the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such changes and modifications are to be considered as included within the scope of the disclosure and examples as defined by the following claims.
As described above, one aspect of the present technology is to collect and use data from various sources for sharing with other users. The present disclosure contemplates that, in some instances, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, phone numbers, email addresses, Twitter account numbers, home addresses, data or records related to a user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be useful to benefit the user. For example, personal information data may be used to better represent users in a conversation. In addition, the present disclosure also contemplates other uses for which personal information data is beneficial to a user. For example, health and fitness data may be used to provide insight into the overall health condition of a user, or may be used as positive feedback for individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transferring, storing, or otherwise using such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently adhere to privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible to users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and should not be shared or sold outside of those legitimate uses. Further, such collection/sharing should occur only after receiving the informed consent of the user. Additionally, such entities should consider taking any steps needed to safeguard and secure access to such personal information data and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. In addition, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. Moreover, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA), whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Regardless of the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of sending avatars, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed, and then reminded again just before personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at the city level rather than at the address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need to access such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.

Claims (27)

1. An electronic device, comprising:
a display device;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
displaying, via the display device, an avatar editing user interface, including displaying:
an avatar having a plurality of avatar characteristics, the plurality of avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, the set of one or more colors based on and different from the first set of one or more colors; and
a plurality of color options corresponding to the first avatar characteristic;
detecting selection of a respective color option of the plurality of color options; and
in response to detecting selection of the respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to a second set of one or more colors, updating an appearance of the avatar, including:
changing the first avatar characteristic to the second set of one or more colors; and
changing the second avatar characteristic to a set of one or more colors, the set of one or more colors based on and different from the second set of one or more colors.
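Purely as an illustration of the color dependency recited in claim 1, the sketch below shows one way the second avatar characteristic's color could be derived from, yet remain different from, a newly selected color for the first avatar characteristic. The HSB color model, the hair/eyebrow pairing (consistent with claim 6), and the fixed brightness offset are assumptions, not part of the claim.

    // Hypothetical HSB color, with each component on a 0...1 scale.
    struct HSBColor {
        var hue: Double
        var saturation: Double
        var brightness: Double
    }

    // Assumed pairing: hair as the "first avatar characteristic", eyebrows as the
    // dependent "second avatar characteristic".
    struct AvatarColors {
        var hairColor: HSBColor
        var eyebrowColor: HSBColor
    }

    // Derives the dependent color: based on the source color but not identical to it.
    func derivedColor(from source: HSBColor) -> HSBColor {
        return HSBColor(hue: source.hue,
                        saturation: source.saturation,
                        brightness: max(0, source.brightness - 0.25))   // assumed offset
    }

    // Applying a selected color option updates both characteristics together.
    func apply(selected option: HSBColor, to colors: inout AvatarColors) {
        colors.hairColor = option
        colors.eyebrowColor = derivedColor(from: option)
    }

    var colors = AvatarColors(hairColor: HSBColor(hue: 0.1, saturation: 0.8, brightness: 0.9),
                              eyebrowColor: HSBColor(hue: 0.1, saturation: 0.8, brightness: 0.65))
    apply(selected: HSBColor(hue: 0.0, saturation: 0.9, brightness: 0.5), to: &colors)
    // colors.eyebrowColor is now based on, but darker than, the newly selected hair color.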
2. The electronic device of claim 1, the one or more programs further comprising instructions for:
in response to detecting selection of the respective color option of the plurality of color options for the first avatar feature, in accordance with a determination that the respective color option corresponds to a third set of one or more colors, changing the first and second avatar features in a manner different than when the respective color option corresponds to the second set of one or more colors.
3. The electronic device of claim 1, the one or more programs further comprising instructions for:
displaying, via the display device, a second plurality of color options corresponding to a third avatar characteristic;
detecting selection of a first color option of the second plurality of color options; and
in response to detecting selection of the first color option of the second plurality of color options for the third avatar characteristic, in accordance with a determination that the first color option corresponds to a fourth set of one or more colors, updating the appearance of the avatar, including:
changing the third avatar characteristic to the fourth set of one or more colors; and
changing the second avatar characteristic to a set of one or more colors, the set of one or more colors based on and different from the fourth set of one or more colors.
4. The electronic device of claim 3, the one or more programs further comprising instructions for:
detecting selection of a second color option of the second plurality of color options; and
in response to detecting selection of the second color option of the second plurality of color options for the third avatar characteristic, in accordance with a determination that the first color option corresponds to a fifth set of one or more colors, changing the third avatar characteristic and the second avatar characteristic in a manner different than when the first color option is selected.
5. The electronic device of claim 4, wherein:
the third and second avatar characteristics change in a first manner, including adjusting a first color attribute based on the second set of one or more colors corresponding to the first avatar characteristic; and
the third avatar characteristic and the second avatar characteristic change in a second manner, including adjusting a second color attribute different from the first color attribute based on the fourth set of one or more colors corresponding to the third avatar characteristic.
6. The electronic device of claim 3, wherein:
the first avatar characteristic corresponds to an avatar hair color;
the second avatar feature corresponds to an avatar eyebrow; and
the third avatar characteristic corresponds to an avatar skin color.
7. The electronic device of claim 1, the one or more programs further comprising instructions for:
in response to detecting selection of the respective color option of the plurality of color options, displaying a color adjustment control for the respective color option corresponding to the second set of one or more colors.
8. The electronic device of claim 7, wherein the second avatar feature corresponds to an avatar lip having an avatar lip color that corresponds to the set of one or more colors that are based on and different from the second set of one or more colors, the one or more programs further comprising instructions for:
detecting an input corresponding to the color adjustment control; and
in response to detecting the input:
modifying an avatar lip color of a first portion of the avatar lip; and
maintaining an avatar lip color of a second portion of the avatar lip.
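As a hedged illustration of the behavior recited in claim 8, the following sketch applies a color adjustment control's value to only one portion of the avatar lips while the other portion keeps its color; the two-portion model and the hue scale are assumptions.

    // Hypothetical two-portion lip color model (hues on a 0...1 scale).
    struct LipColors {
        var outerHue: Double   // first portion: adjustable via the control
        var innerHue: Double   // second portion: maintained
    }

    // Applies the control's value to the first portion only; the second portion
    // is intentionally left unchanged.
    func applyAdjustment(_ controlValue: Double, to lips: inout LipColors) {
        lips.outerHue = controlValue
    }

    var lips = LipColors(outerHue: 0.02, innerHue: 0.98)
    applyAdjustment(0.05, to: &lips)
    // lips.outerHue == 0.05 (modified); lips.innerHue == 0.98 (maintained)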
9. The electronic device of claim 1, wherein:
the first avatar characteristic corresponds to an avatar skin color;
the second avatar feature corresponds to an avatar lip; and
the set of one or more colors based on the second set of one or more colors includes the second set of one or more colors and a red value.
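The relationship recited in claim 9, in which the derived lip color includes the selected skin color together with a red value, can be sketched as a blend of the skin color toward red; the RGB model and the blend weight are assumptions.

    // Hypothetical RGB color (components on a 0...1 scale).
    struct RGBColor {
        var red: Double
        var green: Double
        var blue: Double
    }

    // Derives a lip color from a skin color by mixing toward red, so the result is
    // based on, but different from, the skin color.
    func lipColor(forSkin skin: RGBColor, redAmount: Double = 0.3) -> RGBColor {
        return RGBColor(red: min(1, skin.red + redAmount * (1 - skin.red)),
                        green: skin.green * (1 - redAmount),
                        blue: skin.blue * (1 - redAmount))
    }

    let skin = RGBColor(red: 0.8, green: 0.6, blue: 0.5)
    let derivedLip = lipColor(forSkin: skin)
    // derivedLip is approximately (0.86, 0.42, 0.35): the skin color shifted toward red.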
10. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs including instructions for:
displaying, via the display device, an avatar editing user interface, including displaying:
an avatar having a plurality of avatar characteristics, the plurality of avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, the set of one or more colors based on and different from the first set of one or more colors; and
a plurality of color options corresponding to the first avatar characteristic;
detecting selection of a respective color option of the plurality of color options; and
in response to detecting selection of the respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to a second set of one or more colors, updating an appearance of the avatar, including:
changing the first avatar characteristic to the second set of one or more colors; and
changing the second avatar characteristic to a set of one or more colors, the set of one or more colors based on and different from the second set of one or more colors.
11. A method, comprising:
at an electronic device having a display device:
displaying, via the display device, an avatar editing user interface, including displaying:
an avatar having a plurality of avatar characteristics, the plurality of avatar characteristics including a first avatar characteristic having a first set of one or more colors and a second avatar characteristic having a set of one or more colors, the set of one or more colors based on and different from the first set of one or more colors; and
a plurality of color options corresponding to the first avatar characteristic;
detecting selection of a respective color option of the plurality of color options; and
in response to detecting selection of the respective color option of the plurality of color options for the first avatar characteristic, in accordance with a determination that the respective color option corresponds to a second set of one or more colors, updating an appearance of the avatar, including:
changing the first avatar characteristic to the second set of one or more colors; and
changing the second avatar characteristic to a set of one or more colors, the set of one or more colors based on and different from the second set of one or more colors.
12. The non-transitory computer readable storage medium of claim 10, the one or more programs further comprising instructions for:
in response to detecting selection of the respective color option of the plurality of color options for the first avatar feature, in accordance with a determination that the respective color option corresponds to a third set of one or more colors, changing the first and second avatar features in a manner different than when the respective color option corresponds to the second set of one or more colors.
13. The non-transitory computer readable storage medium of claim 10, the one or more programs further comprising instructions for:
displaying, via the display device, a second plurality of color options corresponding to a third avatar characteristic;
detecting selection of a first color option of the second plurality of color options; and
in response to detecting selection of the first color option of the second plurality of color options for the third avatar characteristic, in accordance with a determination that the first color option corresponds to a fourth set of one or more colors, updating the appearance of the avatar, including:
changing the third avatar characteristic to the fourth set of one or more colors; and
changing the second avatar characteristic to a set of one or more colors, the set of one or more colors based on and different from the fourth set of one or more colors.
14. The non-transitory computer readable storage medium of claim 13, the one or more programs further comprising instructions for:
detecting selection of a second color option of the second plurality of color options; and
in response to detecting selection of the second color option of the second plurality of color options for the third avatar characteristic, in accordance with a determination that the first color option corresponds to a fifth set of one or more colors, changing the third avatar characteristic and the second avatar characteristic in a manner different than when the first color option is selected.
15. The non-transitory computer-readable storage medium of claim 14, wherein:
the third and second avatar characteristics change in a first manner, including adjusting a first color attribute based on the second set of one or more colors corresponding to the first avatar characteristic; and
the third avatar characteristic and the second avatar characteristic change in a second manner, including adjusting a second color attribute different from the first color attribute based on the fourth set of one or more colors corresponding to the third avatar characteristic.
16. The non-transitory computer-readable storage medium of claim 13, wherein:
the first avatar characteristic corresponds to an avatar hair color;
the second avatar feature corresponds to an avatar eyebrow; and
the third avatar characteristic corresponds to an avatar skin color.
17. The non-transitory computer readable storage medium of claim 10, the one or more programs further comprising instructions for:
in response to detecting selection of the respective color option of the plurality of color options, displaying a color adjustment control for the respective color option corresponding to the second set of one or more colors.
18. The non-transitory computer readable storage medium of claim 17, wherein the second avatar feature corresponds to an avatar lip having an avatar lip color corresponding to the set of one or more colors that are based on and different from the second set of one or more colors, the one or more programs further comprising instructions for:
detecting an input corresponding to the color adjustment control; and
in response to detecting the input:
modifying an avatar lip color of a first portion of the avatar lip; and
maintaining an avatar lip color of a second portion of the avatar lip.
19. The non-transitory computer-readable storage medium of claim 10, wherein:
the first avatar characteristic corresponds to an avatar skin color;
the second avatar feature corresponds to an avatar lip; and
the set of one or more colors based on the second set of one or more colors includes the second set of one or more colors and a red value.
20. The method of claim 11, further comprising:
in response to detecting selection of the respective color option of the plurality of color options for the first avatar feature, in accordance with a determination that the respective color option corresponds to a third set of one or more colors, changing the first and second avatar features in a manner different than when the respective color option corresponds to the second set of one or more colors.
21. The method of claim 11, further comprising:
displaying, via the display device, a second plurality of color options corresponding to a third avatar characteristic;
detecting selection of a first color option of the second plurality of color options; and
in response to detecting selection of the first color option of the second plurality of color options for the third avatar characteristic, in accordance with a determination that the first color option corresponds to a fourth set of one or more colors, updating the appearance of the avatar, including:
changing the third avatar characteristic to the fourth set of one or more colors; and
changing the second avatar characteristic to a set of one or more colors, the set of one or more colors based on and different from the fourth set of one or more colors.
22. The method of claim 21, further comprising:
detecting selection of a second color option of the second plurality of color options; and
in response to detecting selection of the second color option of the second plurality of color options for the third avatar characteristic, in accordance with a determination that the first color option corresponds to a fifth set of one or more colors, changing the third avatar characteristic and the second avatar characteristic in a manner different than when the first color option is selected.
23. The method of claim 22, wherein:
the third and second avatar characteristics change in a first manner, including adjusting a first color attribute based on the second set of one or more colors corresponding to the first avatar characteristic; and
the third avatar characteristic and the second avatar characteristic change in a second manner, including adjusting a second color attribute different from the first color attribute based on the fourth set of one or more colors corresponding to the third avatar characteristic.
24. The method of claim 21, wherein:
the first avatar characteristic corresponds to an avatar hair color;
the second avatar feature corresponds to an avatar eyebrow; and
the third avatar characteristic corresponds to an avatar skin color.
25. The method of claim 11, further comprising:
in response to detecting selection of the respective color option of the plurality of color options, displaying a color adjustment control for the respective color option corresponding to the second set of one or more colors.
26. The method of claim 25, wherein the second avatar characteristic corresponds to an avatar lip having an avatar lip color corresponding to the set of one or more colors based on and different from the second set of one or more colors, the method further comprising:
detecting an input corresponding to the color adjustment control; and
in response to detecting the input:
modifying an avatar lip color of a first portion of the avatar lip; and
maintaining an avatar lip color of a second portion of the avatar lip.
27. The method of claim 11, wherein:
the first avatar characteristic corresponds to an avatar skin color;
the second avatar feature corresponds to an avatar lip; and
the set of one or more colors based on the second set of one or more colors includes the second set of one or more colors and a red value.
CN202110820692.4A 2018-05-07 2018-09-28 Avatar creation user interface Active CN113535306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110820692.4A CN113535306B (en) 2018-05-07 2018-09-28 Avatar creation user interface

Applications Claiming Priority (16)

Application Number Priority Date Filing Date Title
US201862668200P 2018-05-07 2018-05-07
US62/668,200 2018-05-07
US201862679950P 2018-06-03 2018-06-03
US62/679,950 2018-06-03
DKPA201870377A DK179874B1 (en) 2018-05-07 2018-06-12 USER INTERFACE FOR AVATAR CREATION
DKPA201870372A DK180212B1 (en) 2018-05-07 2018-06-12 USER INTERFACE FOR CREATING AVATAR
DKPA201870377 2018-06-12
DKPA201870374 2018-06-12
DKPA201870375A DK180078B1 (en) 2018-05-07 2018-06-12 USER INTERFACE FOR AVATAR CREATION
DKPA201870372 2018-06-12
DKPA201870374A DK201870374A1 (en) 2018-05-07 2018-06-12 Avatar creation user interface
DKPA201870375 2018-06-12
US16/116,221 2018-08-29
US16/116,221 US10580221B2 (en) 2018-05-07 2018-08-29 Avatar creation user interface
CN202110820692.4A CN113535306B (en) 2018-05-07 2018-09-28 Avatar creation user interface
CN201811142889.1A CN110457092A (en) 2018-05-07 2018-09-28 Head portrait creates user interface

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811142889.1A Division CN110457092A (en) 2018-05-07 2018-09-28 Head portrait creates user interface

Publications (2)

Publication Number Publication Date
CN113535306A true CN113535306A (en) 2021-10-22
CN113535306B CN113535306B (en) 2023-04-07

Family

ID=66286280

Family Applications (6)

Application Number Title Priority Date Filing Date
CN201811142889.1A Pending CN110457092A (en) 2018-05-07 2018-09-28 Head portrait creates user interface
CN201910691872.XA Pending CN110456965A (en) 2018-05-07 2018-09-28 Head portrait creates user interface
CN202010330318.1A Pending CN111488193A (en) 2018-05-07 2018-09-28 Avatar creation user interface
CN201910691865.XA Pending CN110457103A (en) 2018-05-07 2018-09-28 Head portrait creates user interface
CN202310255451.9A Pending CN116309023A (en) 2018-05-07 2018-09-28 Head portrait creation user interface
CN202110820692.4A Active CN113535306B (en) 2018-05-07 2018-09-28 Avatar creation user interface

Family Applications Before (5)

Application Number Title Priority Date Filing Date
CN201811142889.1A Pending CN110457092A (en) 2018-05-07 2018-09-28 Head portrait creates user interface
CN201910691872.XA Pending CN110456965A (en) 2018-05-07 2018-09-28 Head portrait creates user interface
CN202010330318.1A Pending CN111488193A (en) 2018-05-07 2018-09-28 Avatar creation user interface
CN201910691865.XA Pending CN110457103A (en) 2018-05-07 2018-09-28 Head portrait creates user interface
CN202310255451.9A Pending CN116309023A (en) 2018-05-07 2018-09-28 Head portrait creation user interface

Country Status (3)

Country Link
JP (4) JP6735325B2 (en)
CN (6) CN110457092A (en)
WO (1) WO2019216999A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI439960B (en) 2010-04-07 2014-06-01 Apple Inc Avatar editing environment
KR102585858B1 (en) 2017-05-16 2023-10-11 애플 인크. Emoji recording and sending
DK179867B1 (en) 2017-05-16 2019-08-06 Apple Inc. RECORDING AND SENDING EMOJI
DK201870374A1 (en) 2018-05-07 2019-12-04 Apple Inc. Avatar creation user interface
US10375313B1 (en) 2018-05-07 2019-08-06 Apple Inc. Creative camera
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
US11107261B2 (en) 2019-01-18 2021-08-31 Apple Inc. Virtual avatar animation based on facial feature movement
CN110882539B (en) 2019-11-22 2022-06-07 腾讯科技(深圳)有限公司 Animation display method and device, storage medium and electronic device
US11921998B2 (en) 2020-05-11 2024-03-05 Apple Inc. Editing features of an avatar
DK202070625A1 (en) 2020-05-11 2022-01-04 Apple Inc User interfaces related to time
US11682002B2 (en) 2020-06-05 2023-06-20 Marketspringpad Ip Usa Llc. Methods and systems for interactive data management
US11604562B2 (en) * 2020-06-10 2023-03-14 Snap Inc. Interface carousel for use with image processing software development kit
CN115309302A (en) * 2021-05-06 2022-11-08 阿里巴巴新加坡控股有限公司 Icon display method, device, program product and storage medium
US11714536B2 (en) 2021-05-21 2023-08-01 Apple Inc. Avatar sticker editor user interfaces
US11776190B2 (en) 2021-06-04 2023-10-03 Apple Inc. Techniques for managing an avatar on a lock screen
JP7348943B2 (en) * 2021-12-22 2023-09-21 凸版印刷株式会社 Content management system, content management method, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102035990A (en) * 2009-09-30 2011-04-27 株式会社日立制作所 Method of color customization of content screen
CN105100462A (en) * 2015-07-10 2015-11-25 广州市久邦数码科技有限公司 Short message system having custom theme edition function
US20160275724A1 (en) * 2011-02-17 2016-09-22 Metail Limited Computer implemented methods and systems for generating virtual body models for garment fit visualisation

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3859005A (en) 1973-08-13 1975-01-07 Albert L Huebner Erosion reduction in wet turbines
US4826405A (en) 1985-10-15 1989-05-02 Aeroquip Corporation Fan blade fabrication system
EP1717682B1 (en) 1998-01-26 2017-08-16 Apple Inc. Method and apparatus for integrating manual input
KR20010056965A (en) * 1999-12-17 2001-07-04 박희완 Method for creating human characters by partial image synthesis
US7688306B2 (en) 2000-10-02 2010-03-30 Apple Inc. Methods and apparatuses for operating a portable device based on an accelerometer
US7218226B2 (en) 2004-03-01 2007-05-15 Apple Inc. Acceleration-based theft detection system for portable electronic devices
US6677932B1 (en) 2001-01-28 2004-01-13 Finger Works, Inc. System and method for recognizing touch typing under limited tactile feedback conditions
US6570557B1 (en) 2001-02-10 2003-05-27 Finger Works, Inc. Multi-touch system and method for emulating modifier keys via fingertip chords
JP2006520053A (en) * 2003-03-03 2006-08-31 アメリカ オンライン インコーポレイテッド How to use an avatar to communicate
KR20070007799A (en) * 2004-02-12 2007-01-16 비숀 알리반디 System and method for producing merchandise from a virtual environment
US7657849B2 (en) 2005-12-23 2010-02-02 Apple Inc. Unlocking a device by performing gestures on an unlock image
KR20090002176A (en) * 2007-06-20 2009-01-09 엔에이치엔(주) System for providing ranking of game-avatar in network and method thereof
US9513704B2 (en) * 2008-03-12 2016-12-06 Immersion Corporation Haptically enabled user interface
JP5383668B2 (en) * 2008-04-30 2014-01-08 株式会社アクロディア Character display data generating apparatus and method
CN105327509B (en) * 2008-06-02 2019-04-19 耐克创新有限合伙公司 The system and method for creating incarnation
JP5256001B2 (en) * 2008-11-20 2013-08-07 京セラドキュメントソリューションズ株式会社 Color adjustment apparatus, method and program
CN101692681A (en) * 2009-09-17 2010-04-07 杭州聚贝软件科技有限公司 Method and system for realizing virtual image interactive interface on phone set terminal
US9542038B2 (en) * 2010-04-07 2017-01-10 Apple Inc. Personalizing colors of user interfaces
KR20120013727A (en) * 2010-08-06 2012-02-15 삼성전자주식회사 Display apparatus and control method thereof
US8558844B2 (en) * 2010-09-28 2013-10-15 Apple Inc. Systems, methods, and computer-readable media for changing colors of displayed assets
CN102142149A (en) * 2011-01-26 2011-08-03 深圳市同洲电子股份有限公司 Method and device for obtaining contact image
CN102298797A (en) * 2011-08-31 2011-12-28 深圳市美丽同盟科技有限公司 Three-dimensional virtual fitting method, device and system
WO2013169849A2 (en) 2012-05-09 2013-11-14 Industries Llc Yknots Device, method, and graphical user interface for displaying user interface objects corresponding to an application
US20140078144A1 (en) * 2012-09-14 2014-03-20 Squee, Inc. Systems and methods for avatar creation
KR101958517B1 (en) 2012-12-29 2019-03-14 애플 인크. Device, method, and graphical user interface for transitioning between touch input to display output relationships
JP5603452B1 (en) * 2013-04-11 2014-10-08 株式会社スクウェア・エニックス Video game processing apparatus and video game processing program
CN104753762B (en) * 2013-12-31 2018-07-27 北京发现角科技有限公司 The method and system that ornament is added into avatar icon applied to instant messaging
KR102367550B1 (en) * 2014-09-02 2022-02-28 삼성전자 주식회사 Controlling a camera module based on physiological signals
CN104376160A (en) * 2014-11-07 2015-02-25 薛景 Real person simulation individuality ornament matching system
CN114527881B (en) * 2015-04-07 2023-09-26 英特尔公司 avatar keyboard
US20180047200A1 (en) * 2016-08-11 2018-02-15 Jibjab Media Inc. Combining user images and computer-generated illustrations to produce personalized animated digital avatars
CN110109592B (en) * 2016-09-23 2022-09-23 苹果公司 Avatar creation and editing

Also Published As

Publication number Publication date
WO2019216999A1 (en) 2019-11-14
CN111488193A (en) 2020-08-04
JP6735325B2 (en) 2020-08-05
JP2019207670A (en) 2019-12-05
CN113535306B (en) 2023-04-07
JP2020187775A (en) 2020-11-19
JP2023085356A (en) 2023-06-20
CN116309023A (en) 2023-06-23
JP2022008470A (en) 2022-01-13
CN110456965A (en) 2019-11-15
JP6991283B2 (en) 2022-01-12
CN110457103A (en) 2019-11-15
JP7249392B2 (en) 2023-03-30
CN110457092A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN113535306B (en) Avatar creation user interface
AU2021202254B2 (en) Avatar navigation, library, editing and creation user interface
CN110046020B (en) Electronic device, computer-readable storage medium, and method executed at electronic device
AU2024201007A1 (en) Avatar navigation, library, editing and creation user interface
AU2020101715B4 (en) Avatar creation user interface
EP3567457B1 (en) Avatar creation user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant