US20140055554A1 - System and method for communication using interactive avatar - Google Patents
System and method for communication using interactive avatar
- Publication number
- US20140055554A1 (U.S. application Ser. No. 13/996,230)
- Authority
- US
- United States
- Prior art keywords
- avatar
- remote
- user
- parameters
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
-
- G06K9/00248—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8146—Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
Abstract
A video communication system replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, including eye movement and eyelid movement of a user indicative of direction of user gaze and blinking, respectively, converting the facial characteristics to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
Description
- The present application claims the benefit of PCT Patent Application Serial No. PCT/CN2011/084902, filed Dec. 29, 2011, the entire disclosure of which is incorporated herein by reference.
- The present disclosure relates to video communication and interaction, and, more particularly, to a system and method for communication using interactive avatars.
- The increasing variety of functionality available in mobile devices has spawned a desire for users to communicate via video in addition to simple calls. For example, users may initiate "video calls," "videoconferencing," etc., wherein a camera and microphone in a device transmit audio and real-time video of a user to one or more other recipients such as other mobile devices, desktop computers, videoconferencing systems, etc. The communication of real-time video may involve the transmission of substantial amounts of data (e.g., depending on the technology of the camera, the particular video codec employed to process the real-time image information, etc.). Given the bandwidth limitations of existing 2G/3G wireless technology, and the still limited availability of emerging 4G wireless technology, the proposition of many device users conducting concurrent video calls places a large burden on bandwidth in the existing wireless communication infrastructure, which may negatively impact the quality of the video call.
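The bandwidth concern can be made concrete with rough, illustrative arithmetic (the figures below are assumptions for the sake of the sketch, not values from the disclosure): a compact set of avatar animation parameters is orders of magnitude smaller than even a single uncompressed video frame.

```python
# Back-of-envelope comparison (illustrative numbers only): a small set of
# hypothetical avatar animation parameters versus one uncompressed
# 640x480 24-bit RGB video frame.
import json

avatar_params = {"head_yaw": 0.12, "head_pitch": -0.05, "gaze_x": 0.3,
                 "gaze_y": -0.1, "blink": 0, "mouth_open": 0.4}
param_bytes = len(json.dumps(avatar_params).encode("utf-8"))

frame_bytes = 640 * 480 * 3  # one uncompressed VGA frame

print(param_bytes < 200)                  # True: well under a kilobyte
print(frame_bytes // param_bytes > 1000)  # True: thousands of times smaller
```

Even against a compressed video stream, per-frame parameter payloads of this size remain far smaller, which is the premise behind transmitting avatar parameters instead of live images.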
- Features and advantages of various embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals designate like parts, and in which:
-
FIG. 1A illustrates an example device-to-device system consistent with various embodiments of the present disclosure; -
FIG. 1B illustrates an example virtual space system consistent with various embodiments of the present disclosure; -
FIG. 2 illustrates an example device consistent with various embodiments of the present disclosure; -
FIG. 3 illustrates an example face detection module consistent with various embodiments of the present disclosure; -
FIG. 4 illustrates an example system implementation in accordance with at least one embodiment of the present disclosure; and -
FIG. 5 is a flowchart of example operations in accordance with at least one embodiment of the present disclosure. - Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.
- By way of overview, the present disclosure is generally directed to a system and method for video communication and interaction using interactive avatars. A system and method consistent with the present disclosure generally provide detection and/or tracking of a user's eyes during active communication, including the detection of characteristics of a user's eyes including, but not limited to, eyeball movement, gaze direction and/or point of focus of the user's eyes, eye blinking, etc. The system and method are further configured to provide avatar animation based at least in part on the detected characteristics of the user's eyes in real-time or near real-time during active communication.
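The disclosure does not fix a particular algorithm for detecting eyelid movement; one common heuristic that could serve as the blink detection described above is the eye aspect ratio (EAR) computed over eye landmarks. The landmark coordinates and threshold below are invented for illustration.

```python
# Illustrative sketch only: the eye-aspect-ratio (EAR) heuristic is one
# common way to detect blinking from eye landmarks; it is not named in
# the disclosure. Landmark coordinates here are hand-made.

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6 as in the
    common 68-point face annotation (p1/p4 at the corners)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    # vertical eyelid gaps over horizontal eye width
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def is_blinking(eye, threshold=0.2):
    # A low EAR means the eyelids are nearly closed.
    return eye_aspect_ratio(eye) < threshold

open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

print(is_blinking(open_eye))    # False
print(is_blinking(closed_eye))  # True
```

Tracking the EAR across frames would additionally distinguish a deliberate blink from momentary detection noise.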
- In one embodiment an application is activated in a device coupled to a camera. The application may be configured to allow a user to select an avatar for display on a remote device, in a virtual space, etc. The device may then be configured to initiate communication with at least one other device, a virtual space, etc. For example, the communication may be established over a 2G, 3G, 4G cellular connection. Alternatively, the communication may be established over the Internet via a WiFi connection. After the communication is established, the camera may be configured to start capturing images. Facial detection is then performed on the captured images, and facial characteristics are determined. The detected face/head movements, including movement of the user's eyes and/or eyelids, and/or changes in facial features are then converted into parameters usable for animating the avatar on the at least one other device, within the virtual space, etc. At least one of the avatar selection or avatar parameters are then transmitted. In one embodiment at least one of a remote avatar selection or remote avatar parameters are received. The remote avatar selection may cause the device to display an avatar, while the remote avatar parameters may cause the device to animate the displayed avatar. Audio communication accompanies the avatar animation via known methods.
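The sequence in this embodiment (select an avatar, capture an image, detect a face, convert characteristics to parameters, transmit) can be sketched as a small orchestration. Every helper below is a hypothetical stand-in, since the disclosure does not prescribe concrete APIs.

```python
# Minimal end-to-end sketch of the flow described above. All helper
# functions are hypothetical stand-ins for illustration only.

def detect_face(image):
    # Stand-in for face detection and characteristic extraction.
    return {"gaze": "left", "eyelids": "open", "mouth_open": 0.3}

def to_avatar_parameters(characteristics):
    # Convert detected characteristics into compact animation parameters.
    return {"param_" + k: v for k, v in characteristics.items()}

def communicate(avatar_id, image):
    characteristics = detect_face(image)
    params = to_avatar_parameters(characteristics)
    # Only the avatar selection and parameters are transmitted, never the image.
    return {"avatar_selection": avatar_id, "avatar_parameters": params}

message = communicate(avatar_id=7, image=b"\x00" * 16)
print(sorted(message))  # ['avatar_parameters', 'avatar_selection']
```

The receiving side would perform the mirror image of this: look up the avatar named by the selection, then apply the parameters to animate it.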
-
FIG. 1A illustrates device-to-device system 100 consistent with various embodiments of the present disclosure. The system 100 may generally include devices 102 and 112 communicating via network 122. Device 102 includes at least camera 104, microphone 106 and display 108. Device 112 includes at least camera 114, microphone 116 and display 118. Network 122 includes at least one server 124. -
Devices 102 and 112 may include various hardware platforms that are capable of wired and/or wireless communication, including, but not limited to, videoconferencing systems, desktop computers, laptop computers, tablet computers, smart phones (e.g., iPhones®, Android®-based phones, Blackberries®, Symbian®-based phones, Palm®-based phones, etc.), cellular handsets, etc. -
Cameras 104 and 114 include any device for capturing digital images representative of an environment that includes one or more persons, and may have adequate resolution for face analysis of the one or more persons as described herein. Cameras 104 and 114 may operate using light in the visible spectrum or other portions of the electromagnetic spectrum (e.g., the infrared spectrum, ultraviolet spectrum, etc.), and may be incorporated within devices 102 and 112, respectively, or may be separate devices configured to communicate with devices 102 and 112 via wired or wireless communication. Examples of cameras 104 and 114 may include wired or wireless (e.g., WiFi, Bluetooth, etc.) web cameras as may be associated with computers, video monitors, etc., mobile device cameras (e.g., cell phone or smart phone cameras integrated in, for example, the previously discussed example devices), integrated laptop computer cameras, integrated tablet computer cameras (e.g., iPad®, Galaxy Tab®, and the like), etc.
Devices 102 and 112 may further include microphones 106 and 116. Microphones 106 and 116 include any devices configured to sense sound, and may be integrated within devices 102 and 112, respectively, or may interact with the devices via wired or wireless communication such as described in the above examples regarding cameras 104 and 114. Displays 108 and 118 include any devices configured to display text, still images, moving images (e.g., video), user interfaces, graphics, etc., and likewise may be integrated within devices 102 and 112, respectively, or may interact with the devices via wired or wireless communication such as described in the above examples regarding cameras 104 and 114. - In one embodiment, displays 108 and 118 may be configured to display avatars 110 and 120, respectively. For example, device 102 may display avatar 110 representing the user of device 112 (e.g., a remote user), and likewise, device 112 may display avatar 120 representing the user of device 102. As such, users may view a representation of other users without having to exchange the large amounts of information generally involved with device-to-device communication employing live images. - Network 122 may include various second generation (2G), third generation (3G), fourth generation (4G) cellular-based data communication technologies, Wi-Fi wireless data communication technology, etc. Network 122 includes at least one server 124 configured to establish and maintain communication connections when using these technologies. For example, server 124 may be configured to support Internet-related communication protocols like Session Initiation Protocol (SIP) for creating, modifying and terminating two-party (unicast) and multi-party (multicast) sessions, Interactive Connectivity Establishment (ICE) for allowing applications to establish connections across network address translators (NATs), Session Traversal Utilities for NAT (STUN) for allowing applications operating through a NAT to discover the presence of other NATs and the IP addresses and ports allocated for an application's User Datagram Protocol (UDP) connections to remote hosts, Traversal Using Relays around NAT (TURN) for allowing elements behind a NAT or firewall to receive data over Transmission Control Protocol (TCP) or UDP connections, etc. -
FIG. 1B illustrates a virtual space system 126 consistent with various embodiments of the present disclosure. The system 126 may include device 102, device 112 and server 124, which may continue to communicate in a manner similar to that illustrated in FIG. 1A, but user interaction may take place in virtual space 128 instead of in a device-to-device format. As referenced herein, a virtual space may be defined as a digital simulation of a physical location. For example, virtual space 128 may resemble an outdoor location like a city, road, sidewalk, field, forest, island, etc., or an inside location like an office, house, school, mall, store, etc. - Users, represented by avatars, may appear to interact in virtual space 128 as in the real world. Virtual space 128 may exist on one or more servers coupled to the Internet, and may be maintained by a third party. Examples of virtual spaces include virtual offices, virtual meeting rooms, virtual worlds like Second Life®, massively multiplayer online role-playing games (MMORPGs) like World of Warcraft®, massively multiplayer online real-life games (MMORLGs) like The Sims Online®, etc. In system 126, virtual space 128 may contain a plurality of avatars corresponding to different users. Instead of displaying avatars, displays 108 and 118 may display encapsulated (e.g., smaller) versions of virtual space (VS) 128. For example, display 108 may display a perspective view of what the avatar corresponding to the user of device 102 "sees" in virtual space 128. Similarly, display 118 may display a perspective view of what the avatar corresponding to the user of device 112 "sees" in virtual space 128. Examples of what avatars might see in virtual space 128 may include, but are not limited to, virtual structures (e.g., buildings), virtual vehicles, virtual objects, virtual animals, other avatars, etc. -
FIG. 2 illustrates an example device 102 in accordance with various embodiments of the present disclosure. While only device 102 is described, device 112 (e.g., remote device) may include resources configured to provide the same or similar functions. As previously discussed, device 102 is shown including camera 104, microphone 106 and display 108. The camera 104 and microphone 106 may provide input to a camera and audio framework module 200. The camera and audio framework module 200 may include custom, proprietary, known and/or after-developed audio and video processing code (or instruction sets) that are generally well-defined and operable to control at least camera 104 and microphone 106. For example, the camera and audio framework module 200 may cause camera 104 and microphone 106 to record images and/or sounds, may process images and/or sounds, may cause images and/or sounds to be reproduced, etc. The camera and audio framework module 200 may vary depending on device 102, and, more particularly, the operating system (OS) running in device 102. Example operating systems include iOS®, Android®, Blackberry® OS, Symbian®, Palm® OS, etc. A speaker 202 may receive audio information from the camera and audio framework module 200 and may be configured to reproduce local sounds (e.g., to provide audio feedback of the user's voice) and remote sounds (e.g., the sounds of the other parties engaged in a telephone, video call or interaction in a virtual place). - The device 102 may further include a face detection module 204 configured to identify and track a head, face and/or facial region within image(s) provided by camera 104 and to determine one or more facial characteristics of the user (i.e., facial characteristics 206). For example, the face detection module 204 may include custom, proprietary, known and/or after-developed face detection code (or instruction sets), hardware, and/or firmware that are generally well-defined and operable to receive a standard format image (e.g., but not limited to, an RGB color image) and identify, at least to a certain extent, a face in the image. - The
face detection module 204 may also be configured to track the detected face through a series of images (e.g., video frames at 24 frames per second) and to determine a head position based on the detected face. Known tracking systems that may be employed by face detection module 204 may include particle filtering, mean shift, Kalman filtering, etc., each of which may utilize edge analysis, sum-of-square-difference analysis, feature point analysis, histogram analysis, skin tone analysis, etc. - The
face detection module 204 may also include custom, proprietary, known and/or after-developed facial characteristics code (or instruction sets) that are generally well-defined and operable to receive a standard format image (e.g., but not limited to, an RGB color image) and identify, at least to a certain extent, one or more facial characteristics in the image. Such known facial characteristics systems include, but are not limited to, the CSU Face Identification Evaluation System by Colorado State University and the standard Viola-Jones boosting cascade framework, which may be found in the public Open Source Computer Vision (OpenCV™) package. - As discussed in greater detail herein,
facial characteristics 206 may include features of the face, including, but not limited to, the location and/or shape of facial landmarks such as eyes, eyebrows, nose, mouth, etc., as well as movement of the eyes and/or eyelids. In one embodiment, avatar animation may be based on sensed facial actions (e.g., changes in facial characteristics 206). The corresponding feature points on the avatar's face may follow or mimic the movements of the real person's face, which is known as "expression clone" or "performance-driven facial animation." - The
face detection module 204 may also be configured to recognize an expression associated with the detected features (e.g., identifying whether a previously detected face is happy, sad, smiling, frowning, surprised, excited, etc.). Thus, the face detection module 204 may further include custom, proprietary, known and/or after-developed facial expression detection and/or identification code (or instruction sets) that is generally well-defined and operable to detect and/or identify expressions in a face. For example, the face detection module 204 may determine the size and/or position of facial features (e.g., eyes, mouth, cheeks, teeth, etc.) and may compare these facial features to a facial feature database which includes a plurality of sample facial features with corresponding facial feature classifications (e.g., smiling, frowning, excited, sad, etc.). - The
device 102 may further include an avatar selection module 208 configured to allow a user of device 102 to select an avatar for display on a remote device. The avatar selection module 208 may include custom, proprietary, known and/or after-developed user interface construction code (or instruction sets) that are generally well-defined and operable to present different avatars to a user so that the user may select one of the avatars. - In one embodiment one or more avatars may be predefined in
device 102. Predefined avatars allow all devices to have the same avatars, and during interaction only the selection of an avatar (e.g., the identification of a predefined avatar) needs to be communicated to a remote device or virtual space, which reduces the amount of information that needs to be exchanged. Avatars are selected prior to establishing communication, but may also be changed during the course of an active communication. Thus, it may be possible to send or receive an avatar selection at any point during the communication, and for the receiving device to change the displayed avatar in accordance with the received avatar selection. - The
device 102 may further include an avatar control module 210 configured to generate parameters for animating an avatar. Animation, as referred to herein, may be defined as altering the appearance of an image/model. A single animation may alter the appearance of a 2-D still image, or multiple animations may occur in sequence to simulate motion in the image (e.g., head turn, nodding, talking, frowning, smiling, laughing, blinking, winking, etc.). An example of animation for 3-D models includes deforming a 3-D wireframe model, applying a texture mapping, and re-computing the model vertex normals for rendering. A change in position of the detected face and/or facial characteristics 206, including facial features, may be converted into parameters that cause the avatar's features to resemble the features of the user's face. -
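One way the conversion step above could look: treat avatar parameters as landmark displacements from a neutral pose, normalized by face width so that the parameters are independent of image scale. The landmark names, neutral pose and normalization scheme are all assumptions made for this illustration, not taken from the disclosure.

```python
# Illustrative conversion of detected landmark positions into avatar
# animation parameters: displacements from a hypothetical neutral pose,
# normalized by face width. All names and values are invented.

neutral = {"mouth_left": (40, 80), "mouth_right": (60, 80), "brow": (50, 40)}
face_width = 50.0

def to_parameters(landmarks):
    params = {}
    for name, (x, y) in landmarks.items():
        nx, ny = neutral[name]
        # Scale-free displacement of each landmark from its neutral spot.
        params[name] = ((x - nx) / face_width, (y - ny) / face_width)
    return params

detected = {"mouth_left": (38, 75), "mouth_right": (62, 75), "brow": (50, 35)}
params = to_parameters(detected)
print(params["brow"])  # (0.0, -0.1): brow raised by 10% of face width
```

Parameters of this form can be applied to any predefined avatar whose rig exposes the same landmark set, which is consistent with the idea that a single parameter stream may animate different avatars.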
- The
avatar control module 210 may include custom, proprietary, known and/or after-developed graphics processing code (or instruction sets) that are generally well-defined and operable to generate parameters for animating the avatar selected by avatar selection module 208 based on the face/head position and/or facial characteristics 206 detected by face detection module 204. For facial feature-based animation methods, 2-D avatar animation may be done with, for example, image warping or image morphing, whereas 3-D avatar animation may be done with free form deformation (FFD) or by utilizing the animation structure defined in a 3-D model of a head. Oddcast is an example of a software resource usable for 2-D avatar animation, while FaceGen is an example of a software resource usable for 3-D avatar animation. - In addition, in
system 100, the avatar control module 210 may receive a remote avatar selection and remote avatar parameters usable for displaying and animating an avatar corresponding to a user at a remote device. The avatar control module 210 may cause a display module 212 to display an avatar 110 on the display 108. The display module 212 may include custom, proprietary, known and/or after-developed graphics processing code (or instruction sets) that are generally well-defined and operable to display and animate an avatar on display 108 in accordance with the example device-to-device embodiment. - For example, the
avatar control module 210 may receive a remote avatar selection and may interpret the remote avatar selection to correspond to a predetermined avatar. The display module 212 may then display avatar 110 on display 108. Moreover, remote avatar parameters received in avatar control module 210 may be interpreted, and commands may be provided to display module 212 to animate avatar 110. - In one embodiment more than two users may engage in the video call. When more than two users are interacting in a video call, the
display 108 may be divided or segmented to allow more than one avatar corresponding to remote users to be displayed simultaneously. Alternatively, in system 126, the avatar control module 210 may receive information causing the display module 212 to display what the avatar corresponding to the user of device 102 is “seeing” in virtual space 128 (e.g., from the visual perspective of the avatar). For example, the display 108 may display buildings, objects, animals represented in virtual space 128, other avatars, etc. In one embodiment, the avatar control module 210 may be configured to cause the display module 212 to display a “feedback” avatar 214. The feedback avatar 214 represents how the selected avatar appears on the remote device, in a virtual place, etc. In particular, the feedback avatar 214 appears as the avatar selected by the user and may be animated using the same parameters generated by avatar control module 210. In this way the user may confirm what the remote user is seeing during their interaction. - The
device 102 may further include a communication module 216 configured to transmit and receive information for selecting avatars, displaying avatars, animating avatars, displaying virtual place perspective, etc. The communication module 216 may include custom, proprietary, known and/or after-developed communication processing code (or instruction sets) that are generally well-defined and operable to transmit avatar selections and avatar parameters and to receive remote avatar selections and remote avatar parameters. The communication module 216 may also transmit and receive audio information corresponding to avatar-based interactions. The communication module 216 may transmit and receive the above information via network 122 as previously described. - The
device 102 may further include one or more processor(s) 218 configured to perform operations associated with device 102 and one or more of the modules included therein. -
FIG. 3 illustrates an example face detection module 204 a consistent with various embodiments of the present disclosure. The face detection module 204 a may be configured to receive one or more images from the camera 104 via the camera and audio framework module 200 and identify, at least to a certain extent, a face (or optionally multiple faces) in the image. The face detection module 204 a may also be configured to identify and determine, at least to a certain extent, one or more facial characteristics 206 in the image. The facial characteristics 206 may be generated based on one or more of the facial parameters identified by the face detection module 204 a as described herein. The facial characteristics 206 may include features of the face, including, but not limited to, the location and/or shape of facial landmarks such as eyes, eyebrows, nose, mouth, etc., as well as movement of the mouth, eyes and/or eyelids. - In the illustrated embodiment, the
face detection module 204 a may include a face detection/tracking module 300, a face normalization module 302, a landmark detection module 304, a facial pattern module 306, a face posture module 308, a facial expression detection module 310, an eye detection/tracking module 312 and an eye classification module 314. The face detection/tracking module 300 may include custom, proprietary, known and/or after-developed face tracking code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, the size and location of human faces in a still image or video stream received from the camera 104. Such known face detection/tracking systems include, for example, the techniques of Viola and Jones, published as Paul Viola and Michael Jones, Rapid Object Detection using a Boosted Cascade of Simple Features, Accepted Conference on Computer Vision and Pattern Recognition, 2001. These techniques use a cascade of Adaptive Boosting (AdaBoost) classifiers to detect a face by scanning a window exhaustively over an image. The face detection/tracking module 300 may also track a face or facial region across multiple images. - The
face normalization module 302 may include custom, proprietary, known and/or after-developed face normalization code (or instruction sets) that is generally well-defined and operable to normalize the identified face in the image. For example, the face normalization module 302 may be configured to rotate the image to align the eyes (if the coordinates of the eyes are known), crop the image to a smaller size generally corresponding to the size of the face, scale the image to make the distance between the eyes constant, apply a mask that zeros out pixels not in an oval that contains a typical face, histogram equalize the image to smooth the distribution of gray values for the non-masked pixels, and/or normalize the image so the non-masked pixels have mean zero and standard deviation one. - The
landmark detection module 304 may include custom, proprietary, known and/or after-developed landmark detection code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, the various facial features of the face in the image. Implicit in landmark detection is that the face has already been detected, at least to some extent. Optionally, some degree of localization may have been performed (for example, by the face normalization module 302) to identify/focus on the zones/areas of the image where landmarks can potentially be found. For example, the landmark detection module 304 may be based on heuristic analysis and may be configured to identify and/or analyze the relative position, size, and/or shape of the eyes (and/or the corners of the eyes), nose (e.g., the tip of the nose), chin (e.g., the tip of the chin), cheekbones, and jaw. The eye corners and mouth corners may also be detected using a Viola-Jones-based classifier. - The
facial pattern module 306 may include custom, proprietary, known and/or after-developed facial pattern code (or instruction sets) that is generally well-defined and operable to identify and/or generate a facial pattern based on the identified facial landmarks in the image. As may be appreciated, the facial pattern module 306 may be considered a portion of the face detection/tracking module 300. - The
face posture module 308 may include custom, proprietary, known and/or after-developed facial orientation detection code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, the posture of the face in the image. For example, the face posture module 308 may be configured to establish the posture of the face in the image with respect to the display 108 of the device 102. More specifically, the face posture module 308 may be configured to determine whether the user's face is directed toward the display 108 of the device 102, thereby indicating whether the user is observing the content being displayed on the display 108. - The facial expression detection module 310 may include custom, proprietary, known and/or after-developed facial expression detection and/or identification code (or instruction sets) that is generally well-defined and operable to detect and/or identify facial expressions of the user in the image. For example, the facial expression detection module 310 may determine size and/or position of the facial features (e.g., eyes, mouth, cheeks, teeth, etc.) and compare the facial features to a facial feature database which includes a plurality of sample facial features with corresponding facial feature classifications.
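Two of the normalization steps described above for the face normalization module 302 — histogram equalization to smooth the gray-value distribution, followed by rescaling to mean zero and unit standard deviation — can be sketched on a flat list of grayscale pixels. This is a minimal sketch under the assumption of an unmasked, already-cropped face region; rotation, cropping, scaling, and oval masking are omitted.

```python
# Hypothetical sketch of two face-normalization steps: histogram
# equalization and zero-mean / unit-standard-deviation rescaling.
# Pixels are a flat list of 8-bit gray values; masking is omitted.

def equalize(pixels, levels=256):
    """Histogram-equalize pixel values into the range [0, levels - 1]."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:               # cumulative distribution of gray values
        total += count
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    return [round((cdf[p] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
            for p in pixels]

def standardize(pixels):
    """Rescale pixels to mean zero and (population) standard deviation one."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 or 1.0          # guard against a constant patch
    return [(p - mean) / std for p in pixels]

face = [52, 55, 61, 59, 79, 61, 76, 61, 88, 70, 87, 90]  # toy face crop
eq = equalize(face)
z = standardize(eq)
```

After these steps the pixel statistics no longer depend on the lighting or camera gain of the captured image, which is what allows downstream classifiers (e.g., the eye classification module 314) to compare patches from different frames and users.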
- The eye detection/
tracking module 312 may include custom, proprietary, known and/or after-developed eye tracking code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, eye movement and/or eye gaze or focus of the user in the image. Similar to the face posture module 308, the eye detection/tracking module 312 may be configured to establish the direction in which the user's eyes are directed with respect to the display 108 of the device 102. The eye detection/tracking module 312 may be further configured to establish eye blinking of a user. - As shown, the eye detection/
tracking module 312 may include an eye classification module 314 configured to determine whether the user's eyes (individually and/or both) are open or closed and movement of the user's eyes with respect to the display 108. In particular, the eye classification module 314 is configured to receive one or more normalized images (images normalized by the normalization module 302). A normalized image may include, but is not limited to, rotation to align the eyes (if the coordinates of the eyes are known), cropping of the image, particularly cropping of the eyes with reference to the eye-corner position, scaling of the image to make the distance between the eyes constant, histogram equalization of the image to smooth the distribution of gray values for the non-masked pixels, and/or normalization of the image so the non-masked pixels have mean zero and a unit standard deviation. - Upon receipt of one or more normalized images, the
eye classification module 314 may be configured to separately identify eye opening/closing and/or eye movement (e.g., looking left/right, up/down, diagonally, etc.) with respect to the display 108 and, as such, determine a status of the user's eyes in real-time or near real-time during active video communication and/or interaction. The eye classification module 314 may include custom, proprietary, known and/or after-developed eye tracking code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, movement of the eyelids and eyes of the user in the image. In one embodiment, the eye classification module 314 may use statistical-based analysis in order to identify the status of the user's eyes (open/closed, movement, etc.), including, but not limited to, linear discriminant analysis (LDA), artificial neural network (ANN) and/or support vector machine (SVM). During analysis, the eye classification module 314 may further utilize an eye status database, which may include a plurality of sample eye features with corresponding eye feature classifications. - As previously described, avatar animation may be based on sensed facial actions (e.g., changes in
facial characteristics 206 of a user, including eye and/or eyelid movement). The corresponding feature points on an avatar's face may follow or mimic the movements of the real person's face, which is known as “expression clone” or “performance-driven facial animation.” Accordingly, eye opening/closing and eye movement may be animated in the avatar model during active video communication and/or interaction by any known methods. - For example, upon receipt of the avatar selection and avatar parameters from the
device 102, an avatar control module of the remote device 112 may be configured to control (e.g., animate) the avatar based on the facial characteristics 206, including the eye and/or eyelid movement of the user. This may include normalizing and remapping the user's face to the avatar face, copying any changes to the facial characteristics 206 and driving the avatar to perform the same facial characteristic and/or expression changes. For facial feature-based animation methods, 2-D avatar animation may be done with, for example, image warping or image morphing, whereas 3-D avatar animation may be done with free form deformation (FFD) or by utilizing the animation structure defined in a 3-D model of a head. Oddcast is an example of a software resource usable for 2-D avatar animation, while FaceGen is an example of a software resource usable for 3-D avatar generation and animation. -
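The statistical eye-status classification described above (eye classification module 314, with its eye status database of sample features and classifications) can be sketched with a minimal nearest-centroid rule. This is a stand-in for the LDA/ANN/SVM analysis named in the text, not the disclosed classifier, and the two-dimensional features (mean eye-region intensity, vertical eye-opening ratio) and sample values are illustrative assumptions.

```python
# Hypothetical sketch: nearest-centroid classification of eye status
# ("open" vs. "closed") from a toy eye status database. A real system
# would use a trained LDA, ANN, or SVM as described in the text.

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(vec, centroids):
    """Assign the label whose centroid is nearest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(vec, centroids[label]))

# Toy "eye status database": open eyes show a darker pupil region and a
# larger opening ratio than lid-covered (closed) eyes.
database = [([0.30, 0.80], "open"), ([0.35, 0.75], "open"),
            ([0.70, 0.10], "closed"), ([0.65, 0.15], "closed")]
centroids = train_centroids(database)
status = classify([0.32, 0.78], centroids)
```

As the advantages section below notes for statistical methods generally, such a classifier improves simply by enlarging the sample database and re-deriving the statistics, with no per-user calibration.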
FIG. 4 illustrates an example system implementation in accordance with at least one embodiment. Device 102′ is configured to communicate wirelessly via WiFi connection 400 (e.g., at work), server 124′ is configured to negotiate a connection between devices 102′ and 112′ via Internet 402, and device 112′ is configured to communicate wirelessly via another WiFi connection 404 (e.g., at home). In one embodiment, a device-to-device avatar-based video call application is activated in device 102′. Following avatar selection, the application may allow at least one remote device (e.g., device 112′) to be selected. The application may then cause device 102′ to initiate communication with device 112′. Communication may be initiated with device 102′ transmitting a connection establishment request to device 112′ via enterprise access point (AP) 406. The enterprise AP 406 may be an AP usable in a business setting, and thus, may support higher data throughput and more concurrent wireless clients than home AP 414. The enterprise AP 406 may receive the wireless signal from device 102′ and may proceed to transmit the connection establishment request through various business networks via gateway 408. The connection establishment request may then pass through firewall 410, which may be configured to control information flowing into and out of the WiFi network 400. - The connection establishment request of
device 102′ may then be processed by server 124′. The server 124′ may be configured for registration of IP addresses, authentication of destination addresses and NAT traversals so that the connection establishment request may be directed to the correct destination on Internet 402. For example, server 124′ may resolve the intended destination (e.g., remote device 112′) from information in the connection establishment request received from device 102′, and may route the signal through the correct NATs and ports to the destination IP address accordingly. These operations may only have to be performed during connection establishment, depending on the network configuration. - In some instances operations may be repeated during the video call in order to provide notification to the NAT to keep the connection alive. Media and
Signal Path 412 may carry the video (e.g., avatar selection and/or avatar parameters) and audio information directly to home AP 414 after the connection has been established. Device 112′ may then receive the connection establishment request and may be configured to determine whether to accept the request. Determining whether to accept the request may include, for example, presenting a visual narrative to a user of device 112′ inquiring as to whether to accept the connection request from device 102′. Should the user of device 112′ accept the connection (e.g., accept the video call), the connection may be established. Cameras 104′ and 114′ may then be configured to start capturing images of the users of devices 102′ and 112′, respectively, for use in animating the avatars selected by each user. Microphones 106′ and 116′ may then be configured to start recording audio from each user. As information exchange commences between devices 102′ and 112′, displays 108′ and 118′ may display and animate avatars corresponding to the users of devices 102′ and 112′. -
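The connection-establishment exchange of FIG. 4 can be reduced to a toy handshake: the caller sends a request, the callee decides whether to accept, and image/audio capture starts on both sides only once the call is accepted. All class and attribute names here are illustrative assumptions, and real signaling would traverse the AP/gateway/firewall/server path described above.

```python
# Hypothetical sketch of the call-setup handshake from FIG. 4. The
# "visual narrative" shown to the callee is reduced to a stored policy.

class Endpoint:
    def __init__(self, name, accept_calls=True):
        self.name = name
        self.accept_calls = accept_calls
        self.capturing = False           # camera/microphone not yet active

    def request_connection(self, callee):
        """Caller side: returns True when the callee accepts the call."""
        accepted = callee.handle_request(self.name)
        if accepted:
            self.capturing = True        # start capture once established
        return accepted

    def handle_request(self, caller_name):
        """Callee side: accept or reject the incoming request."""
        if self.accept_calls:
            self.capturing = True        # callee also begins capturing
            return True
        return False

caller = Endpoint("device 102'")
callee = Endpoint("device 112'", accept_calls=True)
established = caller.request_connection(callee)
```

Only after `established` is true would the media and signal path begin carrying avatar selections, avatar parameters, and audio between the two devices.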
FIG. 5 is a flowchart of example operations in accordance with at least one embodiment. In operation 502 an application (e.g., an avatar-based video call application) may be activated in a device. Activation of the application may be followed by selection of an avatar. Selection of an avatar may include an interface being presented by the application, the interface allowing the user to select a predefined avatar. After avatar selection, communications may be configured in operation 504. Communication configuration includes the identification of at least one remote device or a virtual space for participation in the video call. For example, a user may select from a list of remote users/devices stored within the application, stored in association with another system in the device (e.g., a contacts list in a smart phone, cell phone, etc.), or stored remotely, such as on the Internet (e.g., in a social media website like Facebook, LinkedIn, Yahoo, Google+, MSN, etc.). Alternatively, the user may select to go online in a virtual space like Second Life. - In
operation 506, communication may be initiated between the device and the at least one remote device or virtual space. For example, a connection establishment request may be transmitted to the remote device or virtual space. For the sake of explanation herein, it is assumed that the connection establishment request is accepted by the remote device or virtual space. A camera in the device may then begin capturing images in operation 508. The images may be still images or live video (e.g., multiple images captured in sequence). In operation 510 image analysis may occur, starting with detection/tracking of a face/head in the image. The detected face may then be analyzed in order to detect facial characteristics (e.g., facial landmarks, facial expression, etc.). In operation 512 the detected face/head position and/or facial characteristics are converted into avatar parameters. Avatar parameters are used to animate the selected avatar on the remote device or in the virtual space. In operation 514 at least one of the avatar selection or the avatar parameters may be transmitted. - Avatars may be displayed and animated in
operation 516. In the instance of device-to-device communication (e.g., system 100), at least one of remote avatar selection or remote avatar parameters may be received from the remote device. An avatar corresponding to the remote user may then be displayed based on the received remote avatar selection, and may be animated based on the received remote avatar parameters. In the instance of virtual place interaction (e.g., system 126), information may be received allowing the device to display what the avatar corresponding to the device user is seeing. A determination may then be made in operation 518 as to whether the current communication is complete. If it is determined in operation 518 that the communication is not complete, operations 508-516 may repeat in order to continue to display and animate an avatar on the remote apparatus based on the analysis of the user's face. Otherwise, in operation 520 the communication may be terminated. The video call application may also be terminated if, for example, no further video calls are to be made. - While
FIG. 5 illustrates various operations according to an embodiment, it is to be understood that not all of the operations depicted in FIG. 5 are necessary for other embodiments. Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIG. 5 and/or other operations described herein may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure. - A system consistent with the present disclosure provides detection and/or tracking of a user's eyes during active communication, including the detection of characteristics of a user's eyes, including, but not limited to, eyeball movement, gaze direction and/or point of focus of the user's eyes, eye blinking, etc. The system uses a statistical-based approach for the determination of the status (e.g., open/closed eye and/or direction of eye gaze) of a user's eyes. The system further provides avatar animation based at least in part on the detected characteristics of the user's eyes in real-time or near real-time during active communication and interaction. Animation of a user's eyes may enhance interaction between users, as the human eyes and the characteristics associated with them, including movement and expression, may convey rich information during active communication, such as, for example, a user's interest, emotions, etc.
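The repeating portion of FIG. 5 (operations 508-516, looping until the communication completes) can be sketched as a simple per-frame processing loop. Every function below is a placeholder for the corresponding operation in the flowchart; the names and the frame/parameter representations are illustrative assumptions.

```python
# Hypothetical sketch of the FIG. 5 processing loop. Each callable is a
# stand-in for the module that would perform that operation.

def run_avatar_call(frames, detect_face, to_params, transmit):
    """Process captured frames until the stream ends.

    frames: iterable of captured images (operation 508).
    detect_face: image -> facial characteristics, or None when no face
        is found (operation 510).
    to_params: facial characteristics -> avatar parameters (operation 512).
    transmit: sends avatar parameters to the remote device (operation 514).
    Returns the number of parameter sets transmitted.
    """
    sent = 0
    for frame in frames:                 # operation 508: capture image
        face = detect_face(frame)        # operation 510: detect/analyze face
        if face is None:
            continue                     # nothing to animate for this frame
        transmit(to_params(face))        # operations 512-514
        sent += 1
    return sent                          # loop exits when the stream ends

outbox = []
sent = run_avatar_call(
    frames=["f1", "f2", "f3"],
    detect_face=lambda f: None if f == "f2" else {"mouth": 0.4},
    to_params=lambda face: {"mouth_open": face["mouth"]},
    transmit=outbox.append,
)
```

Keeping detection, parameter conversion, and transmission as separate stages mirrors the module boundaries described above (face detection module 204, avatar control module 210, communication module 216).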
- A system consistent with the present disclosure provides several advantages. For example, the use of statistical-based methods allows the performance of eye analysis and classification to be improved by increasing sample collection and classifier re-training. Additionally, in contrast to other known methods of eye analysis, such as, for example, template-matching methods and/or geometry-based methods, a system consistent with the present disclosure generally does not require calibration before use, nor does the system require special hardware, such as, for example, infrared lighting or a close-view camera. Additionally, a system consistent with the present disclosure does not require a learning process for new users.
- Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
- As used in any embodiment herein, the term “module” may refer to software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. “Circuitry”, as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.
- Any of the operations described herein may be implemented in a system that includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. The storage medium may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software modules executed by a programmable control device. The storage medium may be non-transitory.
- The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.
- As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth.
- Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
- According to one aspect, there is provided a system for interactive avatar communication between a first user device and a remote user device. The system includes a camera configured to capture images and a communication module configured to initiate and establish communication, and to transmit and receive information, between said first and said remote user devices. The system further includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors result in one or more operations. The operations include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, the facial characteristics including at least one of eye movement and eyelid movement, converting the facial characteristics to avatar parameters, and transmitting at least one of the avatar selection and avatar parameters.
- Another example system includes the foregoing components and determining facial characteristics from the face includes determining a facial expression in the face.
- Another example system includes the foregoing components and the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
- Another example system includes the foregoing components and the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
- Another example system includes the foregoing components and the instructions that when executed by one or more processors result in the following additional operation of receiving at least one of a remote avatar selection or remote avatar parameters.
- Another example system includes the foregoing components and further includes a display, the instructions that when executed by one or more processors result in the following additional operation of displaying an avatar based on the remote avatar selection.
- Another example system includes the foregoing components and the instructions that when executed by one or more processors result in the following additional operation of animating the displayed avatar based on the remote avatar parameters.
- According to one aspect, there is provided an apparatus for interactive avatar communication between a first user device and a remote user device. The apparatus includes a communication module configured to initiate and establish communication between the first and the remote user devices and to transmit information between the first and the remote user devices. The apparatus further includes an avatar selection module configured to allow a user to select an avatar for use during the communication. The apparatus further includes a face detection module configured to detect a facial region in an image of the user and to detect and identify one or more facial characteristics of the face. The facial characteristics include eye movement and eyelid movement of the user. The apparatus further includes an avatar control module configured to convert the facial characteristics to avatar parameters. The communication module is configured to transmit at least one of the avatar selection and avatar parameters.
- Another example apparatus includes the foregoing components and further includes an eye detection/tracking module configured to detect and identify at least one of eye movement of the user with respect to a display and eyelid movement of the user.
- Another example apparatus includes the foregoing components and the eye detection/tracking module includes an eye classification module configured to determine at least one of gaze direction of the user's eyes and blinking of the user's eyes.
- Another example apparatus includes the foregoing components and the avatar selection and avatar parameters are used to generate an avatar on the remote device, the avatar being based on the facial characteristics.
- Another example apparatus includes the foregoing components and the communication module is configured to receive at least one of a remote avatar selection and remote avatar parameters.
- Another example apparatus includes the foregoing components and further includes a display configured to display an avatar based on the remote avatar selection.
- Another example apparatus includes the foregoing components and the avatar control module is configured to animate the displayed avatar based on the remote avatar parameters.
- According to another aspect there is provided a method for interactive avatar communication. The method includes selecting an avatar, initiating communication, capturing an image, detecting a face in the image, and determining facial characteristics from the face. The facial characteristics include at least one of eye movement and eyelid movement. The method further includes converting the facial characteristics to avatar parameters and transmitting at least one of the avatar selection and avatar parameters.
- Another example method includes the foregoing operations and determining facial characteristics from the face includes determining a facial expression in the face.
- Another example method includes the foregoing operations and the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
- Another example method includes the foregoing operations and the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
- Another example method includes the foregoing operations and further includes receiving at least one of a remote avatar selection or remote avatar parameters.
- Another example method includes the foregoing operations and further includes displaying an avatar based on the remote avatar selection on a display.
- Another example method includes the foregoing operations and further includes animating the displayed avatar based on the remote avatar parameters.
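As a sketch only, the sequence of operations in the method above might be wired together as follows; every class and helper here is a hypothetical stub, not an API from the disclosure:

```python
# Hypothetical stubs sketching the described sequence of operations; none of
# these names come from the disclosure itself.

class Channel:
    """Stand-in for the communication module."""
    def __init__(self):
        self.open = False
        self.sent = []
    def initiate(self):               # initiating communication
        self.open = True
    def send(self, message):          # transmitting selection and parameters
        self.sent.append(message)

def detect_face(frame):
    """Stub face detection: report eye/eyelid measurements for a found face."""
    return {"pupil_offset": 0.3, "eyelid_aperture": 0.8}

def to_avatar_parameters(face):
    """Convert facial characteristics to avatar parameters."""
    return {"gaze_x": face["pupil_offset"], "eye_open": face["eyelid_aperture"]}

def run_avatar_call(avatar_id, frame, channel):
    channel.initiate()                   # initiate communication
    face = detect_face(frame)            # detect a face (frame is stubbed here)
    params = to_avatar_parameters(face)  # convert characteristics to parameters
    channel.send({"avatar": avatar_id, "parameters": params})
    return params

channel = Channel()
print(run_avatar_call("fox", frame=None, channel=channel))
```

On the receiving side, the same avatar selection and parameters would drive rendering and animation of the avatar, as the later examples describe.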
- According to another aspect there is provided at least one computer accessible medium including instructions stored thereon. When executed by one or more processors, the instructions may cause a computer system to perform operations for interactive avatar communication. The operations include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, the facial characteristics including at least one of eye movement and eyelid movement, converting the facial characteristics to avatar parameters, and transmitting at least one of the avatar selection and avatar parameters.
- Another example computer accessible medium includes the foregoing operations and determining facial characteristics from the face includes determining a facial expression in the face.
- Another example computer accessible medium includes the foregoing operations and the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
- Another example computer accessible medium includes the foregoing operations and the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
- Another example computer accessible medium includes the foregoing operations and further includes receiving at least one of a remote avatar selection or remote avatar parameters.
- Another example computer accessible medium includes the foregoing operations and further includes displaying an avatar based on the remote avatar selection on a display.
- Another example computer accessible medium includes the foregoing operations and further includes animating the displayed avatar based on the remote avatar parameters.
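The receiving side described in the last few examples — receiving a remote avatar selection, displaying that avatar, and animating it from remote avatar parameters — might look like this minimal, hypothetical sketch:

```python
# Hypothetical receiving-side sketch; class and field names are illustrative.

class AvatarDisplay:
    """Displays an avatar chosen by the remote user and animates it."""
    def __init__(self):
        self.avatar = None   # remote avatar selection
        self.pose = {}       # current animation state
    def show(self, remote_selection):
        self.avatar = remote_selection       # display the remote avatar
    def animate(self, remote_parameters):
        self.pose.update(remote_parameters)  # drive it from remote parameters

display = AvatarDisplay()
display.show("robot")
for update in ({"gaze_x": 0.1}, {"eye_open": 0.0}):
    display.animate(update)
print(display.avatar, display.pose)
```

Because only the avatar selection and compact parameters cross the network, the remote device can animate the avatar without ever receiving the captured image itself.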
- The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.
Claims (23)
1-22. (canceled)
23. A system for interactive avatar communication between a first user device and a remote user device, said system comprising:
a camera configured to capture images;
a communication module configured to initiate and establish communication between said first and said remote user devices and to transmit and receive information between said first and said remote user devices; and
one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors result in the following operations comprising:
selecting an avatar;
initiating communication;
capturing an image;
detecting a face in said image;
determining facial characteristics from said face, said facial characteristics comprising at least one of eye movement and eyelid movement;
converting said facial characteristics to avatar parameters; and
transmitting at least one of said avatar selection and avatar parameters.
24. The system of claim 23, wherein determining facial characteristics from said face comprises determining a facial expression in said face.
25. The system of claim 23, wherein determining facial characteristics from said face comprises determining at least one of gaze direction and blinking of said eyes based on statistical-based analysis selected from the group consisting of linear discriminant analysis (LDA), artificial neural network (ANN) and support vector machine (SVM).
26. The system of claim 23, wherein said avatar selection and avatar parameters are used to generate an avatar on a remote device, said avatar being based on said facial characteristics.
27. The system of claim 23, wherein said avatar selection and avatar parameters are used to generate an avatar in a virtual space, said avatar being based on said facial characteristics.
28. The system of claim 23, wherein said instructions, when executed by one or more processors, result in the following additional operations:
receiving at least one of a remote avatar selection or remote avatar parameters.
29. The system of claim 28, further comprising a display, wherein said instructions, when executed by one or more processors, result in the following additional operations:
displaying an avatar based on said remote avatar selection.
30. The system of claim 29, wherein said instructions, when executed by one or more processors, result in the following additional operations:
animating said displayed avatar based on said remote avatar parameters.
31. An apparatus for interactive avatar communication between a first user device and a remote user device, said apparatus comprising:
a communication module configured to initiate and establish communication between said first and said remote user devices;
an avatar selection module configured to allow a user to select an avatar for use during said communication;
a face detection module configured to detect a facial region in an image of said user and to detect and identify one or more facial characteristics of said face, said facial characteristics comprising at least one of eye movement and eyelid movement of said user; and
an avatar control module configured to convert said facial characteristics to avatar parameters;
wherein said communication module is configured to transmit at least one of said avatar selection and avatar parameters.
32. The apparatus of claim 31, further comprising an eye detection/tracking module configured to detect and identify at least one of eye movement of said user with respect to a display and eyelid movement of said user.
33. The apparatus of claim 32, wherein said eye detection/tracking module comprises an eye classification module configured to determine at least one of gaze direction of said user's eyes and blinking of said user's eyes.
34. The apparatus of claim 33, wherein said determination of said gaze direction and blinking of said user's eyes by said eye detection/tracking module is based on statistical-based analysis selected from the group consisting of linear discriminant analysis (LDA), artificial neural network (ANN) and support vector machine (SVM).
35. The apparatus of claim 31, wherein said avatar selection and avatar parameters are used to generate an avatar on said remote user device, said avatar being based on said facial characteristics.
36. The apparatus of claim 31, wherein said communication module is configured to receive at least one of a remote avatar selection or remote avatar parameters.
37. The apparatus of claim 36, further comprising a display configured to display an avatar based on said remote avatar selection.
38. The apparatus of claim 37, wherein said avatar control module is configured to animate said displayed avatar based on said remote avatar parameters.
39. A method for interactive avatar communication, said method comprising:
selecting an avatar;
initiating communication;
capturing an image;
detecting a face in said image;
determining facial characteristics from said face, said facial characteristics comprising at least one of eye movement and eyelid movement;
converting said facial characteristics to avatar parameters; and
transmitting at least one of said avatar selection or avatar parameters.
40. The method of claim 39, wherein the avatar selection and avatar parameters are used to generate an avatar on a remote device, said avatar being based on said facial characteristics.
41. The method of claim 39, further comprising receiving at least one of a remote avatar selection or remote avatar parameters.
42. The method of claim 41, further comprising displaying an avatar based on the remote avatar selection.
43. The method of claim 42, further comprising animating said displayed avatar based on said remote avatar parameters.
44. At least one computer accessible medium storing instructions which, when executed by a machine, cause the machine to perform operations comprising:
selecting an avatar;
initiating communication;
capturing an image;
detecting a face in said image;
determining facial characteristics from said face, said facial characteristics comprising at least one of eye movement and eyelid movement;
converting said facial characteristics to avatar parameters; and
transmitting at least one of said avatar selection or avatar parameters.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/084902 WO2013097139A1 (en) | 2011-12-29 | 2011-12-29 | Communication using avatar |
CNPCT/CN2011/084902 | 2011-12-29 | ||
PCT/CN2012/000461 WO2013097264A1 (en) | 2011-12-29 | 2012-04-09 | System and method for communication using interactive avatar |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2012/000461 A-371-Of-International WO2013097264A1 (en) | 2011-12-29 | 2012-04-09 | System and method for communication using interactive avatar |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/643,984 Continuation US20170310934A1 (en) | 2011-12-29 | 2017-07-07 | System and method for communication using interactive avatar |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140055554A1 (en) | 2014-02-27 |
Family
ID=48696221
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/993,612 Active US9398262B2 (en) | 2011-12-29 | 2011-12-29 | Communication using avatar |
US13/996,230 Abandoned US20140055554A1 (en) | 2011-12-29 | 2012-04-09 | System and method for communication using interactive avatar |
US15/184,409 Abandoned US20170054945A1 (en) | 2011-12-29 | 2016-06-16 | Communication using avatar |
US15/395,661 Abandoned US20170111616A1 (en) | 2011-12-29 | 2016-12-30 | Communication using avatar |
US15/395,657 Abandoned US20170111615A1 (en) | 2011-12-29 | 2016-12-30 | Communication using avatar |
US15/643,984 Abandoned US20170310934A1 (en) | 2011-12-29 | 2017-07-07 | System and method for communication using interactive avatar |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/993,612 Active US9398262B2 (en) | 2011-12-29 | 2011-12-29 | Communication using avatar |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/184,409 Abandoned US20170054945A1 (en) | 2011-12-29 | 2016-06-16 | Communication using avatar |
US15/395,661 Abandoned US20170111616A1 (en) | 2011-12-29 | 2016-12-30 | Communication using avatar |
US15/395,657 Abandoned US20170111615A1 (en) | 2011-12-29 | 2016-12-30 | Communication using avatar |
US15/643,984 Abandoned US20170310934A1 (en) | 2011-12-29 | 2017-07-07 | System and method for communication using interactive avatar |
Country Status (3)
Country | Link |
---|---|
US (6) | US9398262B2 (en) |
CN (3) | CN106961621A (en) |
WO (2) | WO2013097139A1 (en) |
Cited By (203)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130257876A1 (en) * | 2012-03-30 | 2013-10-03 | Videx, Inc. | Systems and Methods for Providing An Interactive Avatar |
CN104301655A (en) * | 2014-10-29 | 2015-01-21 | 四川智诚天逸科技有限公司 | Eye-tracking video communication device |
JP2015172883A (en) * | 2014-03-12 | 2015-10-01 | 株式会社コナミデジタルエンタテインメント | Terminal equipment, information communication method, and information communication program |
US20160062987A1 (en) * | 2014-08-26 | 2016-03-03 | Ncr Corporation | Language independent customer communications |
US9357174B2 (en) | 2012-04-09 | 2016-05-31 | Intel Corporation | System and method for avatar management and selection |
US9460541B2 (en) | 2013-03-29 | 2016-10-04 | Intel Corporation | Avatar animation, social networking and touch screen applications |
US20160353056A1 (en) * | 2013-08-09 | 2016-12-01 | Samsung Electronics Co., Ltd. | Hybrid visual communication |
US20170046065A1 (en) * | 2015-04-07 | 2017-02-16 | Intel Corporation | Avatar keyboard |
US20170069124A1 (en) * | 2015-04-07 | 2017-03-09 | Intel Corporation | Avatar generation and animations |
CN107333086A (en) * | 2016-04-29 | 2017-11-07 | 掌赢信息科技(上海)有限公司 | A kind of method and device that video communication is carried out in virtual scene |
WO2018006053A1 (en) * | 2016-06-30 | 2018-01-04 | Snapchat, Inc. | Avatar based ideogram generation |
US20180003983A1 (en) * | 2012-09-12 | 2018-01-04 | Sony Corporation | Image display device, image display method, and recording medium |
US20180027307A1 (en) * | 2016-07-25 | 2018-01-25 | Yahoo!, Inc. | Emotional reaction sharing |
US9948887B2 (en) | 2013-08-09 | 2018-04-17 | Samsung Electronics Co., Ltd. | Hybrid visual communication |
WO2018128996A1 (en) * | 2017-01-03 | 2018-07-12 | Clipo, Inc. | System and method for facilitating dynamic avatar based on real-time facial expression detection |
US20180211096A1 (en) * | 2015-06-30 | 2018-07-26 | Beijing Kuangshi Technology Co., Ltd. | Living-body detection method and device and computer program product |
US10244208B1 (en) * | 2017-12-12 | 2019-03-26 | Facebook, Inc. | Systems and methods for visually representing users in communication applications |
US20190130629A1 (en) * | 2017-10-30 | 2019-05-02 | Snap Inc. | Animated chat presence |
US10325417B1 (en) | 2018-05-07 | 2019-06-18 | Apple Inc. | Avatar creation user interface |
US10375313B1 (en) * | 2018-05-07 | 2019-08-06 | Apple Inc. | Creative camera |
CN110174942A (en) * | 2019-04-30 | 2019-08-27 | 北京航空航天大学 | Eye movement synthetic method and device |
KR20190101835A (en) * | 2018-02-23 | 2019-09-02 | 삼성전자주식회사 | Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof |
US10419497B2 (en) * | 2015-03-31 | 2019-09-17 | Bose Corporation | Establishing communication between digital media servers and audio playback devices in audio systems |
US20190371039A1 (en) * | 2018-06-05 | 2019-12-05 | UBTECH Robotics Corp. | Method and smart terminal for switching expression of smart terminal |
US10528243B2 (en) | 2017-06-04 | 2020-01-07 | Apple Inc. | User interface camera effects |
US10602053B2 (en) | 2016-06-12 | 2020-03-24 | Apple Inc. | User interface for camera effects |
US10645294B1 (en) | 2019-05-06 | 2020-05-05 | Apple Inc. | User interfaces for capturing and managing visual media |
US10666902B1 (en) | 2019-01-30 | 2020-05-26 | Microsoft Technology Licensing, Llc | Display conflict elimination in videoconferencing |
US10839563B2 (en) | 2017-12-20 | 2020-11-17 | Samsung Electronics Co., Ltd. | Method and apparatus for processing image interaction |
US10848446B1 (en) | 2016-07-19 | 2020-11-24 | Snap Inc. | Displaying customized electronic messaging graphics |
US10852918B1 (en) | 2019-03-08 | 2020-12-01 | Snap Inc. | Contextual information in chat |
CN112042182A (en) * | 2018-05-07 | 2020-12-04 | 谷歌有限责任公司 | Manipulating remote avatars by facial expressions |
US10861170B1 (en) | 2018-11-30 | 2020-12-08 | Snap Inc. | Efficient human pose tracking in videos |
US10872451B2 (en) | 2018-10-31 | 2020-12-22 | Snap Inc. | 3D avatar rendering |
US10880246B2 (en) | 2016-10-24 | 2020-12-29 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US10893385B1 (en) | 2019-06-07 | 2021-01-12 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US10895964B1 (en) | 2018-09-25 | 2021-01-19 | Snap Inc. | Interface to display shared user groups |
US10896534B1 (en) | 2018-09-19 | 2021-01-19 | Snap Inc. | Avatar style transformation using neural networks |
US10904181B2 (en) | 2018-09-28 | 2021-01-26 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US10902661B1 (en) | 2018-11-28 | 2021-01-26 | Snap Inc. | Dynamic composite user identifier |
US10911387B1 (en) | 2019-08-12 | 2021-02-02 | Snap Inc. | Message reminder interface |
US10936066B1 (en) | 2019-02-13 | 2021-03-02 | Snap Inc. | Sleep detection in a location sharing system |
US10936157B2 (en) | 2017-11-29 | 2021-03-02 | Snap Inc. | Selectable item including a customized graphic for an electronic messaging application |
US10939246B1 (en) | 2019-01-16 | 2021-03-02 | Snap Inc. | Location-based context information sharing in a messaging system |
US10952013B1 (en) | 2017-04-27 | 2021-03-16 | Snap Inc. | Selective location-based identity communication |
US10949648B1 (en) | 2018-01-23 | 2021-03-16 | Snap Inc. | Region-based stabilized face tracking |
US10951562B2 (en) | 2017-01-18 | 2021-03-16 | Snap. Inc. | Customized contextual media content item generation |
US10964082B2 (en) | 2019-02-26 | 2021-03-30 | Snap Inc. | Avatar based on weather |
US10963529B1 (en) | 2017-04-27 | 2021-03-30 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
USD916811S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916872S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916871S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916810S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916809S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
US10984575B2 (en) | 2019-02-06 | 2021-04-20 | Snap Inc. | Body pose estimation |
US10991395B1 (en) | 2014-02-05 | 2021-04-27 | Snap Inc. | Method for real time video processing involving changing a color of an object on a human face in a video |
US10992619B2 (en) | 2019-04-30 | 2021-04-27 | Snap Inc. | Messaging system with avatar generation |
US11010022B2 (en) | 2019-02-06 | 2021-05-18 | Snap Inc. | Global event-based avatar |
US11032670B1 (en) | 2019-01-14 | 2021-06-08 | Snap Inc. | Destination sharing in location sharing system |
US11030813B2 (en) | 2018-08-30 | 2021-06-08 | Snap Inc. | Video clip object tracking |
US11039270B2 (en) | 2019-03-28 | 2021-06-15 | Snap Inc. | Points of interest in a location sharing system |
US11036781B1 (en) | 2020-01-30 | 2021-06-15 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11036989B1 (en) | 2019-12-11 | 2021-06-15 | Snap Inc. | Skeletal tracking using previous frames |
US11048916B2 (en) | 2016-03-31 | 2021-06-29 | Snap Inc. | Automated avatar generation |
US11054973B1 (en) | 2020-06-01 | 2021-07-06 | Apple Inc. | User interfaces for managing media |
US11055514B1 (en) | 2018-12-14 | 2021-07-06 | Snap Inc. | Image face manipulation |
US11063891B2 (en) | 2019-12-03 | 2021-07-13 | Snap Inc. | Personalized avatar notification |
US11061372B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | User interfaces related to time |
US11069103B1 (en) | 2017-04-20 | 2021-07-20 | Snap Inc. | Customized user interface for electronic communications |
US11074675B2 (en) | 2018-07-31 | 2021-07-27 | Snap Inc. | Eye texture inpainting |
US11080917B2 (en) | 2019-09-30 | 2021-08-03 | Snap Inc. | Dynamic parameterized user avatar stories |
US11100311B2 (en) | 2016-10-19 | 2021-08-24 | Snap Inc. | Neural networks for facial modeling |
US11103795B1 (en) | 2018-10-31 | 2021-08-31 | Snap Inc. | Game drawer |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
US11122094B2 (en) | 2017-07-28 | 2021-09-14 | Snap Inc. | Software application manager for messaging applications |
US11120601B2 (en) | 2018-02-28 | 2021-09-14 | Snap Inc. | Animated expressive icon |
US11120597B2 (en) | 2017-10-26 | 2021-09-14 | Snap Inc. | Joint audio-video facial animation system |
US11128792B2 (en) | 2018-09-28 | 2021-09-21 | Apple Inc. | Capturing and displaying images with multiple focal planes |
US11128586B2 (en) | 2019-12-09 | 2021-09-21 | Snap Inc. | Context sensitive avatar captions |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11140515B1 (en) | 2019-12-30 | 2021-10-05 | Snap Inc. | Interfaces for relative device positioning |
US11166123B1 (en) | 2019-03-28 | 2021-11-02 | Snap Inc. | Grouped transmission of location data in a location sharing system |
US11169658B2 (en) | 2019-12-31 | 2021-11-09 | Snap Inc. | Combined map icon with action indicator |
US11176737B2 (en) | 2018-11-27 | 2021-11-16 | Snap Inc. | Textured mesh building |
US11188190B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | Generating animation overlays in a communication session |
US11189070B2 (en) | 2018-09-28 | 2021-11-30 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
US11189098B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | 3D object camera customization system |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11212449B1 (en) | 2020-09-25 | 2021-12-28 | Apple Inc. | User interfaces for media capture and management |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
US11217020B2 (en) | 2020-03-16 | 2022-01-04 | Snap Inc. | 3D cutout image modification |
US11227442B1 (en) | 2019-12-19 | 2022-01-18 | Snap Inc. | 3D captions with semantic graphical elements |
US11229849B2 (en) | 2012-05-08 | 2022-01-25 | Snap Inc. | System and method for generating and displaying avatars |
US11231591B2 (en) * | 2017-02-24 | 2022-01-25 | Sony Corporation | Information processing apparatus, information processing method, and program |
US11245658B2 (en) | 2018-09-28 | 2022-02-08 | Snap Inc. | System and method of generating private notifications between users in a communication session |
US11263817B1 (en) | 2019-12-19 | 2022-03-01 | Snap Inc. | 3D captions with face tracking |
US11284144B2 (en) | 2020-01-30 | 2022-03-22 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUs |
US11295502B2 (en) | 2014-12-23 | 2022-04-05 | Intel Corporation | Augmented facial animation |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US11303850B2 (en) | 2012-04-09 | 2022-04-12 | Intel Corporation | Communication using interactive avatars |
US11307747B2 (en) | 2019-07-11 | 2022-04-19 | Snap Inc. | Edge gesture interface with smart interactions |
US11310176B2 (en) | 2018-04-13 | 2022-04-19 | Snap Inc. | Content suggestion system |
US11314324B2 (en) * | 2018-06-11 | 2022-04-26 | Fotonation Limited | Neural network image processing apparatus |
US11321857B2 (en) | 2018-09-28 | 2022-05-03 | Apple Inc. | Displaying and editing images with depth information |
US11320969B2 (en) | 2019-09-16 | 2022-05-03 | Snap Inc. | Messaging system with battery level sharing |
US11350026B1 (en) | 2021-04-30 | 2022-05-31 | Apple Inc. | User interfaces for altering visual media |
US11356720B2 (en) | 2020-01-30 | 2022-06-07 | Snap Inc. | Video generation system to render frames on demand |
US11360733B2 (en) | 2020-09-10 | 2022-06-14 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11411895B2 (en) | 2017-11-29 | 2022-08-09 | Snap Inc. | Generating aggregated media content items for a group of users in an electronic messaging application |
US11425062B2 (en) | 2019-09-27 | 2022-08-23 | Snap Inc. | Recommended content viewed by friends |
US11425068B2 (en) | 2009-02-03 | 2022-08-23 | Snap Inc. | Interactive avatar in messaging environment |
US11438341B1 (en) | 2016-10-10 | 2022-09-06 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US11443462B2 (en) * | 2018-05-23 | 2022-09-13 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for generating cartoon face image, and computer storage medium |
US11450051B2 (en) | 2020-11-18 | 2022-09-20 | Snap Inc. | Personalized avatar real-time motion capture |
US11452939B2 (en) | 2020-09-21 | 2022-09-27 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11455081B2 (en) | 2019-08-05 | 2022-09-27 | Snap Inc. | Message thread prioritization interface |
US11455082B2 (en) | 2018-09-28 | 2022-09-27 | Snap Inc. | Collaborative achievement interface |
US11460974B1 (en) | 2017-11-28 | 2022-10-04 | Snap Inc. | Content discovery refresh |
US11468625B2 (en) | 2018-09-11 | 2022-10-11 | Apple Inc. | User interfaces for simulated depth effects |
US11481988B2 (en) | 2010-04-07 | 2022-10-25 | Apple Inc. | Avatar editing environment |
US11516173B1 (en) | 2018-12-26 | 2022-11-29 | Snap Inc. | Message composition interface |
US11543939B2 (en) | 2020-06-08 | 2023-01-03 | Snap Inc. | Encoded image based messaging system |
US11544885B2 (en) | 2021-03-19 | 2023-01-03 | Snap Inc. | Augmented reality experience based on physical items |
US11544883B1 (en) | 2017-01-16 | 2023-01-03 | Snap Inc. | Coded vision system |
US11562548B2 (en) | 2021-03-22 | 2023-01-24 | Snap Inc. | True size eyewear in real time |
US11580700B2 (en) | 2016-10-24 | 2023-02-14 | Snap Inc. | Augmented reality object manipulation |
US11580682B1 (en) | 2020-06-30 | 2023-02-14 | Snap Inc. | Messaging system with augmented reality makeup |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US11615592B2 (en) | 2020-10-27 | 2023-03-28 | Snap Inc. | Side-by-side character animation from realtime 3D body motion capture |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11625873B2 (en) | 2020-03-30 | 2023-04-11 | Snap Inc. | Personalized media overlay recommendation |
US11636654B2 (en) | 2021-05-19 | 2023-04-25 | Snap Inc. | AR-based connected portal shopping |
US11636662B2 (en) | 2021-09-30 | 2023-04-25 | Snap Inc. | Body normal network light and rendering control |
US11651539B2 (en) | 2020-01-30 | 2023-05-16 | Snap Inc. | System for generating media content items on demand |
US11651572B2 (en) | 2021-10-11 | 2023-05-16 | Snap Inc. | Light and rendering of garments |
US11662900B2 (en) | 2016-05-31 | 2023-05-30 | Snap Inc. | Application control using a gesture based trigger |
US11660022B2 (en) | 2020-10-27 | 2023-05-30 | Snap Inc. | Adaptive skeletal joint smoothing |
US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
US11670059B2 (en) | 2021-09-01 | 2023-06-06 | Snap Inc. | Controlling interactive fashion based on body gestures |
US11673054B2 (en) | 2021-09-07 | 2023-06-13 | Snap Inc. | Controlling AR games on fashion items |
US11676199B2 (en) | 2019-06-28 | 2023-06-13 | Snap Inc. | Generating customizable avatar outfits |
US11683280B2 (en) | 2020-06-10 | 2023-06-20 | Snap Inc. | Messaging system including an external-resource dock and drawer |
US11706521B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | User interfaces for capturing and managing visual media |
US11704878B2 (en) | 2017-01-09 | 2023-07-18 | Snap Inc. | Surface aware lens |
US11722764B2 (en) | 2018-05-07 | 2023-08-08 | Apple Inc. | Creative camera |
US11734959B2 (en) | 2021-03-16 | 2023-08-22 | Snap Inc. | Activating hands-free mode on mirroring device |
US11734866B2 (en) | 2021-09-13 | 2023-08-22 | Snap Inc. | Controlling interactive fashion based on voice |
US11734894B2 (en) | 2020-11-18 | 2023-08-22 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
US11748958B2 (en) | 2021-12-07 | 2023-09-05 | Snap Inc. | Augmented reality unboxing experience |
US11748931B2 (en) | 2020-11-18 | 2023-09-05 | Snap Inc. | Body animation sharing and remixing |
US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
US11770601B2 (en) | 2019-05-06 | 2023-09-26 | Apple Inc. | User interfaces for capturing and managing visual media |
US11778339B2 (en) | 2021-04-30 | 2023-10-03 | Apple Inc. | User interfaces for altering visual media |
US11776190B2 (en) | 2021-06-04 | 2023-10-03 | Apple Inc. | Techniques for managing an avatar on a lock screen |
US11790531B2 (en) | 2021-02-24 | 2023-10-17 | Snap Inc. | Whole body segmentation |
US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
US11798201B2 (en) | 2021-03-16 | 2023-10-24 | Snap Inc. | Mirroring device with whole-body outfits |
US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
US11809633B2 (en) | 2021-03-16 | 2023-11-07 | Snap Inc. | Mirroring device with pointing based navigation |
US11818286B2 (en) | 2020-03-30 | 2023-11-14 | Snap Inc. | Avatar recommendation and reply |
US11823346B2 (en) | 2022-01-17 | 2023-11-21 | Snap Inc. | AR body part tracking system |
US11830209B2 (en) | 2017-05-26 | 2023-11-28 | Snap Inc. | Neural network-based image stream modification |
US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
US11842411B2 (en) | 2017-04-27 | 2023-12-12 | Snap Inc. | Location-based virtual avatars |
US11854069B2 (en) | 2021-07-16 | 2023-12-26 | Snap Inc. | Personalized try-on ads |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11863513B2 (en) | 2020-08-31 | 2024-01-02 | Snap Inc. | Media content playback and comments management |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11870745B1 (en) | 2022-06-28 | 2024-01-09 | Snap Inc. | Media gallery sharing and management |
US11870743B1 (en) | 2017-01-23 | 2024-01-09 | Snap Inc. | Customized digital avatar accessories |
US11875439B2 (en) | 2018-04-18 | 2024-01-16 | Snap Inc. | Augmented expression system |
US11880947B2 (en) | 2021-12-21 | 2024-01-23 | Snap Inc. | Real-time upper-body garment exchange |
US11887260B2 (en) | 2021-12-30 | 2024-01-30 | Snap Inc. | AR position indicator |
US11887231B2 (en) | 2015-12-18 | 2024-01-30 | Tahoe Research, Ltd. | Avatar animation system |
US11888795B2 (en) | 2020-09-21 | 2024-01-30 | Snap Inc. | Chats with micro sound clips |
US11893166B1 (en) | 2022-11-08 | 2024-02-06 | Snap Inc. | User avatar movement control using an augmented reality eyewear device |
US11900506B2 (en) | 2021-09-09 | 2024-02-13 | Snap Inc. | Controlling interactive fashion based on facial expressions |
US11908243B2 (en) | 2021-03-16 | 2024-02-20 | Snap Inc. | Menu hierarchy navigation on electronic mirroring devices |
US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
US11910269B2 (en) | 2020-09-25 | 2024-02-20 | Snap Inc. | Augmented reality content items including user avatar to share location |
US11921998B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Editing features of an avatar |
US11922010B2 (en) | 2020-06-08 | 2024-03-05 | Snap Inc. | Providing contextual information with keyboard interface for messaging system |
US11928783B2 (en) | 2021-12-30 | 2024-03-12 | Snap Inc. | AR position and orientation along a plane |
US11941227B2 (en) | 2021-06-30 | 2024-03-26 | Snap Inc. | Hybrid search system for customizable media |
US11956190B2 (en) | 2020-05-08 | 2024-04-09 | Snap Inc. | Messaging system with a carousel of related entities |
US11954762B2 (en) | 2022-01-19 | 2024-04-09 | Snap Inc. | Object replacement system |
US11960784B2 (en) | 2021-12-07 | 2024-04-16 | Snap Inc. | Shared augmented reality unboxing experience |
US11969075B2 (en) | 2020-03-31 | 2024-04-30 | Snap Inc. | Augmented reality beauty product tutorials |
US11978283B2 (en) | 2021-03-16 | 2024-05-07 | Snap Inc. | Mirroring device with a hands-free mode |
US11983462B2 (en) | 2021-08-31 | 2024-05-14 | Snap Inc. | Conversation guided augmented reality experience |
US11983826B2 (en) | 2021-09-30 | 2024-05-14 | Snap Inc. | 3D upper garment tracking |
US11991419B2 (en) | 2020-01-30 | 2024-05-21 | Snap Inc. | Selecting avatars to be included in the video being generated on demand |
US11995757B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Customized animation from video |
US11996113B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Voice notes with changing effects |
US12002146B2 (en) | 2022-03-28 | 2024-06-04 | Snap Inc. | 3D modeling based on neural light field |
US12008811B2 (en) | 2020-12-30 | 2024-06-11 | Snap Inc. | Machine learning-based selection of a representative video frame within a messaging application |
EP4382182A1 (en) * | 2022-12-08 | 2024-06-12 | Sony Interactive Entertainment Europe Limited | Device and method for controlling a virtual avatar on an electronic device |
US12020386B2 (en) | 2022-06-23 | 2024-06-25 | Snap Inc. | Applying pregenerated virtual experiences in new location |
US12020358B2 (en) | 2021-10-29 | 2024-06-25 | Snap Inc. | Animated custom sticker creation |
US12020384B2 (en) | 2022-06-21 | 2024-06-25 | Snap Inc. | Integrating augmented reality experiences with other components |
US12028301B2 (en) | 2023-01-31 | 2024-07-02 | Snap Inc. | Contextual generation and selection of customized media content |
Families Citing this family (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10875182B2 (en) | 2008-03-20 | 2020-12-29 | Teladoc Health, Inc. | Remote presence system mounted to operating room hardware |
US9154942B2 (en) * | 2008-11-26 | 2015-10-06 | Free Stream Media Corp. | Zero configuration communication between a browser and a networked media device |
US10334324B2 (en) | 2008-11-26 | 2019-06-25 | Free Stream Media Corp. | Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device |
US10880340B2 (en) | 2008-11-26 | 2020-12-29 | Free Stream Media Corp. | Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device |
US10631068B2 (en) | 2008-11-26 | 2020-04-21 | Free Stream Media Corp. | Content exposure attribution based on renderings of related content across multiple devices |
US10567823B2 (en) | 2008-11-26 | 2020-02-18 | Free Stream Media Corp. | Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device |
US8180891B1 (en) | 2008-11-26 | 2012-05-15 | Free Stream Media Corp. | Discovery, access control, and communication with networked services from within a security sandbox |
US9986279B2 (en) | 2008-11-26 | 2018-05-29 | Free Stream Media Corp. | Discovery, access control, and communication with networked services |
US9519772B2 (en) | 2008-11-26 | 2016-12-13 | Free Stream Media Corp. | Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device |
US10419541B2 (en) | 2008-11-26 | 2019-09-17 | Free Stream Media Corp. | Remotely control devices over a network without authentication or registration |
US9961388B2 (en) | 2008-11-26 | 2018-05-01 | David Harrison | Exposure of public internet protocol addresses in an advertising exchange server to improve relevancy of advertisements |
US10977693B2 (en) | 2008-11-26 | 2021-04-13 | Free Stream Media Corp. | Association of content identifier of audio-visual data with additional data through capture infrastructure |
US8670017B2 (en) | 2010-03-04 | 2014-03-11 | Intouch Technologies, Inc. | Remote presence system including a cart that supports a robot face and an overhead camera |
US8718837B2 (en) | 2011-01-28 | 2014-05-06 | Intouch Technologies | Interfacing with a mobile telepresence robot |
US9323250B2 (en) | 2011-01-28 | 2016-04-26 | Intouch Technologies, Inc. | Time-dependent navigation of telepresence robots |
US9098611B2 (en) | 2012-11-26 | 2015-08-04 | Intouch Technologies, Inc. | Enhanced video interaction for a user interface of a telepresence network |
US9361021B2 (en) | 2012-05-22 | 2016-06-07 | iRobot Corporation | Graphical user interfaces including touchpad driving interfaces for telemedicine devices |
EP2852475A4 (en) | 2012-05-22 | 2016-01-20 | Intouch Technologies Inc | Social behavior rules for a medical telepresence robot |
WO2014139118A1 (en) | 2013-03-14 | 2014-09-18 | Intel Corporation | Adaptive facial expression calibration |
US10044849B2 (en) * | 2013-03-15 | 2018-08-07 | Intel Corporation | Scalable avatar messaging |
WO2016045010A1 (en) * | 2014-09-24 | 2016-03-31 | Intel Corporation | Facial gesture driven animation communication system |
US9633463B2 (en) * | 2014-09-24 | 2017-04-25 | Intel Corporation | User gesture driven avatar apparatus and method |
JP6547290B2 (en) * | 2014-12-17 | 2019-07-24 | Omron Corporation | Image sensing system |
CN104618721B (en) * | 2015-01-28 | 2018-01-26 | Shandong University | Extremely-low-bit-rate face video coding and decoding method based on feature modeling |
CN105407313A (en) * | 2015-10-28 | 2016-03-16 | 掌赢信息科技(上海)有限公司 | Video calling method, equipment and system |
US20170178287A1 (en) * | 2015-12-21 | 2017-06-22 | Glen J. Anderson | Identity obfuscation |
CN105516785A (en) * | 2016-02-18 | 2016-04-20 | 启云科技股份有限公司 | Communication system, communication method and server for transmitting human-shaped doll image or video |
CN107705341B (en) * | 2016-08-08 | 2023-05-12 | 创奇思科研有限公司 | Method and device for generating user expression head portrait |
DK179471B1 (en) | 2016-09-23 | 2018-11-26 | Apple Inc. | Image data for enhanced user interactions |
JP6698216B2 (en) | 2016-09-23 | 2020-05-27 | Apple Inc. | Creation and editing of avatars |
US10950275B2 (en) | 2016-11-18 | 2021-03-16 | Facebook, Inc. | Methods and systems for tracking media effects in a media effect index |
US10303928B2 (en) | 2016-11-29 | 2019-05-28 | Facebook, Inc. | Face detection for video calls |
US10122965B2 (en) | 2016-11-29 | 2018-11-06 | Facebook, Inc. | Face detection for background management |
US10554908B2 (en) * | 2016-12-05 | 2020-02-04 | Facebook, Inc. | Media effect application |
US11862302B2 (en) | 2017-04-24 | 2024-01-02 | Teladoc Health, Inc. | Automated transcription and documentation of tele-health encounters |
DK179948B1 (en) | 2017-05-16 | 2019-10-22 | Apple Inc. | Recording and sending Emoji |
KR102331988B1 (en) * | 2017-05-16 | 2021-11-29 | Apple Inc. | Record and send emojis |
KR20230144661A (en) * | 2017-05-16 | 2023-10-16 | Apple Inc. | Emoji recording and sending |
CN110490093B (en) * | 2017-05-16 | 2020-10-16 | Apple Inc. | Emoticon recording and transmission |
US10483007B2 (en) | 2017-07-25 | 2019-11-19 | Intouch Technologies, Inc. | Modular telehealth cart with thermal imaging and touch screen user interface |
US11636944B2 (en) | 2017-08-25 | 2023-04-25 | Teladoc Health, Inc. | Connectivity infrastructure for a telehealth platform |
US9996940B1 (en) * | 2017-10-25 | 2018-06-12 | Connectivity Labs Inc. | Expression transfer across telecommunications networks |
US10613827B2 (en) * | 2018-03-06 | 2020-04-07 | Language Line Services, Inc. | Configuration for simulating a video remote interpretation session |
US10617299B2 (en) | 2018-04-27 | 2020-04-14 | Intouch Technologies, Inc. | Telehealth cart that supports a removable tablet with seamless audio/video switching |
DK179992B1 (en) | 2018-05-07 | 2020-01-14 | Apple Inc. | Displaying user interfaces associated with physical activities |
CN108845741B (en) * | 2018-06-19 | 2020-08-21 | Beijing Baidu Netcom Science and Technology Co., Ltd. | AR expression generation method, client, terminal and storage medium |
WO2020013891A1 (en) * | 2018-07-11 | 2020-01-16 | Apple Inc. | Techniques for providing audio and video effects |
KR102664710B1 (en) * | 2018-08-08 | 2024-05-09 | Samsung Electronics Co., Ltd. | Electronic device for displaying avatar corresponding to external object according to change in position of external object |
US20200175739A1 (en) * | 2018-12-04 | 2020-06-04 | Robert Bosch Gmbh | Method and Device for Generating and Displaying an Electronic Avatar |
CN109727320A (en) * | 2018-12-29 | 2019-05-07 | Samsung Electronics (China) R&D Center | Avatar generation method and device |
DK201970530A1 (en) | 2019-05-06 | 2021-01-28 | Apple Inc | Avatar integration with multiple applications |
CN110213521A (en) * | 2019-05-22 | 2019-09-06 | 创易汇(北京)科技有限公司 | Virtual instant messaging method |
US11074753B2 (en) * | 2019-06-02 | 2021-07-27 | Apple Inc. | Multi-pass object rendering using a three-dimensional geometric constraint |
KR20210012724A (en) | 2019-07-26 | 2021-02-03 | Samsung Electronics Co., Ltd. | Electronic device for providing avatar and operating method thereof |
US11158028B1 (en) * | 2019-10-28 | 2021-10-26 | Snap Inc. | Mirrored selfie |
WO2021252160A1 (en) | 2020-06-08 | 2021-12-16 | Apple Inc. | Presenting avatars in three-dimensional environments |
CN111641798A (en) * | 2020-06-15 | 2020-09-08 | Heilongjiang University of Science and Technology | Video communication method and device |
CA3194856A1 (en) * | 2020-10-05 | 2022-04-14 | Michel Boivin | System and methods for enhanced videoconferencing |
US11418760B1 (en) | 2021-01-29 | 2022-08-16 | Microsoft Technology Licensing, Llc | Visual indicators for providing user awareness of independent activity of participants of a communication session |
CN115002391A (en) * | 2022-05-16 | 2022-09-02 | China FAW Co., Ltd. | Vehicle-mounted follow-up virtual image video conference system and control method |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020197967A1 (en) * | 2001-06-20 | 2002-12-26 | Holger Scholl | Communication system with system components for ascertaining the authorship of a communication contribution |
US7076118B1 (en) * | 1997-12-05 | 2006-07-11 | Sharp Laboratories Of America, Inc. | Document classification system |
US20070065039A1 (en) * | 2005-09-22 | 2007-03-22 | Samsung Electronics Co., Ltd. | Image capturing apparatus with image compensation and method thereof |
US20070201730A1 (en) * | 2006-02-20 | 2007-08-30 | Funai Electric Co., Ltd. | Television set and authentication device |
US20070230794A1 (en) * | 2006-04-04 | 2007-10-04 | Logitech Europe S.A. | Real-time automatic facial feature replacement |
US20080059570A1 (en) * | 2006-09-05 | 2008-03-06 | Aol Llc | Enabling an im user to navigate a virtual world |
US20090055484A1 (en) * | 2007-08-20 | 2009-02-26 | Thanh Vuong | System and method for representation of electronic mail users using avatars |
US20100156781A1 (en) * | 2008-12-19 | 2010-06-24 | Samsung Electronics Co., Ltd. | Eye gaze control during avatar-based communication |
US20100189354A1 (en) * | 2009-01-28 | 2010-07-29 | Xerox Corporation | Modeling images as sets of weighted features |
US20100220897A1 (en) * | 2009-02-27 | 2010-09-02 | Kabushiki Kaisha Toshiba | Information processing apparatus and network conference system |
US20110085139A1 (en) * | 2009-10-08 | 2011-04-14 | Tobii Technology AB | Eye-tracking using a GPU |
US20130109302A1 (en) * | 2011-10-31 | 2013-05-02 | Royce A. Levien | Multi-modality communication with conversion offloading |
US20130120522A1 (en) * | 2011-11-16 | 2013-05-16 | Cisco Technology, Inc. | System and method for alerting a participant in a video conference |
US20130293584A1 (en) * | 2011-12-20 | 2013-11-07 | Glen J. Anderson | User-to-user communication enhancement with augmented reality |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5880731A (en) * | 1995-12-14 | 1999-03-09 | Microsoft Corporation | Use of avatars with automatic gesturing and bounded interaction in on-line chat session |
JP3771989B2 (en) * | 1997-03-24 | 2006-05-10 | Olympus Corporation | Image/audio communication system and videophone transmission/reception method |
KR100530812B1 (en) * | 1998-04-13 | 2005-11-28 | Nevengineering, Inc. | Wavelet-based facial motion capture for avatar animation |
EP1574023A1 (en) * | 2002-12-12 | 2005-09-14 | Koninklijke Philips Electronics N.V. | Avatar database for mobile video communications |
US7106358B2 (en) * | 2002-12-30 | 2006-09-12 | Motorola, Inc. | Method, system and apparatus for telepresence communications |
JP2004289254A (en) * | 2003-03-19 | 2004-10-14 | Matsushita Electric Ind Co Ltd | Videophone terminal |
US7447211B1 (en) * | 2004-03-23 | 2008-11-04 | Avaya Inc. | Method and apparatus of establishing a communication channel using protected network resources |
US7969461B2 (en) * | 2006-03-30 | 2011-06-28 | Polycom, Inc. | System and method for exchanging connection information for videoconferencing units using instant messaging |
CN101098241A (en) * | 2006-06-26 | 2008-01-02 | Tencent Technology (Shenzhen) Co., Ltd. | Method and system for implementing virtual image |
CN1972274A (en) * | 2006-11-07 | 2007-05-30 | 搜图科技(南京)有限公司 | System and method for processing facial image change based on Internet and mobile application |
CN101669328A (en) * | 2007-02-09 | 2010-03-10 | 达丽星网络有限公司 | Method and apparatus for the adaptation of multimedia content in telecommunications networks |
GB0703974D0 (en) * | 2007-03-01 | 2007-04-11 | Sony Comp Entertainment Europe | Entertainment device |
CN101472158A (en) | 2007-12-27 | 2009-07-01 | 上海银晨智能识别科技有限公司 | Network photographic device based on human face detection and image forming method |
US8340452B2 (en) * | 2008-03-17 | 2012-12-25 | Xerox Corporation | Automatic generation of a photo guide |
EP2107708A1 (en) * | 2008-04-04 | 2009-10-07 | Deutsche Thomson OHG | Method for transporting data over a data connection and network component |
CN101610421B (en) * | 2008-06-17 | 2011-12-21 | 华为终端有限公司 | Video communication method, video communication device and video communication system |
US20100070858A1 (en) * | 2008-09-12 | 2010-03-18 | At&T Intellectual Property I, L.P. | Interactive Media System and Method Using Context-Based Avatar Configuration |
JP5423379B2 (en) * | 2009-08-31 | 2014-02-19 | Sony Corporation | Image processing apparatus, image processing method, and program |
US8694899B2 (en) * | 2010-06-01 | 2014-04-08 | Apple Inc. | Avatars reflecting user states |
US20110304629A1 (en) * | 2010-06-09 | 2011-12-15 | Microsoft Corporation | Real-time animation of facial expressions |
CN102087750A (en) * | 2010-06-13 | 2011-06-08 | 湖南宏梦信息科技有限公司 | Method for manufacturing cartoon special effect |
US20120058747A1 (en) * | 2010-09-08 | 2012-03-08 | James Yiannios | Method For Communicating and Displaying Interactive Avatar |
US8638364B2 (en) * | 2010-09-23 | 2014-01-28 | Sony Computer Entertainment Inc. | User interface system and method using thermal imaging |
US8665307B2 (en) * | 2011-02-11 | 2014-03-04 | Tangome, Inc. | Augmenting a video conference |
US9330483B2 (en) | 2011-04-11 | 2016-05-03 | Intel Corporation | Avatar facial expression techniques |
US20130004028A1 (en) * | 2011-06-28 | 2013-01-03 | Jones Michael J | Method for Filtering Using Block-Gabor Filters for Determining Descriptors for Images |
- 2011
  - 2011-12-29 US US13/993,612 patent/US9398262B2/en active Active
  - 2011-12-29 CN CN201710066013.2A patent/CN106961621A/en active Pending
  - 2011-12-29 WO PCT/CN2011/084902 patent/WO2013097139A1/en active Application Filing
  - 2011-12-29 CN CN201180075926.1A patent/CN104115503A/en active Pending
- 2012
  - 2012-04-09 US US13/996,230 patent/US20140055554A1/en not_active Abandoned
  - 2012-04-09 CN CN201280064807.0A patent/CN104011738A/en active Pending
  - 2012-04-09 WO PCT/CN2012/000461 patent/WO2013097264A1/en active Application Filing
- 2016
  - 2016-06-16 US US15/184,409 patent/US20170054945A1/en not_active Abandoned
  - 2016-12-30 US US15/395,661 patent/US20170111616A1/en not_active Abandoned
  - 2016-12-30 US US15/395,657 patent/US20170111615A1/en not_active Abandoned
- 2017
  - 2017-07-07 US US15/643,984 patent/US20170310934A1/en not_active Abandoned
Cited By (350)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11425068B2 (en) | 2009-02-03 | 2022-08-23 | Snap Inc. | Interactive avatar in messaging environment |
US11869165B2 (en) | 2010-04-07 | 2024-01-09 | Apple Inc. | Avatar editing environment |
US11481988B2 (en) | 2010-04-07 | 2022-10-25 | Apple Inc. | Avatar editing environment |
US20130257876A1 (en) * | 2012-03-30 | 2013-10-03 | Videx, Inc. | Systems and Methods for Providing An Interactive Avatar |
US10702773B2 (en) * | 2012-03-30 | 2020-07-07 | Videx, Inc. | Systems and methods for providing an interactive avatar |
US11595617B2 (en) | 2012-04-09 | 2023-02-28 | Intel Corporation | Communication using interactive avatars |
US9357174B2 (en) | 2012-04-09 | 2016-05-31 | Intel Corporation | System and method for avatar management and selection |
US11303850B2 (en) | 2012-04-09 | 2022-04-12 | Intel Corporation | Communication using interactive avatars |
US11229849B2 (en) | 2012-05-08 | 2022-01-25 | Snap Inc. | System and method for generating and displaying avatars |
US11925869B2 (en) | 2012-05-08 | 2024-03-12 | Snap Inc. | System and method for generating and displaying avatars |
US11607616B2 (en) | 2012-05-08 | 2023-03-21 | Snap Inc. | System and method for generating and displaying avatars |
US20180003983A1 (en) * | 2012-09-12 | 2018-01-04 | Sony Corporation | Image display device, image display method, and recording medium |
US9460541B2 (en) | 2013-03-29 | 2016-10-04 | Intel Corporation | Avatar animation, social networking and touch screen applications |
US9948887B2 (en) | 2013-08-09 | 2018-04-17 | Samsung Electronics Co., Ltd. | Hybrid visual communication |
US9998705B2 (en) * | 2013-08-09 | 2018-06-12 | Samsung Electronics Co., Ltd. | Hybrid visual communication |
US20160353056A1 (en) * | 2013-08-09 | 2016-12-01 | Samsung Electronics Co., Ltd. | Hybrid visual communication |
US11443772B2 (en) | 2014-02-05 | 2022-09-13 | Snap Inc. | Method for triggering events in a video |
US11651797B2 (en) | 2014-02-05 | 2023-05-16 | Snap Inc. | Real time video processing for changing proportions of an object in the video |
US10991395B1 (en) | 2014-02-05 | 2021-04-27 | Snap Inc. | Method for real time video processing involving changing a color of an object on a human face in a video |
JP2015172883A (en) * | 2014-03-12 | 2015-10-01 | 株式会社コナミデジタルエンタテインメント | Terminal equipment, information communication method, and information communication program |
US20160062987A1 (en) * | 2014-08-26 | 2016-03-03 | Ncr Corporation | Language independent customer communications |
CN104301655A (en) * | 2014-10-29 | 2015-01-21 | 四川智诚天逸科技有限公司 | Eye-tracking video communication device |
US11295502B2 (en) | 2014-12-23 | 2022-04-05 | Intel Corporation | Augmented facial animation |
US10419497B2 (en) * | 2015-03-31 | 2019-09-17 | Bose Corporation | Establishing communication between digital media servers and audio playback devices in audio systems |
US20170046065A1 (en) * | 2015-04-07 | 2017-02-16 | Intel Corporation | Avatar keyboard |
US20170069124A1 (en) * | 2015-04-07 | 2017-03-09 | Intel Corporation | Avatar generation and animations |
US20180211096A1 (en) * | 2015-06-30 | 2018-07-26 | Beijing Kuangshi Technology Co., Ltd. | Living-body detection method and device and computer program product |
US11887231B2 (en) | 2015-12-18 | 2024-01-30 | Tahoe Research, Ltd. | Avatar animation system |
US11048916B2 (en) | 2016-03-31 | 2021-06-29 | Snap Inc. | Automated avatar generation |
US11631276B2 (en) | 2016-03-31 | 2023-04-18 | Snap Inc. | Automated avatar generation |
CN107333086A (en) * | 2016-04-29 | 2017-11-07 | 掌赢信息科技(上海)有限公司 | A kind of method and device that video communication is carried out in virtual scene |
US11662900B2 (en) | 2016-05-31 | 2023-05-30 | Snap Inc. | Application control using a gesture based trigger |
US10602053B2 (en) | 2016-06-12 | 2020-03-24 | Apple Inc. | User interface for camera effects |
US11641517B2 (en) | 2016-06-12 | 2023-05-02 | Apple Inc. | User interface for camera effects |
US11165949B2 (en) | 2016-06-12 | 2021-11-02 | Apple Inc. | User interface for capturing photos with different camera magnifications |
US11245837B2 (en) | 2016-06-12 | 2022-02-08 | Apple Inc. | User interface for camera effects |
US11962889B2 (en) | 2016-06-12 | 2024-04-16 | Apple Inc. | User interface for camera effects |
US10984569B2 (en) | 2016-06-30 | 2021-04-20 | Snap Inc. | Avatar based ideogram generation |
EP4254342A3 (en) * | 2016-06-30 | 2023-12-06 | Snap Inc. | Avatar based ideogram generation |
US10360708B2 (en) | 2016-06-30 | 2019-07-23 | Snap Inc. | Avatar based ideogram generation |
WO2018006053A1 (en) * | 2016-06-30 | 2018-01-04 | Snapchat, Inc. | Avatar based ideogram generation |
US11509615B2 (en) | 2016-07-19 | 2022-11-22 | Snap Inc. | Generating customized electronic messaging graphics |
US10848446B1 (en) | 2016-07-19 | 2020-11-24 | Snap Inc. | Displaying customized electronic messaging graphics |
US11438288B2 (en) | 2016-07-19 | 2022-09-06 | Snap Inc. | Displaying customized electronic messaging graphics |
US10855632B2 (en) | 2016-07-19 | 2020-12-01 | Snap Inc. | Displaying customized electronic messaging graphics |
US11418470B2 (en) | 2016-07-19 | 2022-08-16 | Snap Inc. | Displaying customized electronic messaging graphics |
US20180027307A1 (en) * | 2016-07-25 | 2018-01-25 | Yahoo!, Inc. | Emotional reaction sharing |
US10573048B2 (en) * | 2016-07-25 | 2020-02-25 | Oath Inc. | Emotional reaction sharing |
US11962598B2 (en) | 2016-10-10 | 2024-04-16 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US11438341B1 (en) | 2016-10-10 | 2022-09-06 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US11100311B2 (en) | 2016-10-19 | 2021-08-24 | Snap Inc. | Neural networks for facial modeling |
US11218433B2 (en) | 2016-10-24 | 2022-01-04 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US11580700B2 (en) | 2016-10-24 | 2023-02-14 | Snap Inc. | Augmented reality object manipulation |
US10880246B2 (en) | 2016-10-24 | 2020-12-29 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US11843456B2 (en) | 2016-10-24 | 2023-12-12 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US10938758B2 (en) | 2016-10-24 | 2021-03-02 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US11876762B1 (en) | 2016-10-24 | 2024-01-16 | Snap Inc. | Generating and displaying customized avatars in media overlays |
WO2018128996A1 (en) * | 2017-01-03 | 2018-07-12 | Clipo, Inc. | System and method for facilitating dynamic avatar based on real-time facial expression detection |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US11704878B2 (en) | 2017-01-09 | 2023-07-18 | Snap Inc. | Surface aware lens |
US11989809B2 (en) | 2017-01-16 | 2024-05-21 | Snap Inc. | Coded vision system |
US11544883B1 (en) | 2017-01-16 | 2023-01-03 | Snap Inc. | Coded vision system |
US11991130B2 (en) | 2017-01-18 | 2024-05-21 | Snap Inc. | Customized contextual media content item generation |
US10951562B2 (en) | 2017-01-18 | 2021-03-16 | Snap Inc. | Customized contextual media content item generation |
US11870743B1 (en) | 2017-01-23 | 2024-01-09 | Snap Inc. | Customized digital avatar accessories |
US11231591B2 (en) * | 2017-02-24 | 2022-01-25 | Sony Corporation | Information processing apparatus, information processing method, and program |
US11069103B1 (en) | 2017-04-20 | 2021-07-20 | Snap Inc. | Customized user interface for electronic communications |
US11593980B2 (en) | 2017-04-20 | 2023-02-28 | Snap Inc. | Customized user interface for electronic communications |
US11474663B2 (en) | 2017-04-27 | 2022-10-18 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US11392264B1 (en) | 2017-04-27 | 2022-07-19 | Snap Inc. | Map-based graphical user interface for multi-type social media galleries |
US11418906B2 (en) | 2017-04-27 | 2022-08-16 | Snap Inc. | Selective location-based identity communication |
US11842411B2 (en) | 2017-04-27 | 2023-12-12 | Snap Inc. | Location-based virtual avatars |
US11782574B2 (en) | 2017-04-27 | 2023-10-10 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US11451956B1 (en) | 2017-04-27 | 2022-09-20 | Snap Inc. | Location privacy management on map-based social media platforms |
US11995288B2 (en) | 2017-04-27 | 2024-05-28 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US10952013B1 (en) | 2017-04-27 | 2021-03-16 | Snap Inc. | Selective location-based identity communication |
US11385763B2 (en) | 2017-04-27 | 2022-07-12 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US11893647B2 (en) | 2017-04-27 | 2024-02-06 | Snap Inc. | Location-based virtual avatars |
US10963529B1 (en) | 2017-04-27 | 2021-03-30 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US11830209B2 (en) | 2017-05-26 | 2023-11-28 | Snap Inc. | Neural network-based image stream modification |
US11687224B2 (en) | 2017-06-04 | 2023-06-27 | Apple Inc. | User interface camera effects |
US11204692B2 (en) | 2017-06-04 | 2021-12-21 | Apple Inc. | User interface camera effects |
US10528243B2 (en) | 2017-06-04 | 2020-01-07 | Apple Inc. | User interface camera effects |
US11122094B2 (en) | 2017-07-28 | 2021-09-14 | Snap Inc. | Software application manager for messaging applications |
US11659014B2 (en) | 2017-07-28 | 2023-05-23 | Snap Inc. | Software application manager for messaging applications |
US11882162B2 (en) | 2017-07-28 | 2024-01-23 | Snap Inc. | Software application manager for messaging applications |
US11610354B2 (en) | 2017-10-26 | 2023-03-21 | Snap Inc. | Joint audio-video facial animation system |
US11120597B2 (en) | 2017-10-26 | 2021-09-14 | Snap Inc. | Joint audio-video facial animation system |
US20220284650A1 (en) * | 2017-10-30 | 2022-09-08 | Snap Inc. | Animated chat presence |
CN111344745A (en) * | 2017-10-30 | 2020-06-26 | 斯纳普公司 | Animated chat presentation |
US10657695B2 (en) * | 2017-10-30 | 2020-05-19 | Snap Inc. | Animated chat presence |
US20190130629A1 (en) * | 2017-10-30 | 2019-05-02 | Snap Inc. | Animated chat presence |
US11030789B2 (en) | 2017-10-30 | 2021-06-08 | Snap Inc. | Animated chat presence |
US11930055B2 (en) | 2017-10-30 | 2024-03-12 | Snap Inc. | Animated chat presence |
KR102401392B1 (en) * | 2017-10-30 | 2022-05-24 | Snap Inc. | Animated Chat Presence |
KR20200049895A (en) * | 2017-10-30 | 2020-05-08 | Snap Inc. | Animated Chat Presence |
US11706267B2 (en) * | 2017-10-30 | 2023-07-18 | Snap Inc. | Animated chat presence |
US11354843B2 (en) | 2017-10-30 | 2022-06-07 | Snap Inc. | Animated chat presence |
US11460974B1 (en) | 2017-11-28 | 2022-10-04 | Snap Inc. | Content discovery refresh |
US10936157B2 (en) | 2017-11-29 | 2021-03-02 | Snap Inc. | Selectable item including a customized graphic for an electronic messaging application |
US11411895B2 (en) | 2017-11-29 | 2022-08-09 | Snap Inc. | Generating aggregated media content items for a group of users in an electronic messaging application |
US10244208B1 (en) * | 2017-12-12 | 2019-03-26 | Facebook, Inc. | Systems and methods for visually representing users in communication applications |
US10839563B2 (en) | 2017-12-20 | 2020-11-17 | Samsung Electronics Co., Ltd. | Method and apparatus for processing image interaction |
US11769259B2 (en) | 2018-01-23 | 2023-09-26 | Snap Inc. | Region-based stabilized face tracking |
US10949648B1 (en) | 2018-01-23 | 2021-03-16 | Snap Inc. | Region-based stabilized face tracking |
KR102661019B1 (en) * | 2018-02-23 | 2024-04-26 | Samsung Electronics Co., Ltd. | Electronic device providing image including 3D avatar in which motion of face is reflected by using 3D avatar corresponding to face and method for operating thereof |
KR20190101835A (en) * | 2018-02-23 | 2019-09-02 | Samsung Electronics Co., Ltd. | Electronic device providing image including 3D avatar in which motion of face is reflected by using 3D avatar corresponding to face and method for operating thereof |
US11798246B2 (en) | 2018-02-23 | 2023-10-24 | Samsung Electronics Co., Ltd. | Electronic device for generating image including 3D avatar reflecting face motion through 3D avatar corresponding to face and method of operating same |
US11120601B2 (en) | 2018-02-28 | 2021-09-14 | Snap Inc. | Animated expressive icon |
US11688119B2 (en) | 2018-02-28 | 2023-06-27 | Snap Inc. | Animated expressive icon |
US11880923B2 (en) | 2018-02-28 | 2024-01-23 | Snap Inc. | Animated expressive icon |
US11468618B2 (en) | 2018-02-28 | 2022-10-11 | Snap Inc. | Animated expressive icon |
US11523159B2 (en) | 2018-02-28 | 2022-12-06 | Snap Inc. | Generating media content items based on location information |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
US11310176B2 (en) | 2018-04-13 | 2022-04-19 | Snap Inc. | Content suggestion system |
US11875439B2 (en) | 2018-04-18 | 2024-01-16 | Snap Inc. | Augmented expression system |
US11722764B2 (en) | 2018-05-07 | 2023-08-08 | Apple Inc. | Creative camera |
US10325417B1 (en) | 2018-05-07 | 2019-06-18 | Apple Inc. | Avatar creation user interface |
US11682182B2 (en) | 2018-05-07 | 2023-06-20 | Apple Inc. | Avatar creation user interface |
US11380077B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Avatar creation user interface |
CN112042182A (en) * | 2018-05-07 | 2020-12-04 | 谷歌有限责任公司 | Manipulating remote avatars by facial expressions |
US10580221B2 (en) | 2018-05-07 | 2020-03-03 | Apple Inc. | Avatar creation user interface |
US10523879B2 (en) | 2018-05-07 | 2019-12-31 | Apple Inc. | Creative camera |
US10410434B1 (en) | 2018-05-07 | 2019-09-10 | Apple Inc. | Avatar creation user interface |
US10325416B1 (en) | 2018-05-07 | 2019-06-18 | Apple Inc. | Avatar creation user interface |
US10861248B2 (en) | 2018-05-07 | 2020-12-08 | Apple Inc. | Avatar creation user interface |
US11178335B2 (en) * | 2018-05-07 | 2021-11-16 | Apple Inc. | Creative camera |
US10375313B1 (en) * | 2018-05-07 | 2019-08-06 | Apple Inc. | Creative camera |
US11443462B2 (en) * | 2018-05-23 | 2022-09-13 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for generating cartoon face image, and computer storage medium |
US20190371039A1 (en) * | 2018-06-05 | 2019-12-05 | UBTECH Robotics Corp. | Method and smart terminal for switching expression of smart terminal |
US11699293B2 (en) | 2018-06-11 | 2023-07-11 | Fotonation Limited | Neural network image processing apparatus |
US11314324B2 (en) * | 2018-06-11 | 2022-04-26 | Fotonation Limited | Neural network image processing apparatus |
US11074675B2 (en) | 2018-07-31 | 2021-07-27 | Snap Inc. | Eye texture inpainting |
US11715268B2 (en) | 2018-08-30 | 2023-08-01 | Snap Inc. | Video clip object tracking |
US11030813B2 (en) | 2018-08-30 | 2021-06-08 | Snap Inc. | Video clip object tracking |
US11468625B2 (en) | 2018-09-11 | 2022-10-11 | Apple Inc. | User interfaces for simulated depth effects |
US10896534B1 (en) | 2018-09-19 | 2021-01-19 | Snap Inc. | Avatar style transformation using neural networks |
US11348301B2 (en) | 2018-09-19 | 2022-05-31 | Snap Inc. | Avatar style transformation using neural networks |
US11868590B2 (en) | 2018-09-25 | 2024-01-09 | Snap Inc. | Interface to display shared user groups |
US11294545B2 (en) | 2018-09-25 | 2022-04-05 | Snap Inc. | Interface to display shared user groups |
US10895964B1 (en) | 2018-09-25 | 2021-01-19 | Snap Inc. | Interface to display shared user groups |
US11455082B2 (en) | 2018-09-28 | 2022-09-27 | Snap Inc. | Collaborative achievement interface |
US11171902B2 (en) | 2018-09-28 | 2021-11-09 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US11321857B2 (en) | 2018-09-28 | 2022-05-03 | Apple Inc. | Displaying and editing images with depth information |
US11704005B2 (en) | 2018-09-28 | 2023-07-18 | Snap Inc. | Collaborative achievement interface |
US11895391B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Capturing and displaying images with multiple focal planes |
US11245658B2 (en) | 2018-09-28 | 2022-02-08 | Snap Inc. | System and method of generating private notifications between users in a communication session |
US11824822B2 (en) | 2018-09-28 | 2023-11-21 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US11189070B2 (en) | 2018-09-28 | 2021-11-30 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
US11128792B2 (en) | 2018-09-28 | 2021-09-21 | Apple Inc. | Capturing and displaying images with multiple focal planes |
US11669985B2 (en) | 2018-09-28 | 2023-06-06 | Apple Inc. | Displaying and editing images with depth information |
US11477149B2 (en) | 2018-09-28 | 2022-10-18 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US10904181B2 (en) | 2018-09-28 | 2021-01-26 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US11610357B2 (en) | 2018-09-28 | 2023-03-21 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
US11321896B2 (en) | 2018-10-31 | 2022-05-03 | Snap Inc. | 3D avatar rendering |
US10872451B2 (en) | 2018-10-31 | 2020-12-22 | Snap Inc. | 3D avatar rendering |
US11103795B1 (en) | 2018-10-31 | 2021-08-31 | Snap Inc. | Game drawer |
US11176737B2 (en) | 2018-11-27 | 2021-11-16 | Snap Inc. | Textured mesh building |
US11836859B2 (en) | 2018-11-27 | 2023-12-05 | Snap Inc. | Textured mesh building |
US11620791B2 (en) | 2018-11-27 | 2023-04-04 | Snap Inc. | Rendering 3D captions within real-world environments |
US20220044479A1 (en) | 2018-11-27 | 2022-02-10 | Snap Inc. | Textured mesh building |
US12020377B2 (en) | 2018-11-27 | 2024-06-25 | Snap Inc. | Textured mesh building |
US11887237B2 (en) | 2018-11-28 | 2024-01-30 | Snap Inc. | Dynamic composite user identifier |
US10902661B1 (en) | 2018-11-28 | 2021-01-26 | Snap Inc. | Dynamic composite user identifier |
US11783494B2 (en) | 2018-11-30 | 2023-10-10 | Snap Inc. | Efficient human pose tracking in videos |
US11315259B2 (en) | 2018-11-30 | 2022-04-26 | Snap Inc. | Efficient human pose tracking in videos |
US10861170B1 (en) | 2018-11-30 | 2020-12-08 | Snap Inc. | Efficient human pose tracking in videos |
US11698722B2 (en) | 2018-11-30 | 2023-07-11 | Snap Inc. | Generating customized avatars based on location information |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11055514B1 (en) | 2018-12-14 | 2021-07-06 | Snap Inc. | Image face manipulation |
US11798261B2 (en) | 2018-12-14 | 2023-10-24 | Snap Inc. | Image face manipulation |
US11516173B1 (en) | 2018-12-26 | 2022-11-29 | Snap Inc. | Message composition interface |
US11877211B2 (en) | 2019-01-14 | 2024-01-16 | Snap Inc. | Destination sharing in location sharing system |
US11032670B1 (en) | 2019-01-14 | 2021-06-08 | Snap Inc. | Destination sharing in location sharing system |
US11751015B2 (en) | 2019-01-16 | 2023-09-05 | Snap Inc. | Location-based context information sharing in a messaging system |
US10945098B2 (en) | 2019-01-16 | 2021-03-09 | Snap Inc. | Location-based context information sharing in a messaging system |
US10939246B1 (en) | 2019-01-16 | 2021-03-02 | Snap Inc. | Location-based context information sharing in a messaging system |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US11693887B2 (en) | 2019-01-30 | 2023-07-04 | Snap Inc. | Adaptive spatial density based clustering |
US10666902B1 (en) | 2019-01-30 | 2020-05-26 | Microsoft Technology Licensing, Llc | Display conflict elimination in videoconferencing |
US10984575B2 (en) | 2019-02-06 | 2021-04-20 | Snap Inc. | Body pose estimation |
US11714524B2 (en) | 2019-02-06 | 2023-08-01 | Snap Inc. | Global event-based avatar |
US11010022B2 (en) | 2019-02-06 | 2021-05-18 | Snap Inc. | Global event-based avatar |
US11557075B2 (en) | 2019-02-06 | 2023-01-17 | Snap Inc. | Body pose estimation |
US11809624B2 (en) | 2019-02-13 | 2023-11-07 | Snap Inc. | Sleep detection in a location sharing system |
US11275439B2 (en) | 2019-02-13 | 2022-03-15 | Snap Inc. | Sleep detection in a location sharing system |
US10936066B1 (en) | 2019-02-13 | 2021-03-02 | Snap Inc. | Sleep detection in a location sharing system |
US11574431B2 (en) | 2019-02-26 | 2023-02-07 | Snap Inc. | Avatar based on weather |
US10964082B2 (en) | 2019-02-26 | 2021-03-30 | Snap Inc. | Avatar based on weather |
US11301117B2 (en) | 2019-03-08 | 2022-04-12 | Snap Inc. | Contextual information in chat |
US10852918B1 (en) | 2019-03-08 | 2020-12-01 | Snap Inc. | Contextual information in chat |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11166123B1 (en) | 2019-03-28 | 2021-11-02 | Snap Inc. | Grouped transmission of location data in a location sharing system |
US11039270B2 (en) | 2019-03-28 | 2021-06-15 | Snap Inc. | Points of interest in a location sharing system |
US11638115B2 (en) | 2019-03-28 | 2023-04-25 | Snap Inc. | Points of interest in a location sharing system |
US10992619B2 (en) | 2019-04-30 | 2021-04-27 | Snap Inc. | Messaging system with avatar generation |
US11973732B2 (en) | 2019-04-30 | 2024-04-30 | Snap Inc. | Messaging system with avatar generation |
US10970909B2 (en) | 2019-04-30 | 2021-04-06 | Beihang University | Method and apparatus for eye movement synthesis |
CN110174942A (en) * | 2019-04-30 | 2019-08-27 | 北京航空航天大学 | Eye movement synthetic method and device |
US10735643B1 (en) | 2019-05-06 | 2020-08-04 | Apple Inc. | User interfaces for capturing and managing visual media |
US10652470B1 (en) | 2019-05-06 | 2020-05-12 | Apple Inc. | User interfaces for capturing and managing visual media |
US10681282B1 (en) | 2019-05-06 | 2020-06-09 | Apple Inc. | User interfaces for capturing and managing visual media |
US11706521B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | User interfaces for capturing and managing visual media |
US11223771B2 (en) | 2019-05-06 | 2022-01-11 | Apple Inc. | User interfaces for capturing and managing visual media |
US10735642B1 (en) | 2019-05-06 | 2020-08-04 | Apple Inc. | User interfaces for capturing and managing visual media |
US10674072B1 (en) | 2019-05-06 | 2020-06-02 | Apple Inc. | User interfaces for capturing and managing visual media |
US10791273B1 (en) | 2019-05-06 | 2020-09-29 | Apple Inc. | User interfaces for capturing and managing visual media |
US11770601B2 (en) | 2019-05-06 | 2023-09-26 | Apple Inc. | User interfaces for capturing and managing visual media |
US10645294B1 (en) | 2019-05-06 | 2020-05-05 | Apple Inc. | User interfaces for capturing and managing visual media |
USD916810S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916871S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916872S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916811S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916809S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
US11917495B2 (en) | 2019-06-07 | 2024-02-27 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11601783B2 (en) | 2019-06-07 | 2023-03-07 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US10893385B1 (en) | 2019-06-07 | 2021-01-12 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11676199B2 (en) | 2019-06-28 | 2023-06-13 | Snap Inc. | Generating customizable avatar outfits |
US11189098B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | 3D object camera customization system |
US11188190B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | Generating animation overlays in a communication session |
US11443491B2 (en) | 2019-06-28 | 2022-09-13 | Snap Inc. | 3D object camera customization system |
US11823341B2 (en) | 2019-06-28 | 2023-11-21 | Snap Inc. | 3D object camera customization system |
US11714535B2 (en) | 2019-07-11 | 2023-08-01 | Snap Inc. | Edge gesture interface with smart interactions |
US11307747B2 (en) | 2019-07-11 | 2022-04-19 | Snap Inc. | Edge gesture interface with smart interactions |
US11455081B2 (en) | 2019-08-05 | 2022-09-27 | Snap Inc. | Message thread prioritization interface |
US10911387B1 (en) | 2019-08-12 | 2021-02-02 | Snap Inc. | Message reminder interface |
US11588772B2 (en) | 2019-08-12 | 2023-02-21 | Snap Inc. | Message reminder interface |
US11956192B2 (en) | 2019-08-12 | 2024-04-09 | Snap Inc. | Message reminder interface |
US11320969B2 (en) | 2019-09-16 | 2022-05-03 | Snap Inc. | Messaging system with battery level sharing |
US11662890B2 (en) | 2019-09-16 | 2023-05-30 | Snap Inc. | Messaging system with battery level sharing |
US11822774B2 (en) | 2019-09-16 | 2023-11-21 | Snap Inc. | Messaging system with battery level sharing |
US11425062B2 (en) | 2019-09-27 | 2022-08-23 | Snap Inc. | Recommended content viewed by friends |
US11676320B2 (en) | 2019-09-30 | 2023-06-13 | Snap Inc. | Dynamic media collection generation |
US11270491B2 (en) | 2019-09-30 | 2022-03-08 | Snap Inc. | Dynamic parameterized user avatar stories |
US11080917B2 (en) | 2019-09-30 | 2021-08-03 | Snap Inc. | Dynamic parameterized user avatar stories |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
US11063891B2 (en) | 2019-12-03 | 2021-07-13 | Snap Inc. | Personalized avatar notification |
US11563702B2 (en) | 2019-12-03 | 2023-01-24 | Snap Inc. | Personalized avatar notification |
US11128586B2 (en) | 2019-12-09 | 2021-09-21 | Snap Inc. | Context sensitive avatar captions |
US11582176B2 (en) | 2019-12-09 | 2023-02-14 | Snap Inc. | Context sensitive avatar captions |
US11036989B1 (en) | 2019-12-11 | 2021-06-15 | Snap Inc. | Skeletal tracking using previous frames |
US11594025B2 (en) | 2019-12-11 | 2023-02-28 | Snap Inc. | Skeletal tracking using previous frames |
US11636657B2 (en) | 2019-12-19 | 2023-04-25 | Snap Inc. | 3D captions with semantic graphical elements |
US11908093B2 (en) | 2019-12-19 | 2024-02-20 | Snap Inc. | 3D captions with semantic graphical elements |
US11227442B1 (en) | 2019-12-19 | 2022-01-18 | Snap Inc. | 3D captions with semantic graphical elements |
US11810220B2 (en) | 2019-12-19 | 2023-11-07 | Snap Inc. | 3D captions with face tracking |
US11263817B1 (en) | 2019-12-19 | 2022-03-01 | Snap Inc. | 3D captions with face tracking |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11140515B1 (en) | 2019-12-30 | 2021-10-05 | Snap Inc. | Interfaces for relative device positioning |
US11169658B2 (en) | 2019-12-31 | 2021-11-09 | Snap Inc. | Combined map icon with action indicator |
US11893208B2 (en) | 2019-12-31 | 2024-02-06 | Snap Inc. | Combined map icon with action indicator |
US11263254B2 (en) | 2020-01-30 | 2022-03-01 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11651022B2 (en) | 2020-01-30 | 2023-05-16 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11831937B2 (en) | 2020-01-30 | 2023-11-28 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUS |
US11991419B2 (en) | 2020-01-30 | 2024-05-21 | Snap Inc. | Selecting avatars to be included in the video being generated on demand |
US11036781B1 (en) | 2020-01-30 | 2021-06-15 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11356720B2 (en) | 2020-01-30 | 2022-06-07 | Snap Inc. | Video generation system to render frames on demand |
US11284144B2 (en) | 2020-01-30 | 2022-03-22 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUs |
US11651539B2 (en) | 2020-01-30 | 2023-05-16 | Snap Inc. | System for generating media content items on demand |
US11729441B2 (en) | 2020-01-30 | 2023-08-15 | Snap Inc. | Video generation system to render frames on demand |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11217020B2 (en) | 2020-03-16 | 2022-01-04 | Snap Inc. | 3D cutout image modification |
US11775165B2 (en) | 2020-03-16 | 2023-10-03 | Snap Inc. | 3D cutout image modification |
US11978140B2 (en) | 2020-03-30 | 2024-05-07 | Snap Inc. | Personalized media overlay recommendation |
US11625873B2 (en) | 2020-03-30 | 2023-04-11 | Snap Inc. | Personalized media overlay recommendation |
US11818286B2 (en) | 2020-03-30 | 2023-11-14 | Snap Inc. | Avatar recommendation and reply |
US11969075B2 (en) | 2020-03-31 | 2024-04-30 | Snap Inc. | Augmented reality beauty product tutorials |
US11956190B2 (en) | 2020-05-08 | 2024-04-09 | Snap Inc. | Messaging system with a carousel of related entities |
US11061372B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | User interfaces related to time |
US12008230B2 (en) | 2020-05-11 | 2024-06-11 | Apple Inc. | User interfaces related to time with an editable background |
US11442414B2 (en) | 2020-05-11 | 2022-09-13 | Apple Inc. | User interfaces related to time |
US11822778B2 (en) | 2020-05-11 | 2023-11-21 | Apple Inc. | User interfaces related to time |
US11921998B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Editing features of an avatar |
US11054973B1 (en) | 2020-06-01 | 2021-07-06 | Apple Inc. | User interfaces for managing media |
US11330184B2 (en) | 2020-06-01 | 2022-05-10 | Apple Inc. | User interfaces for managing media |
US11617022B2 (en) | 2020-06-01 | 2023-03-28 | Apple Inc. | User interfaces for managing media |
US11543939B2 (en) | 2020-06-08 | 2023-01-03 | Snap Inc. | Encoded image based messaging system |
US11922010B2 (en) | 2020-06-08 | 2024-03-05 | Snap Inc. | Providing contextual information with keyboard interface for messaging system |
US11822766B2 (en) | 2020-06-08 | 2023-11-21 | Snap Inc. | Encoded image based messaging system |
US11683280B2 (en) | 2020-06-10 | 2023-06-20 | Snap Inc. | Messaging system including an external-resource dock and drawer |
US11580682B1 (en) | 2020-06-30 | 2023-02-14 | Snap Inc. | Messaging system with augmented reality makeup |
US11863513B2 (en) | 2020-08-31 | 2024-01-02 | Snap Inc. | Media content playback and comments management |
US11893301B2 (en) | 2020-09-10 | 2024-02-06 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11360733B2 (en) | 2020-09-10 | 2022-06-14 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11452939B2 (en) | 2020-09-21 | 2022-09-27 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11833427B2 (en) | 2020-09-21 | 2023-12-05 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11888795B2 (en) | 2020-09-21 | 2024-01-30 | Snap Inc. | Chats with micro sound clips |
US11212449B1 (en) | 2020-09-25 | 2021-12-28 | Apple Inc. | User interfaces for media capture and management |
US11910269B2 (en) | 2020-09-25 | 2024-02-20 | Snap Inc. | Augmented reality content items including user avatar to share location |
US11615592B2 (en) | 2020-10-27 | 2023-03-28 | Snap Inc. | Side-by-side character animation from realtime 3D body motion capture |
US11660022B2 (en) | 2020-10-27 | 2023-05-30 | Snap Inc. | Adaptive skeletal joint smoothing |
US11734894B2 (en) | 2020-11-18 | 2023-08-22 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
US11450051B2 (en) | 2020-11-18 | 2022-09-20 | Snap Inc. | Personalized avatar real-time motion capture |
US12002175B2 (en) | 2020-11-18 | 2024-06-04 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
US11748931B2 (en) | 2020-11-18 | 2023-09-05 | Snap Inc. | Body animation sharing and remixing |
US12008811B2 (en) | 2020-12-30 | 2024-06-11 | Snap Inc. | Machine learning-based selection of a representative video frame within a messaging application |
US11790531B2 (en) | 2021-02-24 | 2023-10-17 | Snap Inc. | Whole body segmentation |
US11809633B2 (en) | 2021-03-16 | 2023-11-07 | Snap Inc. | Mirroring device with pointing based navigation |
US11798201B2 (en) | 2021-03-16 | 2023-10-24 | Snap Inc. | Mirroring device with whole-body outfits |
US11978283B2 (en) | 2021-03-16 | 2024-05-07 | Snap Inc. | Mirroring device with a hands-free mode |
US11908243B2 (en) | 2021-03-16 | 2024-02-20 | Snap Inc. | Menu hierarchy navigation on electronic mirroring devices |
US11734959B2 (en) | 2021-03-16 | 2023-08-22 | Snap Inc. | Activating hands-free mode on mirroring device |
US11544885B2 (en) | 2021-03-19 | 2023-01-03 | Snap Inc. | Augmented reality experience based on physical items |
US11562548B2 (en) | 2021-03-22 | 2023-01-24 | Snap Inc. | True size eyewear in real time |
US11418699B1 (en) | 2021-04-30 | 2022-08-16 | Apple Inc. | User interfaces for altering visual media |
US11416134B1 (en) | 2021-04-30 | 2022-08-16 | Apple Inc. | User interfaces for altering visual media |
US11778339B2 (en) | 2021-04-30 | 2023-10-03 | Apple Inc. | User interfaces for altering visual media |
US11539876B2 (en) | 2021-04-30 | 2022-12-27 | Apple Inc. | User interfaces for altering visual media |
US11350026B1 (en) | 2021-04-30 | 2022-05-31 | Apple Inc. | User interfaces for altering visual media |
US11636654B2 (en) | 2021-05-19 | 2023-04-25 | Snap Inc. | AR-based connected portal shopping |
US11941767B2 (en) | 2021-05-19 | 2024-03-26 | Snap Inc. | AR-based connected portal shopping |
US11776190B2 (en) | 2021-06-04 | 2023-10-03 | Apple Inc. | Techniques for managing an avatar on a lock screen |
US11941227B2 (en) | 2021-06-30 | 2024-03-26 | Snap Inc. | Hybrid search system for customizable media |
US11854069B2 (en) | 2021-07-16 | 2023-12-26 | Snap Inc. | Personalized try-on ads |
US12034680B2 (en) | 2021-08-09 | 2024-07-09 | Snap Inc. | User presence indication data management |
US11983462B2 (en) | 2021-08-31 | 2024-05-14 | Snap Inc. | Conversation guided augmented reality experience |
US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
US11670059B2 (en) | 2021-09-01 | 2023-06-06 | Snap Inc. | Controlling interactive fashion based on body gestures |
US11673054B2 (en) | 2021-09-07 | 2023-06-13 | Snap Inc. | Controlling AR games on fashion items |
US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
US11900506B2 (en) | 2021-09-09 | 2024-02-13 | Snap Inc. | Controlling interactive fashion based on facial expressions |
US11734866B2 (en) | 2021-09-13 | 2023-08-22 | Snap Inc. | Controlling interactive fashion based on voice |
US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
US11983826B2 (en) | 2021-09-30 | 2024-05-14 | Snap Inc. | 3D upper garment tracking |
US11636662B2 (en) | 2021-09-30 | 2023-04-25 | Snap Inc. | Body normal network light and rendering control |
US11651572B2 (en) | 2021-10-11 | 2023-05-16 | Snap Inc. | Light and rendering of garments |
US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
US11995757B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Customized animation from video |
US12020358B2 (en) | 2021-10-29 | 2024-06-25 | Snap Inc. | Animated custom sticker creation |
US11996113B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Voice notes with changing effects |
US11960784B2 (en) | 2021-12-07 | 2024-04-16 | Snap Inc. | Shared augmented reality unboxing experience |
US11748958B2 (en) | 2021-12-07 | 2023-09-05 | Snap Inc. | Augmented reality unboxing experience |
US11880947B2 (en) | 2021-12-21 | 2024-01-23 | Snap Inc. | Real-time upper-body garment exchange |
US11928783B2 (en) | 2021-12-30 | 2024-03-12 | Snap Inc. | AR position and orientation along a plane |
US11887260B2 (en) | 2021-12-30 | 2024-01-30 | Snap Inc. | AR position indicator |
US11823346B2 (en) | 2022-01-17 | 2023-11-21 | Snap Inc. | AR body part tracking system |
US11954762B2 (en) | 2022-01-19 | 2024-04-09 | Snap Inc. | Object replacement system |
US12002146B2 (en) | 2022-03-28 | 2024-06-04 | Snap Inc. | 3D modeling based on neural light field |
US12020384B2 (en) | 2022-06-21 | 2024-06-25 | Snap Inc. | Integrating augmented reality experiences with other components |
US12020386B2 (en) | 2022-06-23 | 2024-06-25 | Snap Inc. | Applying pregenerated virtual experiences in new location |
US11870745B1 (en) | 2022-06-28 | 2024-01-09 | Snap Inc. | Media gallery sharing and management |
US11893166B1 (en) | 2022-11-08 | 2024-02-06 | Snap Inc. | User avatar movement control using an augmented reality eyewear device |
EP4382182A1 (en) * | 2022-12-08 | 2024-06-12 | Sony Interactive Entertainment Europe Limited | Device and method for controlling a virtual avatar on an electronic device |
US12028301B2 (en) | 2023-01-31 | 2024-07-02 | Snap Inc. | Contextual generation and selection of customized media content |
US12033296B2 (en) | 2023-04-24 | 2024-07-09 | Apple Inc. | Avatar creation user interface |
Also Published As
Publication number | Publication date |
---|---|
US20140218459A1 (en) | 2014-08-07 |
US9398262B2 (en) | 2016-07-19 |
US20170111616A1 (en) | 2017-04-20 |
WO2013097264A1 (en) | 2013-07-04 |
CN106961621A (en) | 2017-07-18 |
US20170111615A1 (en) | 2017-04-20 |
US20170310934A1 (en) | 2017-10-26 |
WO2013097139A1 (en) | 2013-07-04 |
CN104011738A (en) | 2014-08-27 |
CN104115503A (en) | 2014-10-22 |
US20170054945A1 (en) | 2017-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170310934A1 (en) | System and method for communication using interactive avatar | |
US11595617B2 (en) | Communication using interactive avatars | |
US9936165B2 (en) | System and method for avatar creation and synchronization | |
US20140198121A1 (en) | System and method for avatar generation, rendering and animation | |
US9357174B2 (en) | System and method for avatar management and selection | |
US20150213604A1 (en) | Avatar-based video encoding | |
TWI583198B (en) | Communication using interactive avatars | |
TWI682669B (en) | Communication using interactive avatars | |
TW202107250A (en) | Communication using interactive avatars |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTEL CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DU, YANGZHOU;LI, WENLONG;TONG, XIAOFENG;AND OTHERS;REEL/FRAME:032297/0616 Effective date: 20130904 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |